Create your own image search (Python)

In this tutorial, we will create an image search: we enter a search term and a set of images, and rank the images by their similarity to the search term.


How does it work?

To compare a text term and an image, we first create a vector for each of them (in JSON/Python a vector is simply an array of numbers). Each vector has 512 numbers in it, and describes the text or image in the embedding space of the AI model.

When you have two vectors, the dot product of the vectors tells you how similar they are: the larger the dot product, the better the text and image match. If we then rank all the dot products highest to lowest, we know which images best match the keyword.
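As a concrete illustration of the ranking idea, here is the dot product computed with NumPy on made-up 3-number vectors (real embeddings have 512 numbers; the values below are invented for the example):

```python
import numpy as np

# toy 3-dimensional "embeddings" (values made up for illustration)
text_vec = np.array([0.2, 0.9, 0.1])
bike_vec = np.array([0.1, 0.8, 0.2])    # similar direction -> higher dot product
banana_vec = np.array([0.9, 0.1, 0.3])  # different direction -> lower dot product

print(np.dot(text_vec, bike_vec))    # 0.76
print(np.dot(text_vec, banana_vec))  # 0.3
```

Ranking the two results highest to lowest puts the bike-like vector first, which is exactly what the search below does across all the images.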

The code shown in this tutorial is available on GitHub.

Text search term

Let's choose to search for images with a bicycle in them. We create the vector for our search term using the Vision: Embed Text endpoint.

import requests, json
from dotenv import dotenv_values

# load the API key from the .env file
corcelKey = dotenv_values(".env")['corcel_apikey']

# URL of the /vision/embed_text endpoint
url = ""

payload = {
    "text_prompts": ["bicycle"]
}
headers = {
    "accept": "application/json",
    "content-type": "application/json",
    "Authorization": corcelKey
}

response = requests.post(url, json=payload, headers=headers)

jsonresponse = json.loads(response.text)
text_embedding = jsonresponse[0]['text_embeddings'][0]


This code uses the /vision/embed_text endpoint. We are using environment variables to keep our API key out of the code. To use your API key, create a file called ".env" in the same directory and place "corcel_apikey=<your key>" in the file.
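For reference, the .env file is just a plain-text key=value file; something like the following (with a placeholder standing in for your real key) is all it needs:

```shell
# .env — keep this file out of version control
corcel_apikey=YOUR_API_KEY_HERE
```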

The response is converted into JSON, and then the text embedding is extracted into text_embedding.
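To make the extraction step concrete, here is the shape the code above assumes, using a hypothetical response body with made-up values (the real endpoint returns 512 numbers per embedding):

```python
import json

# hypothetical response body: a list whose first element
# holds a "text_embeddings" list of vectors (values invented)
raw = '[{"text_embeddings": [[0.12, -0.03, 0.41]]}]'

jsonresponse = json.loads(raw)
text_embedding = jsonresponse[0]['text_embeddings'][0]

print(len(text_embedding))  # 3 in this toy example
```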


Let's grab a few images from Corcel for our search:

There are a few that are definitely bicycles, then some weird bicycles and things that only sort of look like bicycles, and a banana.

Let's collect all of these images, base64-encode them, and store the encodings in an array.

import requests
import base64

# the URLs of the images to search over
image_urls = [
    # (image URLs go here)
]

b64_images = []

for image in image_urls:
    response = requests.get(image)

    # Check if the request was successful (status code 200)
    if response.status_code == 200:
        # Convert the image content to base64 and store it
        base64_image = base64.b64encode(response.content).decode('utf-8')
        b64_images.append(base64_image)

Create the Vectors

We can send the array of encoded images to Corcel to create the vectors. The vectors are saved in the array image_embedding.

#encode the B64 images into vectors

# URL of the embed-image endpoint
url = ""

payload = {
    "image_b64s": b64_images
}
headers = {
    "accept": "application/json",
    "content-type": "application/json",
    "Authorization": corcelKey
}

response = requests.post(url, json=payload, headers=headers)

jsonresponse = json.loads(response.text)
image_embedding = jsonresponse[0]['image_embeddings']



We can now compare the text_embedding with the image embeddings. This does not require any API calls; we just compute the dot product of the text vector with each of the image vectors. Then we sort the results from highest to lowest and display each image alongside its dot product value.

import numpy as np
from IPython.display import display, Image

#create the text vector
textVector = np.array(text_embedding)

#store dot products in an array
dotProducts = []

#loop through the images, create the vector, do a dot product, and place the result in the dotProducts array
for imageVector in image_embedding:
    dotProd = np.dot(textVector, np.array(imageVector))
    dotProducts.append(dotProd)

#sort the dot products highest to lowest
sorted_indices = np.argsort(dotProducts)[::-1]

#from highest to lowest, show the dot product (with 3 digits after the decimal), and the image.
for index in sorted_indices:
    print(f"similarity : {dotProducts[index]:.3f}")
    display(Image(url=image_urls[index], width=400))

This displays all of the images. In this case, the order is:

  1. Elephant on a bike
  2. mountain biker
  3. lime tricycle
  4. Steampunk factory
  5. motorcycle
  6. banana

Addendum - modify to search for similar images

You can search for similar images in much the same way as above: take the vector of your search image and compare it to the vectors of the images you'd like to compare against.
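A minimal sketch of that idea, assuming you already have a list of image vectors from the embed-image endpoint and pick one of them as the query (the `rank_by_similarity` helper and the toy 4-number vectors below are invented for illustration):

```python
import numpy as np

def rank_by_similarity(query_vec, image_vectors):
    """Return indices of image_vectors, sorted from most to least similar."""
    sims = [np.dot(query_vec, v) for v in image_vectors]
    return np.argsort(sims)[::-1]

# toy example: three made-up 4-number "embeddings"
vectors = [np.array([1.0, 0.0, 0.0, 0.0]),
           np.array([0.9, 0.1, 0.0, 0.0]),
           np.array([0.0, 1.0, 0.0, 0.0])]

order = rank_by_similarity(vectors[0], vectors)
print(list(order))  # the query image itself ranks first: [0, 1, 2]
```

In practice you would drop the query image itself from the results (its similarity to itself is always the maximum) and display the remaining images in ranked order, just as in the text-search loop above.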