tl;dr you feed images to an embedding model and get back embeddings (points in a high-dimensional space), then later use those points to search
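
a minimal sketch of that flow, assuming a pretrained CLIP image encoder from Hugging Face `transformers` (the image filenames and the brute-force cosine search are just placeholders; any embedding model works the same way):

```python
import numpy as np
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# embed some images -> one vector (a point in high-dimensional space) per image
images = [Image.open(p) for p in ["cat.jpg", "dog.jpg", "car.jpg"]]  # hypothetical files
inputs = processor(images=images, return_tensors="pt")
with torch.no_grad():
    image_embs = model.get_image_features(**inputs)
image_embs = image_embs / image_embs.norm(dim=-1, keepdim=True)  # unit length
index = image_embs.numpy()  # the "search index" is just this array of points

# search: embed a query image the same way, rank by cosine similarity
query = processor(images=[Image.open("query.jpg")], return_tensors="pt")
with torch.no_grad():
    q = model.get_image_features(**query)
q = (q / q.norm(dim=-1, keepdim=True)).numpy()
scores = index @ q.T               # cosine similarity, since everything is normalised
print(np.argsort(-scores[:, 0]))   # indices of the most similar images first
```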

you train another model to generate embeddings in the same space, but with captions instead of images
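
continuing the same sketch: CLIP-style models ship the matching text encoder, so a caption can be embedded into the same space and compared against the stored image vectors (names carried over from the snippet above):

```python
# embed a caption with the paired text encoder; it lands in the same space
text_inputs = processor(text=["a photo of a cat"], return_tensors="pt", padding=True)
with torch.no_grad():
    text_emb = model.get_text_features(**text_inputs)
text_emb = (text_emb / text_emb.norm(dim=-1, keepdim=True)).numpy()

# text-to-image search is then the same cosine-similarity ranking as before
scores = index @ text_emb.T
print(np.argsort(-scores[:, 0]))  # images most similar to the caption first
```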

you usually take some images, get their embeddings, caption the images, and use those image–caption pairs to train the text model
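
a rough sketch of what that training step optimises, assuming a batch of matched (image, caption) pairs already encoded into vectors; the symmetric cross-entropy below is the CLIP-style contrastive loss, with the encoders themselves left out:

```python
import torch
import torch.nn.functional as F

def clip_style_loss(image_embs: torch.Tensor,
                    text_embs: torch.Tensor,
                    temperature: float = 0.07) -> torch.Tensor:
    """Pull each caption towards its own image, push it away from the others in the batch."""
    image_embs = F.normalize(image_embs, dim=-1)
    text_embs = F.normalize(text_embs, dim=-1)
    logits = image_embs @ text_embs.T / temperature  # (batch, batch) similarity matrix
    targets = torch.arange(len(logits))              # caption i belongs with image i
    loss_i = F.cross_entropy(logits, targets)        # images -> captions
    loss_t = F.cross_entropy(logits.T, targets)      # captions -> images
    return (loss_i + loss_t) / 2

# toy usage with random tensors standing in for encoder outputs
print(clip_style_loss(torch.randn(8, 512), torch.randn(8, 512)))
```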


Discussion

one example is CLIP and derivatives

OK, machine learning driven anyway