embedding-based anything-search has been a thing way longer than LLM-based search
Discussion
wdym?
tl;dr you feed images to an embedding model, get out embeddings (vectors, i.e. points in high-dimensional space), and later search by comparing those vectors, e.g. nearest-neighbor / cosine similarity
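something like this, as a minimal sketch using the sentence-transformers CLIP wrapper (the model name, image paths, and query file are just placeholders):

```python
from PIL import Image
from sentence_transformers import SentenceTransformer, util

# any CLIP-style encoder works; "clip-ViT-B-32" is one commonly available option
model = SentenceTransformer("clip-ViT-B-32")

# index time: embed every image once and keep the vectors around
paths = ["cat.jpg", "dog.jpg", "car.jpg"]
image_embs = model.encode([Image.open(p) for p in paths])

# query time: embed the query image the same way, rank by cosine similarity
query_emb = model.encode([Image.open("query.jpg")])
scores = util.cos_sim(query_emb, image_embs)[0]
best = int(scores.argmax())
print(f"closest match: {paths[best]} (score {float(scores[best]):.3f})")
```

at real index sizes you'd swap the brute-force cosine scan for an approximate nearest-neighbor index (FAISS, HNSW, etc.), but the shape of the pipeline is the same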
then you train a second model (a text encoder) to generate embeddings in the same space, but from captions instead of images
to do that, you usually take some images, get their embeddings from the image model, caption the images, and train the text model so each caption's embedding lands close to its image's embedding
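a toy sketch of that training step, assuming a frozen image encoder and precomputed (caption tokens, image embedding) pairs; the `TextEncoder` class and its shapes are hypothetical stand-ins, not any particular library's API:

```python
import torch
import torch.nn as nn

class TextEncoder(nn.Module):
    # tiny placeholder text tower: bag-of-tokens -> projection into image space
    def __init__(self, vocab_size=30000, dim=512):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab_size, dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, token_ids, offsets):
        return self.proj(self.emb(token_ids, offsets))

text_model = TextEncoder()
opt = torch.optim.AdamW(text_model.parameters(), lr=1e-4)

def train_step(token_ids, offsets, image_embs):
    # pull each caption's embedding toward its image's (frozen) embedding
    text_embs = text_model(token_ids, offsets)
    loss = 1 - nn.functional.cosine_similarity(text_embs, image_embs).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

note that CLIP-style models actually train both towers jointly with a contrastive loss over a batch instead of regressing onto frozen image embeddings, but the caption-to-image-embedding recipe above is the simplest version of the same idea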