This seems like a complete improvement (It can read my mind! 🤩), but it actually means that your results are always artificially narrowed: the AI can simply withhold whatever it likes, distort or hallucinate results, or fiddle with the order presented, and you might not notice. Even if you tell it precisely what you want to see, it can still ignore you and do as it prefers.

An algorithmic search is less opinionated. It just shows you what you asked for, and you narrow or widen the request until the results contain the information you find most useful. Using algorithmic search controls to generate a standardized prompt for an LLM semantic search seems like a useful middle way. We'll see.
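To make the middle way concrete, here is a minimal sketch of turning explicit search controls into a deterministic prompt for a semantic backend. All function and field names are hypothetical assumptions, not any particular tool's API:

```python
# Hypothetical sketch: map explicit search controls onto a standardized
# prompt for an LLM-backed semantic search. The same controls always
# produce the same prompt, so the LLM has less room to freelance.
def controls_to_prompt(query, filters):
    """Build a deterministic prompt string from explicit search controls."""
    parts = [f"Find documents matching: {query}"]
    # Sort so the prompt is stable regardless of dict construction order.
    for field, value in sorted(filters.items()):
        parts.append(f"Restrict {field} to: {value}")
    parts.append("Return every match. Do not omit, add, or reorder results.")
    return "\n".join(parts)

prompt = controls_to_prompt("rust async runtimes",
                            {"language": "en", "after": "2022"})
```

The point is that the user still operates familiar knobs; the prompt is just a serialization of them.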



Discussion

For clarification: it runs algorithmic search (always) and semantic search (where available) in parallel, then deduplicates the results.
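That pipeline can be sketched in a few lines. The two search functions here are stand-ins for whatever backends are actually wired up; only the parallel-then-deduplicate shape comes from the description above:

```python
import concurrent.futures

# Stand-in backends; real implementations would query an index / an LLM.
def algorithmic_search(q):
    return ["doc1", "doc2", "doc3"]

def semantic_search(q):
    return ["doc2", "doc4"]

def combined_search(q):
    # Run both searches in parallel.
    with concurrent.futures.ThreadPoolExecutor() as pool:
        algo = pool.submit(algorithmic_search, q)
        sem = pool.submit(semantic_search, q)
        # Deduplicate, keeping first occurrence. Algorithmic results come
        # first so exact matches keep their rank.
        seen, merged = set(), []
        for doc in algo.result() + sem.result():
            if doc not in seen:
                seen.add(doc)
                merged.append(doc)
    return merged

combined_search("q")  # → ["doc1", "doc2", "doc3", "doc4"]
```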

Semantic search will normally return *more* results, but it might be *missing* results and the results might be noisier.

Also, you shouldn't need heavy computation and/or access to a remote machine to find well-structured information stored locally. That is exactly what everyone is currently building, and it is absurd.

Another thing algorithmic search allows for is meandering through data sets.

With an LLM, you have this ping-pong discussion, where you refine your request and try again. Or you ask the LLM for a suggestion.

With algorithmic search, you slowly fiddle with the data set's structure (symbolized by the controls) and watch the results mutate in real time. That means you don't even have to know what you are looking for when you start out. You can simply peruse the selection, like wandering around a library and looking through the shelves.
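A toy sketch of that kind of meandering: every tweak of a control immediately re-filters the data set, with no query language and no round-trip to a model. The data and field names are invented for illustration:

```python
# Toy data set; in practice this would be whatever is indexed locally.
books = [
    {"title": "Dune",        "year": 1965, "topic": "sf"},
    {"title": "Neuromancer", "year": 1984, "topic": "sf"},
    {"title": "SICP",        "year": 1985, "topic": "programming"},
]

def browse(records, **controls):
    """Return the records matching every currently-set control.
    Each keyword argument models one UI control (a dropdown, a slider)."""
    return [r for r in records
            if all(r.get(field) == value for field, value in controls.items())]

wide = browse(books, topic="sf")               # wander the "sf" shelf
narrow = browse(books, topic="sf", year=1984)  # tighten one more control
```

Each call is cheap enough to rerun on every keystroke, which is what makes the browsing feel continuous rather than conversational.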