Nostr.land will soon only support LLM queries and all exchanges will be fed through LLMs before being converted to unoptimized SQL

Discussion

Yeah, I'm designing the component to work well with either a smart relay or a dumb one.

Most info is found fastest algorithmically, as the search terms are clear, the context/environment is narrow, and the data set is preloaded. I don't need an LLM to find the book "Jane Eyre" in a card catalog on my machine, where some entry carries the title "Jane Eyre". As soon as I start typing "j-a-n...", it should just appear.
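
As a minimal sketch of that sort of incremental, local lookup (in TypeScript, with an invented catalog shape; none of these names come from the actual component):

```typescript
// Hypothetical local catalog entry; the shape is illustrative.
interface CatalogEntry {
  title: string;
  author: string;
}

// Incremental prefix match over a preloaded, local data set.
// No model, no network round trip: just filter on every keystroke.
function prefixSearch(catalog: CatalogEntry[], typed: string): CatalogEntry[] {
  const q = typed.toLowerCase();
  return catalog.filter((entry) => entry.title.toLowerCase().startsWith(q));
}

const catalog: CatalogEntry[] = [
  { title: "Jane Eyre", author: "Charlotte Brontë" },
  { title: "Jamaica Inn", author: "Daphne du Maurier" },
  { title: "Middlemarch", author: "George Eliot" },
];

// "j-a-n..." already narrows to the one match.
console.log(prefixSearch(catalog, "jan")); // [{ title: "Jane Eyre", ... }]
```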

Smart search will be more interesting for the wiki page, where the user is looking for information on or near a particular topic, so that semantic search returns *related* results, ordered by relevance. But even there, you don't have to yap at it for three minutes to find what you need, because we've already narrowed the scope through the mere existence of the entry point on a particular, structured page, and we've designed a form for the results to be poured into. It doesn't need to vomit out an essay.
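
Roughly, the semantic side only has to rank and fill a fixed result shape. A sketch, assuming embeddings are available; the embedding function and page shape are stand-ins, not the real design:

```typescript
// Stand-in for whatever embedding model is available.
type Embed = (text: string) => number[];

interface WikiResult {
  title: string;
  snippet: string;
  relevance: number; // cosine similarity; higher = more related
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Semantic search poured into a form: top-k *related* pages,
// ordered by relevance. No essay, just rows for the page to render.
function relatedPages(
  query: string,
  pages: { title: string; snippet: string; vector: number[] }[],
  embed: Embed,
  k = 10,
): WikiResult[] {
  const qv = embed(query);
  return pages
    .map((p) => ({ title: p.title, snippet: p.snippet, relevance: cosine(qv, p.vector) }))
    .sort((a, b) => b.relevance - a.relevance)
    .slice(0, k);
}
```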

I guess I'm a traditionalist.

I've noticed that people are abandoning software calculators and asking LLMs to do basic arithmetic for them now.

ChatGPT, what is 14/2?

They like having one single, text-based interface, and they just trust the AI to always return precise, accurate, objective answers.

Which I find incredibly ironic, as they also usually refuse to use anything command-line-based, preferring GUIs. They want a GUI that is just one prompt bar, where they can type "?" and the chatbot spits out the timetables for the Underground on a Wednesday at 21:00.

This seems like a complete improvement (it can read my mind! 🤩), but it actually means that your results are always artificially narrowed: the AI can simply not show you things, tinker with or hallucinate results, or fiddle with the order presented, and you might not notice. Even if you tell it precisely what you want to see, it can just ignore you and do as it prefers.

An algorithmic search is less opinionated. It just shows you what you asked for, and you narrow or widen the request until the results contain the information you find most useful. Having the algorithmic search controls generate a standardized prompt for an LLM semantic search seems like a useful middle way. We'll see.
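
A sketch of what that middle way could look like; the control fields and prompt wording are invented for illustration:

```typescript
// Hypothetical search controls; the fields are invented for illustration.
interface SearchControls {
  topic: string;
  kinds: string[];   // e.g. ["wiki", "article"]
  since?: string;    // ISO date
  maxResults: number;
}

// Render the controls into one standardized prompt, so the LLM
// receives the same constrained request every time instead of
// three minutes of free-form yapping.
function toPrompt(c: SearchControls): string {
  return [
    `Return up to ${c.maxResults} results as a JSON array of {title, summary}.`,
    `Topic: ${c.topic}`,
    `Document kinds: ${c.kinds.join(", ")}`,
    c.since ? `Only documents published since ${c.since}.` : "",
    "No commentary outside the JSON.",
  ].filter(Boolean).join("\n");
}
```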

For clarification: it will run algorithmic search (always) and semantic search (where available) in parallel, and deduplicate the results (sketched below).

Semantic search will normally return *more* results, but it might be *missing* results and the results might be noisier.
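
As a sketch of that parallel-and-dedup flow, assuming each backend returns results with a stable event id:

```typescript
interface SearchResult {
  id: string;    // stable event id, used for dedup
  title: string;
}

// Run both searches in parallel; semantic search may be unavailable
// (dumb relay), so it degrades to an empty list instead of failing.
async function combinedSearch(
  query: string,
  algorithmic: (q: string) => Promise<SearchResult[]>,
  semantic?: (q: string) => Promise<SearchResult[]>,
): Promise<SearchResult[]> {
  const [algo, sem] = await Promise.all([
    algorithmic(query),
    semantic ? semantic(query).catch(() => []) : Promise.resolve([]),
  ]);
  // Deduplicate by id; algorithmic hits win, since they matched exactly.
  const seen = new Map<string, SearchResult>();
  for (const r of [...algo, ...sem]) {
    if (!seen.has(r.id)) seen.set(r.id, r);
  }
  return [...seen.values()];
}
```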

Also, you shouldn't require heavy computation and/or access to a remote machine to find well-structured information stored locally. That is legit what everyone is currently building, and it is retarded.

Another thing algorithmic search allows for is meandering through data sets.

With an LLM, you have this ping-pong discussion, where you refine your request and try again. Or you ask the LLM for a suggestion.

With algorithmic search, you slowly fiddle with the data set's structure (symbolized by the controls) and watch the results mutate in real time. That means you don't even have to know what you're looking for when you start out. You can simply peruse the selection, like wandering around in a library and looking through the shelves.
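
That browsing mode is essentially faceted filtering: each control tweak re-runs a cheap, pure filter over the local data set, so the list mutates live. A sketch, with invented facet names:

```typescript
// Hypothetical facets; the fields are invented for illustration.
interface Facets {
  kind?: string;   // e.g. "wiki"
  author?: string;
  after?: number;  // unix timestamp
}

interface Doc {
  kind: string;
  author: string;
  createdAt: number;
  title: string;
}

// Pure function of (data set, controls) -> visible results.
// Re-run it on every control change and the list updates live;
// no query language required, just wandering the shelves.
function applyFacets(docs: Doc[], f: Facets): Doc[] {
  return docs.filter((d) =>
    (f.kind === undefined || d.kind === f.kind) &&
    (f.author === undefined || d.author === f.author) &&
    (f.after === undefined || d.createdAt >= f.after)
  );
}
```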