Yeah, it's the sort of service that saves client devs from having to think through the filters and algorithms.

Or they just use Aedile's topicGraph component. 😉


Discussion

Are you promising Aedile features? 👀

😁 No pressure.

Something I was thinking about is starting with highly-prepared searches and then expanding iteratively, if they reclick the search button.

Like an LLM does, but with no chat. Just keep looking deeper and broader until you've exhausted the possibilities.

Ooh I do like that. Like the next page of Google, but smarter.

Could have an auto-iterate toggle; there's already a Cancel button to stop searches underway, and a progress bar. The final stage could be full-text on all active relays or something ridiculous. 😂
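The staged broadening described above could be sketched roughly like this. Filter fields follow NIP-01/NIP-50 (`kinds`, `limit`, `search`), but the stage choices themselves are hypothetical, for illustration only:

```python
# Hypothetical sketch: each re-click of the search button (or each
# auto-iterate pass) yields a broader, more expensive relay filter.
def broaden(query: str):
    tokens = sorted(query.lower().split())
    yield {"search": query, "limit": 20}                     # stage 1: cheap, local relay
    yield {"search": " ".join(tokens), "limit": 50}          # stage 2: normalized tokens
    yield {"kinds": [30023], "search": query, "limit": 100}  # stage 3: longform articles
    yield {"search": query}                                  # final: full-text, all active relays
```

A client could walk these stages until the user is satisfied, with the Cancel button breaking out of the loop mid-stage.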

We could call that the "Great time to grab a snack and a coffee" iteration.

I'm trying to corral LLMs into their lane enough in my workflows that I can turn them loose and grab a snack.

😂

That won't even be necessary, though.

If they use your server, no, since you do it on the backend, but we promised that Alexandria would be worth running, even with a crappy relay. There's a lot that can be done with normal computer science.

Trying to make it work for crappy search is not worth it

Never sleep. 🤙🏻

SEARCH HARDER, BABY

Yeah, just one bar that isn't an LLM, but where you can say "longform article from liminal from last week about LLMs" and ta-da!

Semantic search ftw
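That kind of query bar doesn't need an LLM; a few pattern rules can pull out the author, time window, and topic. A hypothetical sketch (the phrase patterns and field names are assumptions, not Alexandria's actual parser):

```python
import re
from datetime import datetime, timedelta

def parse_query(q: str) -> dict:
    """Turn a plain search phrase like
    'longform article from liminal from last week about LLMs'
    into a structured filter -- no chat with a robot required."""
    out: dict = {}
    if "longform" in q.lower():
        out["kinds"] = [30023]          # NIP-23 long-form content
    m = re.search(r"\bfrom\b (\w+)", q)  # first "from X" taken as the author
    if m:
        out["author"] = m.group(1)
    if "last week" in q.lower():
        out["since"] = datetime.now() - timedelta(weeks=1)
    m = re.search(r"\babout\b (.+)$", q)  # trailing "about X" taken as the topic
    if m:
        out["search"] = m.group(1)
    return out
```

A real implementation would want more patterns and fuzzier author matching, but the point stands: this is deterministic string handling, not a conversation.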

It's actually not that difficult, but nobody has built it yet and I want to find stuff. I'm so tired of not being able to find "bible KJV" because the search is too dumb to normalize and prepare the filter properly and is like,

Yo, I found no "bible KJV". 🤙🏻

Okaaaay, but you found a "KJV Bible" right? 🤦🏻‍♀️
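The normalization being complained about really is only a few lines: lowercase, strip punctuation, and compare tokens as a set, so "bible KJV" and "KJV Bible" come out equal. A minimal sketch (the function names are made up):

```python
import re

def normalize(text: str) -> frozenset[str]:
    # Lowercase, drop punctuation, split into word tokens:
    # order-insensitive, so "bible KJV" == "KJV Bible".
    return frozenset(re.findall(r"\w+", text.lower()))

def matches(query: str, title: str) -> bool:
    # A hit if every normalized query token appears among the title's tokens.
    return normalize(query) <= normalize(title)
```

`matches("bible KJV", "KJV Bible")` comes back `True`, which is all anyone was asking for.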

The worst is when people are like, Just ask an LLM. Ugh. It's like four lines of code, you morons.

I don't want to have to have a full conversation with a robot just to find an article.

☝🏻💯🫂