https://www.diabrowser.com/ I thought this was going to be in your MeToo category, but actually it's quite well done, the chat knows where you are, what you're looking at, all sorts of context and it turns out to be quite fun to use.

Yeah, that's cool! It would work well with Alexandria, since it's browser-based and has clearly structured controls and sectioning that a chat assistant can use to infer the underlying structure of the data set from the app's DOM.

My point, with Nostr, is: why put something like that in Alexandria? Everyone will already have something like that in the browser they run Alexandria in, or on the phone or tablet running the (TBD) Alexandria reader app. Any AI within Alexandria should be for going *deeper* into Alexandria, using the specific data-science knowledge of the people engineering the app and the relays.


Discussion

Your point makes total sense. But yeah, it is fun.

Well, we'll actually be displaying LLM-generated background info and summaries, and stuff, but it won't be a chatbot. It'll just be fields in the card at the top. (If you click the big white square, it opens the details modal, which will be getting more details about the author, the genre, the historical period, the particular edition or translator, etc.)

Everyone wants to see the same sorts of things, so there's no point expecting them to actively ask for it. Just add a field to the GUI and display stuff they are likely to find interesting.

Also, we'd be doing the computation once, server-side, and then storing the result for everyone to read. Each user shouldn't need to re-request an author bio about Gandhi every time they look at that book; it should just appear.
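The compute-once-then-store idea can be sketched roughly like this. This is a hypothetical illustration, not Alexandria's actual code: the names (`getBio`, `generateBio`, `bioCache`) are made up, and a real version would call an LLM API and persist to a relay or database rather than an in-memory map.

```typescript
// Sketch: generate an author bio once, server-side, and serve the
// cached copy to every subsequent reader. All names are illustrative.
type Bio = { author: string; text: string; generatedAt: number };

const bioCache = new Map<string, Bio>();
let llmCalls = 0; // tracks how many times we actually "call the LLM"

// Stand-in for an LLM call; in practice this would hit a model API.
function generateBio(author: string): string {
  llmCalls++;
  return `Background on ${author} (generated once, shared by all readers).`;
}

function getBio(author: string): Bio {
  let bio = bioCache.get(author);
  if (!bio) {
    bio = { author, text: generateBio(author), generatedAt: Date.now() };
    bioCache.set(author, bio); // every later request is a cache hit
  }
  return bio;
}
```

However many readers open the Gandhi book, `generateBio` runs once; everyone else reads the stored result.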

That also forces their own browser AI, if they use one, to come up with things *we haven't already documented*, rather than generic facts like country of residence, birth date, or other books the author has written. That data is more of a commodity.