I'd say that you are both right. Nostr can assume that there will be some number of entry points, and that some data is more in-demand than other data, and which data that is will depend upon the relay's users.

So, databases will have to become smarter about where they store which data, and have archiving schemas. People don't mind if something rarely called up is returned more slowly; they're happier that it is returned at all. But if something popular isn't returned quickly, they will begin to whine about the latency.
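A minimal sketch of that kind of tiering, assuming a hypothetical two-tier store (a fast "hot" map backed by a slower "archive"), where rarely requested events simply live in the slow tier:

```python
class TieredEventStore:
    """Toy sketch of tiered storage: frequently requested events sit in
    a fast 'hot' map, everything else stays in a slower 'archive' map.
    Hypothetical example; real relay databases are far more involved."""

    def __init__(self, hot_capacity=2):
        self.hot = {}       # event_id -> (event, access_count)
        self.archive = {}   # event_id -> event
        self.hot_capacity = hot_capacity

    def put(self, event_id, event):
        # everything lands in the archive first; popularity promotes it
        self.archive[event_id] = event

    def get(self, event_id):
        if event_id in self.hot:
            event, count = self.hot[event_id]
            self.hot[event_id] = (event, count + 1)
            return event
        event = self.archive.get(event_id)  # the "slow" path
        if event is not None:
            self._promote(event_id, event)
        return event

    def _promote(self, event_id, event):
        if len(self.hot) >= self.hot_capacity:
            # demote the least-accessed hot entry; it remains archived
            coldest = min(self.hot, key=lambda k: self.hot[k][1])
            self.hot.pop(coldest)
        self.hot[event_id] = (event, 1)
```

Slow-tier reads still succeed, which matches the point above: a late answer beats no answer.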

I've added return-time display to our client's search, by the way. Some relays will be set up to return an item fastest, whereas others will be set up to return items most reliably, even when the Internet is generally slower.
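Measuring that return time client-side can be as simple as wrapping the fetch call with a timer (a generic sketch, not the client's actual code):

```python
import time

def timed_query(fetch, *args):
    """Run any fetch callable and return (result, elapsed_ms), so a
    client can show per-relay return times next to search results."""
    start = time.perf_counter()
    result = fetch(*args)
    elapsed_ms = (time.perf_counter() - start) * 1000
    return result, elapsed_ms
```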

Discussion

and oh look that’s what Noswhere SmartCache is going to do

it can automatically detect popular data and cache it on edge relays
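Detecting popular data can be done with a plain request counter over event ids; anything past a threshold becomes a candidate for the edge cache. A hypothetical sketch, not SmartCache's actual mechanism:

```python
from collections import Counter

class HotSetDetector:
    """Toy popularity tracker: count requests per event id and report
    the current hot set of edge-cache candidates."""

    def __init__(self, threshold=3):
        self.counts = Counter()
        self.threshold = threshold

    def record(self, event_id):
        self.counts[event_id] += 1

    def hot_set(self):
        # ids requested at least `threshold` times qualify for caching
        return {eid for eid, n in self.counts.items() if n >= self.threshold}
```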

any more complex sort of caching is actually more of a performance hit than a benefit

Yes, but I do not think that _every_ relay needs this capability. Different use cases are free to have different architectures, so long as they can communicate according to NIP-01. That's the brilliance of Nostr.

Define the shape of the data and the most-basic way to request it, and keep the implementation open.
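That shape and that most-basic request are exactly what NIP-01 specifies: an event is a JSON object with seven fields, and a request is a `["REQ", ...]` message carrying a subscription id and filters. For illustration:

```python
import json

# The NIP-01 event shape: seven fixed fields.
event = {
    "id": "<32-byte sha256 of the serialized event, hex>",
    "pubkey": "<32-byte public key of the author, hex>",
    "created_at": 1700000000,   # unix timestamp in seconds
    "kind": 1,                  # kind 1 = short text note
    "tags": [],
    "content": "hello nostr",
    "sig": "<64-byte schnorr signature, hex>",
}

# The most-basic request: a subscription id plus one or more filters.
req = ["REQ", "sub1", {"kinds": [1], "limit": 10}]

print(json.dumps(req))
```

Everything else — storage engines, caching layers, forwarding — stays an implementation detail behind that wire format.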

exactly... so you define relay groups and they act as caches, and you also need to define limited scopes that each relay is given as an entry point to clients

probably a lot of this can be made dynamic

yes, this is just a matter of building a query-forwarding mechanism into the relays, so they can all keep the hot stuff, and then they just need a way to expire that shit
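Both halves of that idea — forward on a miss, expire on a timer — can be sketched as a TTL cache in front of an upstream fetch (hypothetical names throughout; `upstream_fetch` stands in for whatever forwarding transport a relay actually uses):

```python
import time

class ForwardingCache:
    """Sketch of relay-side query forwarding with expiry: answer from
    the local cache when the entry is fresh, otherwise forward the
    query upstream and cache the answer with a TTL."""

    def __init__(self, upstream_fetch, ttl_seconds=60.0, clock=time.monotonic):
        self.upstream_fetch = upstream_fetch  # callable: event_id -> event
        self.ttl = ttl_seconds
        self.clock = clock                    # injectable for testing
        self.cache = {}                       # event_id -> (event, stored_at)

    def get(self, event_id):
        entry = self.cache.get(event_id)
        if entry is not None:
            event, stored_at = entry
            if self.clock() - stored_at < self.ttl:
                return event          # hot hit, no forwarding
            del self.cache[event_id]  # expired, drop it
        event = self.upstream_fetch(event_id)  # forward the query
        self.cache[event_id] = (event, self.clock())
        return event
```

The TTL is the "way to expire" part: stale entries fall out on the next lookup without any coordination between relays.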