stateless front ends can't cope with a multiplicity of data stores, so there's that

there is also the fact that users generally cluster around a section of the data set, and that section can easily become the shard they are interested in
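
rough sketch of what routing by that kind of clustering could look like — the shard names and the pickShard() helper here are made up, just to show the shape of the idea:

```ts
// Hypothetical sketch: route a query to the shard its users cluster around.
// Shard names and pickShard() are placeholders, not from any relay implementation.

type Query = { topic?: string; author?: string };

const SHARDS = ["shard-a", "shard-b", "shard-c"]; // placeholder shard pool

// Stable hash so the same key always lands on the same shard.
function hash(key: string): number {
  let h = 0;
  for (const ch of key) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  return h;
}

// Prefer the topic the user clusters around; fall back to the author key.
function pickShard(q: Query): string {
  const key = q.topic ?? q.author ?? "default";
  return SHARDS[hash(key) % SHARDS.length];
}

console.log(pickShard({ topic: "nostr-dev" })); // same interest -> same shard every time
```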

generic scaling solutions always start with the assumption that you only enter the data from one point. that is stupid, because there are billions of us humans and we don't need to concern ourselves with the same things all the time

fiat thinking. followers, sheep, i would say "idiots"


Discussion

I think it's unrealistic to redesign the entire web around the "proper" way to store and interact with data; we need something that works with the way things are built now.

well, nothing's stopping you from going and picking up a job at google building their data systems, is there?

Their hiring manager stopped me, so there's that XD

probably because you are too creative for them

if you want to work in big tech, avoid google or FB, or you will end up at either the company that runs on hopes and dreams or the one that runs on privacy violations

You can’t design users around systems, but you can design systems around users

The worst part is that, done right, the current state of Nostr can already scale a lot. Not ideal (a v2 of the event format is possibly needed), but possible

the most unfortunate part is that lazy devs assume large databases are not possible, but oh boy are they wrong

how does any large platform work, then? nostr being decentralized does not mean relays are much different (a relay and a traditional backend are effectively the same thing)

I’m not working on implementing a complex query system or multi-level caching crap because the existing tools can scale to the fucking planet

I'd say that you are both right. Nostr can assume that there will be any number of entry points, that some data is more in demand than other data, and that which data that is will depend on the relay's users.

So, databases will have to become smarter about where they store which data, and have archiving schemas. People don't mind if something rarely called up is returned more slowly; they're just happy that it is returned at all. But if something popular isn't returned quickly, they will begin to whine about the latency.
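
something like this read path, as a sketch only — the NostrEvent/Store shapes are placeholders, not any particular relay's schema:

```ts
// Check a fast "hot" store before the slow archive, and promote anything that gets
// touched again. Popular data stays fast; archived data is slower but still comes back.

interface NostrEvent { id: string; created_at: number; content: string }

interface Store { get(id: string): Promise<NostrEvent | null> }
interface HotStore extends Store { put(ev: NostrEvent): Promise<void> }

class TieredStore {
  constructor(private hot: HotStore, private archive: Store) {}

  async get(id: string): Promise<NostrEvent | null> {
    const cached = await this.hot.get(id);
    if (cached) return cached;              // popular data: fast path
    const old = await this.archive.get(id); // rarely requested data: slower, but it returns
    if (old) await this.hot.put(old);       // promote on access so repeat reads get faster
    return old;
  }
}
```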

I've added a return-time display to our client's search, by the way. Some relays will be set up to return an item fastest, whereas others will be set up to return items most reliably, even when the Internet is generally slower.
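
a sketch of how a client can measure that return time — this assumes a WebSocket implementation is available (browser, or a ws shim in Node), and the relay URL in the usage line is just an example:

```ts
// Time how long a relay takes to reach EOSE (end of stored events, per NIP-01) for one query.

function timeRelay(url: string, subId: string, filter: object): Promise<number> {
  return new Promise((resolve, reject) => {
    const start = Date.now();
    const ws = new WebSocket(url);
    ws.onopen = () => ws.send(JSON.stringify(["REQ", subId, filter]));
    ws.onmessage = (msg) => {
      const [type, id] = JSON.parse(msg.data);
      if (type === "EOSE" && id === subId) { // relay has returned everything it stored
        ws.close();
        resolve(Date.now() - start);
      }
    };
    ws.onerror = () => reject(new Error(`failed to query ${url}`));
  });
}

// usage: const ms = await timeRelay("wss://relay.example.com", "t1", { kinds: [1], limit: 10 });
```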

and oh look that’s what Noswhere SmartCache is going to do

it can automatically detect popular data and cache it on edge relays
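
not SmartCache's actual code, just the general idea — count requests per event and replicate once something crosses a threshold; pushToEdge() and the cutoff value are hypothetical:

```ts
// Popularity detection: only data that keeps getting asked for is copied out to edge relays.

const hits = new Map<string, number>();
const HOT_THRESHOLD = 100; // assumed cutoff for "popular"

async function recordRequest(
  eventId: string,
  pushToEdge: (id: string) => Promise<void>,
): Promise<void> {
  const count = (hits.get(eventId) ?? 0) + 1;
  hits.set(eventId, count);
  if (count === HOT_THRESHOLD) {
    await pushToEdge(eventId); // everything below the threshold stays on the origin relay
  }
}
```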

any caching more complex than that is actually more of a performance hit than a benefit

Yes, but I do not think that _every_ relay needs this capability. Different use cases are free to have different architectures, so long as they can communicate according to NIP-01. That's the brilliance of Nostr.

Define the shape of the data and the most basic way to request it, and keep the implementation open.
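
and NIP-01 already does exactly that: the shape is the event, and the most basic way to request it is a REQ with a filter. For example (relay URL and pubkey are placeholders):

```ts
// Subscribe to the latest text notes from one author over a plain NIP-01 REQ.

const ws = new WebSocket("wss://relay.example.com");

ws.onopen = () => {
  const filter = {
    kinds: [1],                // kind 1 = short text note
    authors: ["<hex-pubkey>"], // placeholder author
    limit: 20,
  };
  ws.send(JSON.stringify(["REQ", "my-sub", filter]));
};

ws.onmessage = (msg) => {
  const [type, subId, event] = JSON.parse(msg.data);
  if (type === "EVENT" && subId === "my-sub") console.log(event.content);
  if (type === "EOSE" && subId === "my-sub") ws.close(); // relay has sent all stored events
};
```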

exactly... so you define relay groups that act as caches, and you also need to define the limited scope each relay is given as an entry point for clients

probably a lot of this can be made dynamic

yes, this is just a matter of building a query-forwarding mechanism into the relays so they can all keep the hot stuff, and then they just need a way to expire that shit (something like the sketch below)
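
a rough sketch of that forward-and-expire idea — the Upstream interface and the TTL value are assumptions, not an existing relay API:

```ts
// Answer from the local hot cache when possible, forward the query otherwise,
// and let cached entries age out after a fixed TTL.

interface Upstream { query(filter: object): Promise<unknown[]> }

const TTL_MS = 10 * 60 * 1000; // hypothetical: hot entries live for 10 minutes

const cache = new Map<string, { events: unknown[]; expires: number }>();

async function handleQuery(filter: object, upstream: Upstream): Promise<unknown[]> {
  const key = JSON.stringify(filter);
  const hit = cache.get(key);
  if (hit && hit.expires > Date.now()) return hit.events;   // keep serving the hot stuff

  const events = await upstream.query(filter);              // forward the query onward
  cache.set(key, { events, expires: Date.now() + TTL_MS }); // ...and let it expire later
  return events;
}
```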