🔥

Relays like strfry, khatru, and realy are great until they are not.

They work great as local caches, but things implode once you get to scale.

Say you want to store 1 TB of books. You have to rent a single server big enough to hold all 1 TB.

But what if it goes down? So you buy a few more replicas. Then you try to shard events across servers and fail.

Have fun compacting the database or upgrading it every once in a while.

NFDB fixes this. Just as SQLite is great at small scale and Postgres is better at larger scale, these relays are fine for local caching while NFDB is built for scale.


Discussion

I'm using them for local caching. It doesn't have to be the only local relay; I currently have 4 local relays running on my laptop and Citrine on mobile. I do a lot of testing, but still. My new kind 10432 event, which lists all localhost relays, means you can have as many as you want and do what you want with them.
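(For reference, a rough sketch of what such a kind 10432 "localhost relays" list could look like. The "r"-tag layout and the port numbers are assumptions for illustration, not taken from a published spec.)

```typescript
// Hypothetical sketch: the exact tag layout of a kind 10432 "localhost relays"
// list is assumed here for illustration (NIP-51-style "r" tags), not taken
// from a published spec.
interface DraftEvent {
  kind: number;
  created_at: number;
  content: string;
  tags: string[][];
}

const localRelayList: DraftEvent = {
  kind: 10432,                                 // replaceable list of relays on this machine
  created_at: Math.floor(Date.now() / 1000),
  content: "",
  tags: [
    ["r", "ws://localhost:4869"],              // e.g. a local strfry / khatru / realy instance
    ["r", "ws://localhost:5577"],
    ["r", "ws://localhost:7777"],
  ],
};

// A client can read the list and fan queries out to every local relay it finds.
const localRelays = localRelayList.tags
  .filter((t) => t[0] === "r")
  .map((t) => t[1]);
console.log(localRelays);                      // ["ws://localhost:4869", ...]
```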

I don't think it's appropriate to store everything on the local system; that's actually a risky data strategy. The local store is for making sure you can transition between online and offline and autosync when you're on a good network. Like OpenDrive and SharePoint do, but less clunky and dirt-cheap.

I think it's safe to assume that someone handling large or important data stores will have the sense to hire a professional admin or be an admin themselves, but that's what we have you and nostr:npub10npj3gydmv40m70ehemmal6vsdyfl7tewgvz043g54p0x23y0s8qzztl5h for. Above my pay grade and not my problem.

Good.

For an in-browser use case, use the SQLite in-browser relay I suggested too. At least then you have a cache, which is better than having no cache at all until they set up realy or something else.

I'm using IndexedDB as a mandatory cache, since it works on phones. Isn't the one you suggested something that has to be natively installed and _doesn't_ run in the browser? Or did I check the wrong link?

No, it’s a web worker that uses OPFS and WASM to run a relay.

Hmm... I'll look. What was the link, again?
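(For context, a minimal sketch of that mechanism, assuming a dedicated worker that appends incoming events to a single OPFS file; the file name and on-disk format here are made up, and this is not the actual project's code.)

```typescript
/// <reference lib="webworker" />
// Minimal sketch of the OPFS-in-a-worker approach described above.
// Assumptions: a dedicated worker, one append-only "events.jsonl" file,
// JSON-lines storage. This is not the actual project's code.

async function openEventLog(): Promise<FileSystemSyncAccessHandle> {
  const root = await navigator.storage.getDirectory();            // OPFS root
  const file = await root.getFileHandle("events.jsonl", { create: true });
  // Sync access handles are only available inside dedicated workers.
  return file.createSyncAccessHandle();
}

self.onmessage = async (msg: MessageEvent<{ event: unknown }>) => {
  const handle = await openEventLog();
  const line = new TextEncoder().encode(JSON.stringify(msg.data.event) + "\n");
  handle.write(line, { at: handle.getSize() });                   // append at end of file
  handle.flush();
  handle.close();
  self.postMessage({ stored: true });
};
```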

the whole thing of collecting IDs and comparing them before downloading events, then only downloading what is missing, is exactly what negentropy does. that's why i think it's neat having it built into the protocol.
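(A naive illustration of that idea: compare IDs first, then fetch only the missing events. The actual negentropy protocol achieves this with range-based set reconciliation so full ID lists never have to be exchanged; this sketch only shows the end result.)

```typescript
// Naive illustration of "compare IDs, then download only what's missing".
// The actual negentropy protocol does this with range-based set
// reconciliation instead of exchanging full ID lists.

type EventId = string;

function missingIds(localIds: Set<EventId>, remoteIds: Iterable<EventId>): EventId[] {
  const missing: EventId[] = [];
  for (const id of remoteIds) {
    if (!localIds.has(id)) missing.push(id);   // not stored locally yet
  }
  return missing;
}

// Only the two missing events would be downloaded in full.
const local = new Set<EventId>(["a1", "b2", "c3"]);
const remote = ["a1", "b2", "c3", "d4", "e5"];
console.log(missingIds(local, remote));        // ["d4", "e5"]
```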

as for uptime and redundancy, this always at least doubles the cost. obviously it will take a super long time to compact a 1 TB db, possibly on the order of days, but you can run a replica and still do a zero-downtime failover once the compaction is complete, as long as you have enough disk space.

i've been spec'ing out some server tiers that could handle it, while also keeping cost in mind. i think keeping server cost as low as possible is really important for nostr businesses.

i also like that clients have the distributed mindset here. it should help with uptime by decreasing the odds of both relays experiencing unexpected downtime at the same time (two independent relays that are each up 99% of the time are both down only about 0.01% of the time).

badger is better because it has split key/value tables: a lot less time wasted compacting every time values are written, and it's easier and faster to use the key table to store some of the data that has to be scanned a lot.

for whatever stupid reason, nobody else in database development has realised the benefit of key/value table splitting, even though the tech has been around for 9 years already.
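(A toy sketch of the split being described here, in the spirit of the WiscKey design Badger follows, and not Badger's actual code: keys live in a small index that is cheap to scan and compact, while values are appended once to a value log and referenced only by position.)

```typescript
// Toy illustration of key/value separation (WiscKey-style, the design Badger
// follows) — not Badger's actual implementation. Keys sit in a small index
// that is cheap to scan/compact; values are appended once to a value log and
// referenced only by (offset, length).

interface ValuePointer {
  offset: number;   // where the value starts in the log
  length: number;   // how many bytes it occupies
}

class SplitStore {
  private keyIndex = new Map<string, ValuePointer>(); // small, scan-friendly
  private valueLog: Uint8Array[] = [];                // stand-in for the value log file
  private logOffsets: number[] = [];                  // start offset of each appended value
  private logSize = 0;

  set(key: string, value: Uint8Array): void {
    this.logOffsets.push(this.logSize);
    this.valueLog.push(value);                        // value written once, never rewritten
    this.keyIndex.set(key, { offset: this.logSize, length: value.length });
    this.logSize += value.length;
  }

  get(key: string): Uint8Array | undefined {
    const ptr = this.keyIndex.get(key);
    if (!ptr) return undefined;
    // A real store would do a single read at ptr.offset in the log file.
    const slot = this.logOffsets.indexOf(ptr.offset);
    return slot >= 0 ? this.valueLog[slot] : undefined;
  }
}

// Usage: compacting or rewriting keyIndex only touches tiny pointers, never the values.
const store = new SplitStore();
store.set("event:abc", new TextEncoder().encode('{"kind":1,"content":"hi"}'));
console.log(new TextDecoder().decode(store.get("event:abc")!));
```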

probably similar reasons why so many businesses are stuck with oracle

at least some sane people realized, tbh

FoundationDB’s new Redwood engine, the underlying architecture of S3, and NFDB’s IA store as well.

that's great. i think badger is cuter and more mature tho