primal.net is apparently indexing the entirety of Nostr on a cluster of SQLite databases and serving it in microseconds. I'm intrigued by this, so I'm figuring out how to run Julia code to see it for myself. Also brewing some sen-cha. 🍵


Discussion

The main limitation of SQLite is that it only allows one writer at a time, so it locks up when you try to write too much to it at once. That makes it "not a great use-case for social media", not even because of your posts, but because of all the likes and emoji reactions you guys generate. I still haven't hit those limits in my own experience, but I bet splitting writes across multiple database files would help it a lot.
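
A minimal sketch of that idea, not primal.net's actual code: route each event to one of several SQLite files by hashing its id, with WAL mode on each file so readers don't block the writer. The shard count, schema, and hashing scheme here are all assumptions for illustration.

```python
import json
import sqlite3

SHARDS = 4  # assumed shard count

def open_shard(i: int) -> sqlite3.Connection:
    conn = sqlite3.connect(f"events_{i}.db")
    conn.execute("PRAGMA journal_mode=WAL")  # readers don't block the single writer
    conn.execute("""CREATE TABLE IF NOT EXISTS events (
        id TEXT PRIMARY KEY, pubkey TEXT, kind INTEGER,
        created_at INTEGER, raw TEXT)""")
    return conn

shards = [open_shard(i) for i in range(SHARDS)]

def store(event: dict) -> None:
    # Nostr event ids are hex strings, so hash them to pick a shard;
    # writes then contend on SHARDS separate locks instead of one.
    conn = shards[int(event["id"], 16) % SHARDS]
    conn.execute(
        "INSERT OR IGNORE INTO events VALUES (?,?,?,?,?)",
        (event["id"], event["pubkey"], event["kind"],
         event["created_at"], json.dumps(event)))
    conn.commit()
```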

nostr:npub1mkde3807tcyyp2f98re7n7z0va8979atqkfja7avknvwdjg97vpq6ef0jp is the wizard behind the wizardry of it all

Do you know what the motivation for this over just-using-postgres was? I'm confident you can beat postgres if you're careful and thoughtful, but I'm also reasonably confident rolling your own system like this is at minimum going to be brittle, and will probably have at least as many performance pitfalls at the end of the day.

I've read that SQLite is significantly faster for read-heavy applications.

I just wish they exposed their API in a standard Nostr relay format.
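
For context, this is roughly what the standard relay format (NIP-01) looks like from the client side: open a websocket, send ["REQ", <subscription id>, <filter>], and the relay streams back ["EVENT", ...] messages followed by an ["EOSE", ...] marker. A minimal sketch; the relay URL is a placeholder, and this is not primal.net's API.

```python
import asyncio
import json
import websockets  # pip install websockets

async def fetch_notes(relay_url: str = "wss://relay.example.com"):
    async with websockets.connect(relay_url) as ws:
        # Ask for the 10 most recent kind-1 (text note) events.
        await ws.send(json.dumps(["REQ", "sub1", {"kinds": [1], "limit": 10}]))
        while True:
            msg = json.loads(await ws.recv())
            if msg[0] == "EOSE":      # end of stored events for this subscription
                break
            if msg[0] == "EVENT":     # ["EVENT", sub_id, event]
                print(msg[2]["content"])

asyncio.run(fetch_notes())
```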

I've been thinking of testing https://www.scylladb.com with #Nostr considering much of the data sent through the protocol is loosely structured. Scylla promises insane performance and Discord devs only have praise for it.

https://discord.com/blog/how-discord-stores-trillions-of-messages
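
A rough sketch of how Nostr events might be laid out in Scylla, just to make the idea concrete: partition by pubkey and cluster by created_at so an author's recent notes are one cheap partition read. The keyspace name, replication settings, and table layout are assumptions, not a tested schema, and the regular Cassandra/Scylla Python driver is assumed.

```python
import json
from cassandra.cluster import Cluster  # pip install scylla-driver (or cassandra-driver)

session = Cluster(["127.0.0.1"]).connect()
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS nostr
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}""")
session.execute("""
    CREATE TABLE IF NOT EXISTS nostr.events (
        pubkey text, created_at bigint, id text, kind int, raw text,
        PRIMARY KEY (pubkey, created_at, id)
    ) WITH CLUSTERING ORDER BY (created_at DESC, id ASC)""")

def store(event: dict) -> None:
    # One row per event, newest first within each author's partition.
    session.execute(
        "INSERT INTO nostr.events (pubkey, created_at, id, kind, raw) "
        "VALUES (%s, %s, %s, %s, %s)",
        (event["pubkey"], event["created_at"], event["id"], event["kind"],
         json.dumps(event)))
```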

There are a couple of million events in all of Nostr today, around 10-20 GB of data. Let's add the same again for indexes. Any relational database on a 64 GB machine ($50-$500 a month) will give you microsecond reads today.
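
A quick back-of-envelope check of that sizing claim; the event count is taken from the comment above, the average event size is an assumption.

```python
events = 2_000_000            # "a couple of million events"
bytes_per_event = 8 * 1024    # assume ~8 KB per signed event including tags
data_gb = events * bytes_per_event / 1e9
total_gb = 2 * data_gb        # "add the same again for indexes"
print(f"data ≈ {data_gb:.0f} GB, with indexes ≈ {total_gb:.0f} GB")
# ≈ 16 GB of data, ≈ 33 GB total: the whole working set fits in RAM on a 64 GB box,
# which is why microsecond reads are plausible.
```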