I want to try this as well, but I'm afraid SQLite WASM will be slower than IndexedDB because of the worker. Let me know the results though.
Discussion
SQLite anything is 100x faster than IndexedDB. I don't think anyone could build a database slower than IndexedDB...
That said, Kieran's worker relay package is really great for web apps: https://www.npmjs.com/package/@snort/worker-relay
Found this article after digging around a lot https://jlongster.com/future-sql-web
It shows it's faster (for 100k+ entries) to load a WASM build of SQLite into memory and run it off an in-memory file system than to use IndexedDB.
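The in-memory approach from the article could be sketched like this, against the sql.js API (`initSqlJs` and `SQL.Database` come from that package; the loader is passed in as a parameter here as an assumption, rather than imported directly):

```javascript
// Hedged sketch: open a purely in-memory SQLite database via the sql.js
// WASM build, then run everything against it without touching IndexedDB.
async function openInMemoryDb(initSqlJs) {
  const SQL = await initSqlJs(); // fetches and instantiates the WASM binary
  const db = new SQL.Database(); // no filename argument: in-memory only
  db.run(
    "CREATE TABLE IF NOT EXISTS events (id TEXT PRIMARY KEY, kind INTEGER, content TEXT)"
  );
  return db;
}

// Usage (assuming sql.js is installed):
//   import initSqlJs from "sql.js";
//   const db = await openInMemoryDb(initSqlJs);
//   db.run("INSERT INTO events VALUES (?, ?, ?)", ["abc", 1, "hello"]);
```

Persistence is the separate problem: with this setup you'd have to export the database (e.g. `db.export()`) and stash the bytes somewhere yourself.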
Have you seen this? nostr:nevent1qqsww099lapu2wauztfg97nv7selgvgalqcxn3xn5k8gq4dh6p5xxfcpz4mhxue69uhhyetvv9ujuerpd46hxtnfduhsygpm7rrrljungc6q0tuh5hj7ue863q73qlheu4vywtzwhx42a7j9n5psgqqqqqqsj6gjek
If you end up trying it please let me know how it goes.
No but it looks a lot like another package I made last year :)
https://github.com/hzrd149/nostr-idb
I'll give it a try and I hope it's faster, but it won't make me stop hating IndexedDB.
Everything in browsers is stupid and awful. IndexedDB is implemented on top of SQLite for Firefox, I think, which makes it completely insane.
And on Chrome it's a pile of JS on top of LevelDB, which is already much slower than something like LMDB, but I don't know what I'm talking about.
Apparently the biggest issue (for reads) is that iterating forces a switch between native code and JS on every row. That gets much better if you fetch stuff in batches.
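The batched approach could look like this (a minimal sketch, assuming an already-open `IDBDatabase` and a store with consistently sorted keys; `readAllBatched` is a hypothetical helper name):

```javascript
// Hedged sketch: read a whole object store with getAll()/getAllKeys() in
// batches, so the native<->JS boundary is crossed once per batch instead of
// once per row as a cursor would.
function readAllBatched(db, storeName, batchSize = 1000) {
  return new Promise((resolve, reject) => {
    const store = db.transaction(storeName, "readonly").objectStore(storeName);
    const results = [];
    let lastKey; // undefined until the first batch completes

    function nextBatch() {
      const range =
        lastKey === undefined
          ? undefined
          : IDBKeyRange.lowerBound(lastKey, true); // strictly after lastKey
      const valReq = store.getAll(range, batchSize);
      const keyReq = store.getAllKeys(range, batchSize);
      valReq.onerror = () => reject(valReq.error);
      keyReq.onerror = () => reject(keyReq.error);
      // Requests in one transaction complete in order, so the values are
      // already available when the keys arrive.
      keyReq.onsuccess = () => {
        results.push(...valReq.result);
        const keys = keyReq.result;
        if (keys.length < batchSize) {
          resolve(results); // store exhausted
        } else {
          lastKey = keys[keys.length - 1];
          nextBatch(); // issued synchronously so the transaction stays alive
        }
      };
    }
    nextBatch();
  });
}
```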
Also, storing (and apparently reading too) JS objects is much slower than storing JSON strings, because deep-copying the objects and then encoding/decoding them is orders of magnitude slower than the browsers' superfast JSON.stringify/parse.
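Concretely, the string approach could be as simple as this (a sketch; `serializeEvent`/`deserializeEvent` are hypothetical helper names):

```javascript
// Hedged sketch: keep events as JSON strings in the store, so reads and
// writes go through JSON.parse/stringify instead of the slower structured
// clone that IndexedDB applies to plain objects.
function serializeEvent(event) {
  return JSON.stringify(event);
}

function deserializeEvent(raw) {
  return JSON.parse(raw);
}

// Round trip with a Nostr-style event shape:
const event = { id: "abc", kind: 1, content: "hello", tags: [["p", "def"]] };
const copy = deserializeEvent(serializeEvent(event));
console.log(copy.content); // "hello"
```

On the IndexedDB side you'd then `store.put(serializeEvent(event), event.id)` (out-of-line key, assuming the store has no `keyPath`) and parse on the way out.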
Wait, so storing strings is faster than objects? Did you test this anywhere and is that the reason your implementation is faster?
Also, how many events did you test it with? The most I was able to get with my implementation was 100k.
Oh, writes are super slow, I didn't benchmark writes. I guess I didn't even think about that part as I thought only reads were important, so maybe take my claims with a grain of salt.
For reads I think it doesn't matter how many records are stored, so I only tried low counts (<5000), as I thought that was realistic for Nostr web clients.
Have you tried absurd-sql?
Yes, years ago I replaced PouchDB with it, but it was also bad with writes and everything kept lagging. The real fix is to be moderate with writes and do them slowly over time, I guess.
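"Do the writes slowly over time" could be sketched as a tiny throttled queue (all names here are hypothetical; `flush` stands in for whatever actually wraps the IndexedDB/SQLite write transaction):

```javascript
// Hedged sketch: buffer incoming writes and flush them in small batches on a
// timer, instead of hitting the database for every incoming event.
class SlowWriteQueue {
  constructor(flush, { batchSize = 50, intervalMs = 250 } = {}) {
    this.flush = flush;       // called with an array of pending items
    this.batchSize = batchSize;
    this.intervalMs = intervalMs;
    this.pending = [];
    this.timer = null;
  }

  push(item) {
    this.pending.push(item);
    if (this.timer === null) {
      // Start draining lazily; stops itself once the queue is empty.
      this.timer = setInterval(() => this.drain(), this.intervalMs);
    }
  }

  drain() {
    const batch = this.pending.splice(0, this.batchSize);
    if (batch.length > 0) this.flush(batch);
    if (this.pending.length === 0) {
      clearInterval(this.timer);
      this.timer = null;
    }
  }
}

// Usage: const q = new SlowWriteQueue((batch) => writeEventsToDb(batch));
//        q.push(event); // cheap; the actual write happens on the timer
```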
A NoSQL database implemented on top of an SQL database is a fantastic engineering joke, but it's truly puzzling how it made it into production. I had to manually vacuum it so my app would stop killing my HDD, even after I had cleared the IDB completely. There are many reasons Firefox sucks, but this one is also funny.