when I finally benchmark #nostrdb and it performs 1 million queries on a database of 1 million events in under a second

Discussion

lmdb is bonkers

npub1tt4j2zeswksjh5z7zmy283qd4yd920afy9j28xg45wsxzhl9rpjqrex9ud

I’m sure he’ll be like “that seems slow”. I haven’t optimized it yet!

What does this even mean?

The database that stores data for his application is efficient.

More details please!

Sure! If you want to play around with databases, I would recommend looking into SQL. Think of it like Excel sheets, but much more efficient. If all of the data for an application like this had to be stored in an Excel spreadsheet, it would be very slow.

Programmers have optimized databases to store large amounts of information so that it can be accessed and changed very quickly.
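
For example, here is what a lookup looks like with SQLite's C API, purely as an illustration of the "indexed database vs. spreadsheet" point (nostrdb itself is built on LMDB, not SQL, and the table and id below are made up for the example):

#include <stdio.h>
#include <sqlite3.h>

/* Illustration only, not nostrdb's storage (nostrdb uses LMDB).
   Build with: cc demo.c -lsqlite3 */
int main(void) {
    sqlite3 *db;
    sqlite3_stmt *stmt;

    if (sqlite3_open("notes.db", &db) != SQLITE_OK)
        return 1;

    /* The PRIMARY KEY creates an index, so lookups jump straight to
       the matching row instead of scanning every row the way a
       spreadsheet would. */
    sqlite3_exec(db,
        "CREATE TABLE IF NOT EXISTS notes (id TEXT PRIMARY KEY, content TEXT)",
        NULL, NULL, NULL);

    sqlite3_prepare_v2(db, "SELECT content FROM notes WHERE id = ?1",
                       -1, &stmt, NULL);
    sqlite3_bind_text(stmt, 1, "some-note-id", -1, SQLITE_STATIC);
    if (sqlite3_step(stmt) == SQLITE_ROW)
        printf("%s\n", (const char *)sqlite3_column_text(stmt, 0));

    sqlite3_finalize(stmt);
    sqlite3_close(db);
    return 0;
}

The index is the whole trick: the database can answer "find the row with this id" without looking at every row, which is exactly what a spreadsheet can't do.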

Fetching notes from a relay becomes much faster (I think).

That’s amazing

What are the trade-offs?

?

I mean, if there are no trade-offs, that’s amazing. It’s rare for things to be both fast and good.

It only works with nostr data?

Very high memory/RAM requirements to support an extremely large number of notes. It might struggle at higher scales unless the relay setup has a ton of RAM.

It’s a memory-mapped database, so it’s very fast: it can pull so much more from memory instead of the hard drive. But relying on RAM for that speed means buying a lot of RAM, which is costly compared to SSDs… that’s the only notable trade-off I’ve seen.
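
To make the memory-mapped part concrete, here is roughly what a read looks like against LMDB's C API. This is a minimal sketch, not nostrdb's actual code, and the "note-id" key is made up; the point is that mdb_get() hands back a pointer directly into the memory-mapped file, so a hot lookup involves no read() syscall and no copy:

#include <stdio.h>
#include <string.h>
#include <lmdb.h>

/* Minimal LMDB read sketch; build with -llmdb. */
int main(void) {
    MDB_env *env;
    MDB_txn *txn;
    MDB_dbi dbi;
    MDB_val key, data;

    mdb_env_create(&env);
    mdb_env_open(env, "./db", MDB_RDONLY, 0664);

    /* Read-only transactions are cheap and never block writers. */
    mdb_txn_begin(env, NULL, MDB_RDONLY, &txn);
    mdb_dbi_open(txn, NULL, 0, &dbi);

    key.mv_size = strlen("note-id");
    key.mv_data = (void *)"note-id";

    /* data.mv_data points directly into the memory map: no copy. */
    if (mdb_get(txn, dbi, &key, &data) == 0)
        printf("%.*s\n", (int)data.mv_size, (char *)data.mv_data);

    mdb_txn_abort(txn);
    mdb_env_close(env);
    return 0;
}

One nuance on the RAM point: the map goes through the OS page cache, so the whole database doesn't strictly have to fit in RAM; reads just slow down once the working set no longer does.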

There is a non-zero chance your phone creates a localized singularity in your pocket and then blinks out of existence.

🫨

🫡😳

crazy🤯

Nicee

This is good for Nostr. 🤙

1 microsecond seek time.

FriendlyElec NanoPi R6S (Rockchip RK3588S)

root@FriendlyWrt /u/o/nostrdb (master)# ./bench 1000000

benching parser 1000000 times

ns/run 217153

ms/run 0.217153

ns 217153545895

ms 217153

I did a fresh pull/build beforehand. (git pull, make)
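
For anyone decoding that output: ns/run is just total elapsed nanoseconds divided by the iteration count (217,153,545,895 ns / 1,000,000 runs ≈ 217,153 ns, i.e. ~0.217 ms per parse). A timing loop along these lines produces that kind of output; this is a sketch, not nostrdb's actual bench code:

#include <stdio.h>
#include <stdint.h>
#include <time.h>

/* Sketch of a ns/run micro-benchmark loop (not nostrdb's bench code). */
int main(void) {
    const int64_t runs = 1000000;
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    for (int64_t i = 0; i < runs; i++) {
        /* work under test goes here, e.g. parsing one event */
    }
    clock_gettime(CLOCK_MONOTONIC, &end);

    int64_t ns = (int64_t)(end.tv_sec - start.tv_sec) * 1000000000LL
               + (end.tv_nsec - start.tv_nsec);

    printf("ns/run %lld\n", (long long)(ns / runs));
    printf("ms/run %f\n", (double)ns / runs / 1e6);
    printf("ns %lld\n", (long long)ns);
    printf("ms %lld\n", (long long)(ns / 1000000LL));
    return 0;
}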

There are 3 benches. That one doesn’t query anything.

I’m using bench-ingest-many. The first run will take around 15 seconds to verify and import 1 million entries into the db.

The second run should try to import but skip duplicates, so it does about 1 million queries (plus some extra work like parsing ids).

On my old laptop it takes about 1 second. But on my M2, querying is actually slower by 3 seconds, which was surprising. Still looking into why 🤔
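
For anyone curious what the duplicate-skip pass boils down to: one point query per event id, and an insert only on a miss. In raw LMDB terms it would look something like this hypothetical sketch (not nostrdb's actual internals; ingest_event is a made-up helper):

#include <stddef.h>
#include <lmdb.h>

/* Hypothetical dedup-on-ingest sketch, not nostrdb's actual internals.
   Keys are 32-byte nostr event ids. */
static int ingest_event(MDB_env *env, MDB_dbi dbi,
                        const unsigned char id[32],
                        const void *event, size_t event_len)
{
    MDB_txn *txn;
    MDB_val key = { 32, (void *)id };
    MDB_val data;
    int rc;

    mdb_txn_begin(env, NULL, 0, &txn);

    /* The second bench run hits this lookup ~1 million times. */
    if (mdb_get(txn, dbi, &key, &data) == 0) {
        mdb_txn_abort(txn);   /* duplicate: skip it */
        return 0;
    }

    data.mv_size = event_len;
    data.mv_data = (void *)event;
    rc = mdb_put(txn, dbi, &key, &data, 0);
    mdb_txn_commit(txn);
    return rc;
}

In practice you would batch many events per transaction, or skip the explicit lookup and call mdb_put() with the MDB_NOOVERWRITE flag, treating an MDB_KEYEXIST return as a duplicate.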

😳😏

👀

💫👏🏻👑