Nostrdb uses LMDB, which is fast, but it’s not a graph database.

Discussion

how much do you want to bet a custom WoT algo on LMDB beats a generic graph database? I would put money on this

in terms of perf

If you want to limit the WoT algos to no more than one or MAYBE two traversals, and if you want the nostr user base to remain minuscule, then stick with LMDB.

If you want sophisticated WoT algos and if you want the nostr user base to grow, then graph databases will outperform LMDB.

we'll see!

Are you aware of the advantages of graph databases in terms of performance for certain categories of queries?

I am aware, and I can confidently say that an optimized KV store will outperform a graph DB, which has many more layers to traverse.

Because WoT is mostly BFS with some additions

it's hard to beat a few btree lookups hot in a compact binary cache
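To make the "few btree lookups" point concrete, here is a minimal sketch of a depth-limited WoT BFS over a key-value adjacency map. It's Python with a plain dict standing in for an LMDB btree; the `follows` layout is illustrative, not nostrdb's actual schema.

```python
from collections import deque

def wot_bfs(follows: dict[str, list[str]], root: str, max_depth: int) -> dict[str, int]:
    """Map each reachable pubkey to its hop distance from root (depth-limited BFS)."""
    dist = {root: 0}
    queue = deque([root])
    while queue:
        pk = queue.popleft()
        if dist[pk] >= max_depth:
            continue
        for followed in follows.get(pk, []):  # one btree lookup per visited node
            if followed not in dist:
                dist[followed] = dist[pk] + 1
                queue.append(followed)
    return dist

follows = {"alice": ["bob", "carol"], "bob": ["dave"], "carol": ["dave", "erin"]}
print(wot_bfs(follows, "alice", 2))
# {'alice': 0, 'bob': 1, 'carol': 1, 'dave': 2, 'erin': 2}
```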

Graph databases are all just memory pointers everywhere. There is no "lookup". The "btree" is the graph.

The btree is the index. The nodes and relationships are stored as records. Neo4j 5 uses a more efficient method than Neo4j 4, but essentially each node stores the exact storage location of the nodes it's related to, and you just jump your pointer to that location to read it; same with the relationship properties.

That's what index-free means. You can traverse the graph without using any indices. Which is also why no other database can compete in terms of actual graph performance.
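A toy contrast, with Python object references standing in for on-disk pointers (illustrative only, not Neo4j's actual record format):

```python
class Node:
    def __init__(self, name: str):
        self.name = name
        self.neighbors: list["Node"] = []  # direct references: the "pointer jump"

alice, bob = Node("alice"), Node("bob")
alice.neighbors.append(bob)

# Index-free adjacency: no lookup, just dereference.
assert alice.neighbors[0] is bob

# Index-based adjacency: neighbor IDs are resolved through an index
# (an O(log n) btree probe in a typical KV store).
by_id = {"alice": alice, "bob": bob}
adjacency = {"alice": ["bob"]}
assert by_id[adjacency["alice"][0]] is bob
```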

But I'm sure that the key value bros can do better 🤭

Index-free adjacency is the key. For sufficiently deep traversals, native graph databases like Neo4j have a theoretical advantage over non-native graph databases.

For a traversal of depth d in a graph with branching factor b (average degree), a native database’s time complexity is roughly O(b^d). But for a non-native db we have to add the index cost: O(b^d * log n) where n is the size of the index (e.g. n = the sum of nodes and edges).

A depth of 3-5 is where it becomes noticeable. WoT won’t work if we tie our hands with a limit on traversal depth.

log n isn’t as bad as you think

a lookup in an index of n = 1B entries is only about 2.5x as expensive as in one of n = 10k
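A back-of-envelope check of that figure (it works out to roughly 2.25x; the b, d, and n below are illustrative):

```python
import math

# Btree lookup cost grows with log(n): growing the index from 10k to 1B
# entries multiplies per-lookup cost by log(1e9)/log(1e4), not by 100,000.
ratio = math.log2(1e9) / math.log2(1e4)
print(f"{ratio:.2f}x")  # 2.25x per lookup

# Total traversal cost for branching factor b and depth d:
b, d = 100, 3
native = b ** d                    # O(b^d) pointer hops
indexed = b ** d * math.log2(1e9)  # O(b^d * log n) btree comparisons
print(native, round(indexed))      # 1000000 29897353
```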

now, what cost does this “index free” approach bring compared to indexes? it pushes the storage engine to add a 2nd translation layer for pointers and/or implement compactions with pointer rewriting

all while making your codebase more complicated for no reason

the act of calculating PageRank is significantly more expensive than a few DB lookups.
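For a sense of scale, a naive power-iteration PageRank touches every edge on every pass and usually runs for dozens of passes. A sketch (the damping factor, iteration count, and toy graph are illustrative; dangling nodes are ignored for brevity):

```python
def pagerank(edges: dict[str, list[str]], d: float = 0.85, iters: int = 20) -> dict[str, float]:
    """Naive power iteration: iters full passes over every edge."""
    nodes = set(edges) | {t for ts in edges.values() for t in ts}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        nxt = {n: (1.0 - d) / len(nodes) for n in nodes}
        for src, targets in edges.items():  # every edge, every pass
            for t in targets:
                nxt[t] += d * rank[src] / len(targets)
        rank = nxt
    return rank

edges = {"alice": ["bob"], "bob": ["carol"], "carol": ["alice", "bob"]}
print(pagerank(edges))  # ~20 * |E| updates vs. a handful of lookups
```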

I think that arguing about speed and efficiency is missing the point, and it's why all conversations about graph data go to shit.

Use the schema that fits the domain. If your data is graph data, use a dedicated graph database.

Neo4j is still the correct tool for Nostr. Because Nostr data is graph data. It may not win on the specific, narrow use cases you think up to dismiss it, but that doesn't matter to me. You need to reframe the way you think about data to see the power of graph representations. And everything you say tells me you can't do that.

Anything can be a graph database. Even relational databases are one form of graph. We don’t need large, unnecessarily complicated tools to solve the Nostr graph problem.

What does Neo4j bring to the table compared to standard Nostr filters that makes it so useful? That it makes complicated queries look cheap and easy?

A graph database is one of the most general types of database (even more general than an RDBMS) and is consequently one of the hardest to optimize.

It simply does not make sense for Nostr, where purpose-specific databases can allow for graph-like queries while being much more optimized for the use case (and unlock new capabilities because of that).

That you can place Nostr events in it in a form that is natural and organic. That you can let the data form itself into the structures that emerge organically from the network that creates it. And that it gives you the right tools to find things you couldn't imagine before you started.

That is why graph data is powerful, and why it outstrips tables and other databases in my mind. There's power in its inefficiency, because you can build on it indefinitely. You set yourself up to build things you can't imagine instead of constraining yourself to what you're building today.

A product is no good if it is too expensive to run or use.

What *can you really do* with graphs that you can’t do with REQs, even with multiple of them?

We'll have to see. Like I said, you have to let complex structures emerge before you decide what to do with them.

Native graph dbs can execute *performant* path queries that are *long and complicated*.

So what we have here is a tradeoff. IFA (the native graph db approach) avoids the O(log n) cost, and it scales well in the sense that query time doesn’t degrade with total graph size, only with the size of the subgraph queried. We weigh that against the storage complexities IFA introduces (translation layers and pointer rewrites), as you point out.

For apps that are traversal-dominant with more reads than writes, IFA wins those tradeoffs. And what we’re building is a knowledge graph curated with the assistance of your WoT. By its nature, it’s going to get bigger and more traversal-heavy as it gets more sophisticated. And the tasks that will really push its abilities are going to be read-heavy. (Imagine searching a knowledge graph to discover surprising but useful topological similarities between subgraph A and subgraph B. Also known as “making an analogy”.)

At least, that’s my current thought process. It’s worth keeping in mind that for write-heavy scenarios with short traversals, the tradeoffs may favor non-native graph databases. And there are non-native graph databases that are built on LMDB, like TuGraph and HelixDB.

But does this so-called “log n” scaling even matter in real-world numbers?

A performance decrease of only 3x in a minor part of the work (reading the graph, as opposed to analyzing it) across a scale increase of over 100,000x is insanely good, compared to all the compactions that will be needed and the tremendous write workload at that scale...

A billion users doesn’t mean n = a billion. If you’re looking at how people interact with each other, you’ll need a lot of indices for a lot of events.

It does mean 1 billion nodes in an index, though.

If I want to discover instances where Alice reacts to a kind 1 note authored by Bob, I’ll need indices for content in addition to indices for users.

Or suppose I want a chart of total zap amount in sats as a function of how far down a note is on a thread. Now suppose I only want to count zaps sent by frens (mutual follows) of the thread initiator. Or suppose I want a query even more complicated than that.

We’ll need indices for all content events, all zaps, all nostr events. Trillions.

You take all zap events and use some fast OLAP DB to crunch through this data instantly instead of wasting your time with a graph DB

This is a simple merge-join on two indexes, which is cheap
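A sketch of that merge-join: two indexes sorted on the same key, here hypothetical (note_id, sats) zap rows and (note_id, depth) thread rows, combined in one linear pass:

```python
def merge_join(left: list[tuple], right: list[tuple]):
    """Yield matching pairs from two lists sorted by their first element."""
    i = j = 0
    while i < len(left) and j < len(right):
        lk, rk = left[i][0], right[j][0]
        if lk < rk:
            i += 1
        elif lk > rk:
            j += 1
        else:
            j2 = j  # emit the cross product of the matching key run
            while j2 < len(right) and right[j2][0] == lk:
                yield left[i], right[j2]
                j2 += 1
            i += 1

zaps = sorted([("note1", 100), ("note1", 21), ("note3", 500)])  # (note_id, sats)
thread = sorted([("note1", 0), ("note2", 1), ("note3", 2)])     # (note_id, depth)
print(list(merge_join(zaps, thread)))
```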

Very specialized WoT would absolutely do better on a purpose-built stack.

Generic WoT that is flexible and user-friendly would do better on the only graph database worth using, which is Neo4j.