nostrdb is going to support a custom WoT table which is graphy i guess
Discussion
It would need to implement index free adjacency to be a graph db. Meaning an edge can be traversed from one node to the next directly in memory without referring to an index.
i don't really need a full blown graph db for wot though. just a few indices (followers of A, who A follows) for all A. plus a few other things
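Those two indices can be sketched as plain maps; a minimal sketch, assuming pubkeys are just strings and everything fits in memory:

```python
from collections import defaultdict

follows = defaultdict(set)    # follows[a]   = everyone a follows
followers = defaultdict(set)  # followers[a] = everyone who follows a

def add_follow(src, dst):
    # keep both directions in sync so either lookup is O(1)
    follows[src].add(dst)
    followers[dst].add(src)

add_follow("alice", "bob")
add_follow("carol", "bob")
```

With both directions maintained together, "followers of A" and "who A follows" are each a single key lookup.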
Follows are a weak trust signal, but it's better than nothing.
right, there would be more to the algo obviously. but just those two indices would already be really useful for lots of things. the full algo would be personalized pagerank + custom metrics (zaps, reactions, etc)
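A minimal personalized PageRank sketch via power iteration, assuming a simple out-edge map; the custom metrics (zaps, reactions) would become edge weights, omitted here:

```python
def personalized_pagerank(out_edges, seed, alpha=0.85, iters=50):
    """Power iteration; all teleport mass returns to the seed pubkey."""
    nodes = set(out_edges) | {v for vs in out_edges.values() for v in vs} | {seed}
    rank = {n: (1.0 if n == seed else 0.0) for n in nodes}
    for _ in range(iters):
        nxt = dict.fromkeys(nodes, 0.0)
        for n, r in rank.items():
            outs = out_edges.get(n, ())
            if outs:
                for v in outs:
                    nxt[v] += alpha * r / len(outs)
            else:
                nxt[seed] += alpha * r      # dangling node: mass back to seed
            nxt[seed] += (1 - alpha) * r    # teleport back to seed
        rank = nxt
    return rank

ranks = personalized_pagerank({"alice": {"bob"}, "bob": {"alice", "carol"}}, "alice")
```

Teleporting to the seed (rather than uniformly) is what makes it personalized: scores decay with distance from the seed's own follow graph.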
Have you thought about how to make those metrics available to other nostr clients? I'm thinking specifically of nostr:npub1gcxzte5zlkncx26j68ez60fzkvtkm9e0vrwdcvsjakxf9mu9qewqlfnj5z's Trusted Assertions NIP.
not really no
Are you maintaining a separate relational db so you can look up Alice's followers without cycling through every kind 3 note in your db?
no, that would be silly. i would process kind 3s as they come in and update the who-follows-who index
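A sketch of that incremental update, assuming kind 3 events replace the author's entire contact list (per NIP-02, via `p` tags) and hypothetical in-memory maps; `created_at` ordering is ignored for brevity:

```python
def process_kind3(event, follows, followers):
    # a kind 3 event replaces the author's whole contact list,
    # so diff the old set against the new one and patch both indices
    author = event["pubkey"]
    new = {tag[1] for tag in event["tags"] if tag[0] == "p"}
    old = follows.get(author, set())
    for dst in old - new:
        followers.get(dst, set()).discard(author)
    for dst in new - old:
        followers.setdefault(dst, set()).add(author)
    follows[author] = new

follows, followers = {}, {}
process_kind3({"pubkey": "alice", "tags": [["p", "bob"], ["p", "carol"]]}, follows, followers)
process_kind3({"pubkey": "alice", "tags": [["p", "bob"]]}, follows, followers)  # unfollows carol
```

Diffing old against new keeps the reverse index consistent without ever rescanning stored events.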
SurrealDB uses GraphRAG, which I'm not familiar with.
Suppose I wanted a list of all pubkeys 3 hops from Alice by follows. That would be a nightmare to implement in a relational database. In neo4j in my hands, it's very performant.
Even something more simple, like give me a list of all of Alice's followers, takes a long time if all you have is a bunch of events in strfry. But with neo4j you have it in a snap.
I'll be pleasantly surprised if GraphRAG can handle either of those queries performantly.
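For comparison, the same 3-hop query over a plain follows index is just a depth-limited breadth-first traversal; a sketch, assuming the index is an in-memory map:

```python
from collections import deque

def within_hops(follows, start, max_depth):
    # breadth-first traversal over the follows index, depth-limited
    seen = {start}
    frontier = deque([(start, 0)])
    found = set()
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_depth:
            continue
        for nxt in follows.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                found.add(nxt)
                frontier.append((nxt, depth + 1))
    return found

graph = {"alice": {"bob"}, "bob": {"carol"}, "carol": {"dave"}, "dave": {"erin"}}
reachable = within_hops(graph, "alice", 3)
```

Each hop is one index lookup per frontier node, so the cost is driven by the subgraph visited, not the total event count.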
Nostrdb uses LMDB which is fast, but it's not a graph database.
how much do you want to bet a custom WoT algo on lmdb beats a generic graph database? I would put money on this
in terms of perf
If you want to limit the WoT algos to no more than one or MAYBE two traversals, and if you want the nostr user base to remain minuscule, then stick with LMDB.
If you want sophisticated WoT algos and if you want the nostr user base to grow, then graph databases will outperform LMDB.
we'll see !
Are you aware of the advantages of graph databases in terms of performance for certain categories of queries?
I am aware, and I can confidently say that an optimized KV store will outperform a graph DB, which has many more layers to traverse.
Because WoT is mostly BFS with some additions
it's hard to beat a few btree lookups hot in a compact binary cache
Graph databases are all just memory pointers everywhere. There is no "lookup". The "btree" is the graph.
Btree is the index. The nodes/relationships are stored as records. Neo4j 5 uses a more efficient method than 4, but essentially the nodes store the exact memory location of the nodes they're related to, and you would just jump your pointer to that location to read it, same with the relationship props.
That's what index-free means. You can traverse the graph without using any indices. Which is also why no other database can compete in terms of actual graph performance.
But I'm sure that the key value bros can do better
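Index-free adjacency, reduced to a toy: neighbors are held as direct object references, so traversal is pure pointer chasing (a sketch of the idea only, not how neo4j actually lays out records on disk):

```python
class Node:
    # each node holds direct references to its neighbors,
    # so traversal never consults an index
    def __init__(self, name):
        self.name = name
        self.out = []  # direct pointers to neighbor Node objects

a, b, c = Node("a"), Node("b"), Node("c")
a.out.append(b)
b.out.append(c)

# two-hop traversal by pointer chasing, no lookups
two_hops = a.out[0].out[0].name
```

The tradeoff discussed below is that these raw references must survive storage-engine relocation, which is what forces the translation layers and pointer rewriting.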
Index free adjacency is the key. For sufficiently deep traversals, native graph databases like neo4j have a theoretical advantage over non-native graph databases.
For a traversal of depth d in a graph with branching factor b (average degree), a native database's time complexity is roughly O(b^d). But for a non-native db we have to add the index cost: O(b^d * log n), where n is the size of the index (e.g. n = the sum of nodes and edges).
A depth of 3-5 is where it becomes noticeable. WoT won't work if we tie our hands by a limit on traversal depth.
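Plugging illustrative numbers into that model (the branching factor and index size here are assumptions, not measurements):

```python
import math

b, d = 50, 3              # assumed average degree and traversal depth
n = 10**8                 # assumed index size (nodes + edges)

native = b ** d                   # O(b^d): raw pointer hops
indexed = native * math.log2(n)   # O(b^d * log n): each hop pays an index lookup

overhead = indexed / native       # the log n factor, ~26.6x in this model
```

So under this model the gap is a constant factor of log n per hop, not a change in how the cost grows with depth.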
log n isn't as bad as you think
traversing n = 1B is 2.5x as expensive as n = 10k
now, what cost does this "index free" approach bring compared to indexes? it pushes the storage engine to add a 2nd translation layer for pointers and/or implement compactions with pointer rewriting
all while making your codebase more complicated for no reason
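That 2.5x figure checks out roughly: the ratio of the two index costs is log(1e9) / log(1e4), and the log base cancels out:

```python
import math

# ratio of btree lookup depths for n = 1 billion vs n = 10 thousand
ratio = math.log(10**9) / math.log(10**4)  # == 9/4 regardless of log base
```

A 100,000x larger index costs only 2.25x more per lookup, which is the point being made about logarithmic scaling.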
the act of calculating PageRank is significantly more expensive than a few DB lookups.
I think that arguing about speed and efficiency is missing the point, and why all conversations about graph data go to shit.
Use the schema that fits the domain. If your data is graph data, use a dedicated graph database.
Neo4j is still the correct tool for Nostr. Because Nostr data is graph data. It may not outperform the specific, narrow use cases you think up to dismiss it, but that doesn't matter to me. You need to reframe the way you think about data to see the power of graph representations. And everything you say tells me you can't do that.
Anything can be a graph database. Even relational databases are one form of graph. We don't need large, unnecessarily complicated tools to solve the Nostr graph problem.
What does Neo4j bring to the table compared to standard Nostr filters that makes it so useful? That it makes complicated queries look cheap and easy?
A graph database is one of the most general types of databases (even more than RDBMS) and is by consequence one of the hardest to optimize.
It simply does not make sense for Nostr, where purpose-specific databases can allow for graph-like queries that are much more optimized for the use case (and unlock new capabilities because of that).
That you can place Nostr events in it in a form that is natural and organic. That you can let the data form itself into the structures that emerge organically by the network that creates it. And that it gives you the right tools to find things you couldn't imagine before you started.
That is why graph data is powerful, and why it outstrips tables and other databases in my mind. There's power in its inefficiency because you can build on it indefinitely. You set yourself up to build things you can't imagine instead of constraining yourself to what you're building today.
A product is no good if it is too expensive to run or use.
What *can you really do* with graphs that you can't do with REQs, even if multiple of them?
So what we have here is a tradeoff. IFA (native graph db) avoids the O(log n) cost. Also, it scales well in the sense that query time doesn't degrade with increasing total graph size, only with the subgraph queried. So we weigh that against the storage complexities introduced by IFA (translation layers and pointer rewrites), as you point out.
For apps that are traversal-dominant with more reads than writes, IFA wins those tradeoffs. And what we're building is a knowledge graph curated with the assistance of your WoT. By its nature, it's going to get bigger and more traversal-heavy as it gets more sophisticated. And the tasks that will really push its abilities are going to be read-heavy. (Imagine searching a knowledge graph to discover surprising but useful topological similarities between subgraph A and subgraph B. Also known as "making an analogy".)
At least, that's my current thought process. It's worth keeping in mind that for write-heavy scenarios with short traversals, the tradeoffs may favor non-native graph databases. And there are non-native graph databases that are built on LMDB, like TuGraph and HelixDB.
But does this so-called "log n" scaling even matter in real-world numbers?
A performance decrease of only ~2.5x, for a minor part of the work (reading the graph, as opposed to analyzing it), at a scale over 100,000x larger, is remarkably good, compared to all the compactions that will be needed and the tremendous write workload at that scale...
A billion users doesn't mean n = a billion. If you're looking at how people interact with each other, you'll need a lot of indices for a lot of events.
It still means 1 billion nodes in an index, though.
If I want to discover instances where Alice reacts to a kind 1 note authored by Bob, I'll need indices for content in addition to indices for users.
Or suppose I want a chart of total zap amount in sats as a function of how far down a note is on a thread. Now suppose I only want to count zaps sent by frens (mutual follows) of the thread initiator. Or suppose I want a query even more complicated than that.
We'll need indices for all content events, all zaps, all nostr events. Trillions.
You take all zap events and use some fast OLAP DB to crunch through this data instantly instead of wasting your time with a graph DB
This is a simple merge-join on 2 indexes, which is cheap.
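A merge-join sketch over two sorted index streams, assuming unique keys on each side; e.g. Alice's reactions keyed by the note id they target, joined against Bob's notes keyed by id:

```python
def merge_join(left, right):
    # both inputs sorted by key; single forward pass, no random lookups
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        lk, lv = left[i]
        rk, rv = right[j]
        if lk < rk:
            i += 1
        elif lk > rk:
            j += 1
        else:
            out.append((lk, lv, rv))
            i += 1
            j += 1
    return out

reactions = [("note1", "alice"), ("note3", "alice")]   # sorted by target note id
bobs_notes = [("note1", "gm"), ("note2", "gn")]        # sorted by note id
joined = merge_join(reactions, bobs_notes)
```

Because both inputs are already sorted, the join is linear in their combined length, with no per-row index probes.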
Very specialized wot would absolutely do better on a purpose built stack.
Generic wot that is flexible and user friendly would do better on the only graph database worth using, which is neo4j.