Your first paragraph is why I'm asking, and why I said you surely are NOT talking about global state.
I can't decipher your 2nd paragraph, hence my question remains 😅
Her point is that because it is indexed via unique hashes, if you crawl long and persistently enough, in theory you could find all the things.
At least, I think that is her point.
I would add that this 'unlimited time' thing is cheating, which makes it practically wrong. And I don't mean that invoking infinity is always cheating, but cheating in terms of trust decay.
I am also not making a data-availability argument, but one about distance in the social graph:
Data is meaningless; in order for it to be information you need context. The immediate context of an event is its signature, which brings us into this whole WoT thing. The problem is that signatures rest on the underlying assumption that the private key is indeed private, and that the wielder of a name is therefore consistent.
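To make the "signature is the immediate context" point concrete, here is a rough sketch of how an event's id and signature bind a note to a pubkey, along the lines of NIP-01; `schnorr_verify` is a hypothetical stand-in for a BIP-340 verification function (in practice you'd use a library for that), the rest is standard library:

```python
import hashlib
import json

def event_id(event: dict) -> str:
    """NIP-01: the id is the sha256 of the canonical serialization."""
    payload = [0, event["pubkey"], event["created_at"],
               event["kind"], event["tags"], event["content"]]
    serialized = json.dumps(payload, separators=(",", ":"), ensure_ascii=False)
    return hashlib.sha256(serialized.encode()).hexdigest()

def verify(event: dict, schnorr_verify) -> bool:
    """A valid signature only proves 'someone holding this key signed this';
    whether the key is still private is exactly the assumption in question."""
    if event_id(event) != event["id"]:
        return False
    # schnorr_verify(sig, message, pubkey) is a hypothetical BIP-340 verifier
    return schnorr_verify(event["sig"], event["id"], event["pubkey"])
```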
The problem is not getting data anymore; in the 'dead internet™' context the problem almost completely shifts to getting 'real' data. Relying fundamentally on the key assumption on the one hand, and on WoT on the other, distance in the social graph but also distance in time becomes an issue in a similar way:
It is more realistic to assume that keys will eventually get compromised within their lifetime than not. So if we imagine, let's say, a 100-year timeline (which compared to 'infinity' is not all that much, right?), we have generational distance to old notes and their associated keypairs. My point is that because of this integrity insecurity, you get a similar kind of trust decay as if those keys were distant from your social graph.
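Just to put a toy number on the "keys eventually get compromised" intuition (the yearly rate here is made up purely for illustration):

```python
# If a key has some small, independent chance of being compromised each year,
# the chance it survives a long horizon intact shrinks fast.
p_yearly = 0.02   # assumed 2% chance of compromise per year (illustrative only)
years = 100

p_compromised = 1 - (1 - p_yearly) ** years
print(f"P(compromised at least once in {years}y) ≈ {p_compromised:.2f}")  # ≈ 0.87
```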
In other words: yes, on this 'infinite timeline' you can have all the data, but that mostly means you have all the noise; and it's that same time component that undermines your ability to differentiate signal from noise. My point is that time is actually working against your efforts.
This is also why I keep hammering on the use of NIP-03 OpenTimestamps, which somewhat mitigates all of this.
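For reference, a NIP-03 attestation is just another event pointing at the note being timestamped. Roughly (from memory, so double-check against the NIP), it looks like the sketch below; the event id and relay URL are placeholders:

```python
import base64

# ots_proof would be the raw OpenTimestamps proof file covering the target event's id,
# produced by an OpenTimestamps client (placeholder bytes here)
ots_proof: bytes = b"..."

attestation = {
    "kind": 1040,  # NIP-03 OpenTimestamps attestation
    "tags": [
        ["e", "<id of the event being timestamped>", "<relay-url>"],
        ["alt", "opentimestamps attestation"],
    ],
    "content": base64.b64encode(ots_proof).decode(),
    # plus the usual pubkey / created_at / id / sig fields before publishing
}
```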
Yeah, we actually have an organic view of the system. You could crawl the entire network/graph you are on, given enough time (yes, that's cheating, but give me a minute), but you will never need to crawl that entire network because useful information tends to cluster, and nobody needs _all_ information, but rather _useful_ information. It's more a question of finding the appropriate jumping-in spot and exploring in a radius from there.
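A sketch of that "jump in and explore in a radius" idea: a bounded breadth-first crawl over contact lists, where `fetch_follows` is a hypothetical helper that would pull a pubkey's kind-3 contact list from whatever relays you use.

```python
from collections import deque

def crawl(seed_pubkey: str, fetch_follows, max_hops: int = 2) -> set[str]:
    """Explore the social graph outward from a seed, stopping at a fixed radius.
    fetch_follows(pubkey) -> list of followed pubkeys (e.g. from a kind-3 event)."""
    seen = {seed_pubkey}
    frontier = deque([(seed_pubkey, 0)])
    while frontier:
        pubkey, hops = frontier.popleft()
        if hops == max_hops:
            continue  # don't expand beyond the chosen radius
        for followed in fetch_follows(pubkey):
            if followed not in seen:
                seen.add(followed)
                frontier.append((followed, hops + 1))
    return seen
```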
This is essentially different from a centralized database, where you are forced to rattle through all the records until you find the appropriate one, so that having lots of records destroys efficiency or even ends in a time-out.
Furthermore, we expect similar information to cluster more and more over time, so the need to crawl the system will diminish over time, despite the total amount of information increasing.
This is, essentially, why we think keeping Nostr events small/modular will make the system more efficient, as it allows for more effective clustering.
Roger.
Agreed.
One question though: don't these central servers use fancy indexing/heuristic magic to optimize their processes, rather than brute-forcing their way through long lists of things?
That still leaves the fact that with Nostr, querying multiple relays distributes the compute regardless.
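For what it's worth, that distribution of compute is visible even in a trivial client: each relay runs the same filter against its own index in parallel, and the client only merges and dedupes. A minimal sketch (the relay URLs are placeholders, and it assumes the `websockets` package):

```python
import asyncio
import json
import websockets  # third-party package, assumed available

RELAYS = ["wss://relay.example.com", "wss://nos.example.net"]  # placeholder URLs

async def query_relay(url, nostr_filter, results):
    """Send a single NIP-01 REQ to one relay and collect events until EOSE."""
    try:
        async with websockets.connect(url) as ws:
            await ws.send(json.dumps(["REQ", "sub1", nostr_filter]))
            while True:
                msg = json.loads(await ws.recv())
                if msg[0] == "EVENT":
                    results[msg[2]["id"]] = msg[2]  # dedupe by event id
                elif msg[0] == "EOSE":
                    break
    except Exception:
        pass  # a dead relay just contributes nothing

async def query_all(nostr_filter):
    results = {}
    # each relay does its own index lookup concurrently; we only merge/dedupe
    await asyncio.gather(*(query_relay(u, nostr_filter, results) for u in RELAYS))
    return list(results.values())

# e.g. the last 50 text notes from one author:
# events = asyncio.run(query_all({"authors": ["<hex-pubkey>"], "kinds": [1], "limit": 50}))
```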
I was emphasizing the eventual consistency of the various Nostr graphs to underline the point that immediate consistency has little value within the graph, so long as you can crawl it. It's enough for the data to seep.