Replying to Silberengel

Yeah, we actually have an organic view of the system. You could crawl the entire network/graph you're on, given enough time (yes, that's cheating, but give me a minute), but you'll never need to crawl that entire network, because useful information tends to cluster, and nobody needs _all_ information, but rather _useful_ information. It's more a question of finding the appropriate jumping-in spot and exploring in a radius from there.
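The "radius from a jumping-in spot" idea is essentially a bounded breadth-first crawl over the follow graph. A minimal sketch, with a toy graph standing in for the real network (the names are illustrative, not real Nostr keys):

```python
from collections import deque

# Toy follow graph: each pubkey maps to the pubkeys it follows.
FOLLOWS = {
    "alice": ["bob", "carol"],
    "bob": ["dave"],
    "carol": ["dave", "erin"],
    "dave": ["frank"],
    "erin": [],
    "frank": [],
}

def crawl_radius(start, radius):
    """Breadth-first crawl out to `radius` hops from a jumping-in spot."""
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if dist == radius:
            continue  # don't expand beyond the radius
        for neighbor in FOLLOWS.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, dist + 1))
    return seen

# crawl_radius("alice", 1) reaches alice, bob, carol;
# radius 2 adds dave and erin, but never touches frank.
```

Because the useful stuff clusters, a small radius usually covers what you care about, and the cost is proportional to the neighborhood you explore, not to the size of the whole network.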

This is fundamentally different from a centralized database, where you're forced to rattle through all the records until you find the right one, so having lots of records destroys efficiency or even ends in a time-out.

Furthermore, we expect similar information to cluster more and more over time, so the need to crawl the system will diminish even as the total amount of information grows.

Constant 11mo ago

Roger.

Agreed.

One question, though: don't these central servers use fancy indexing heuristics to optimize their lookups, rather than brute-forcing their way through long lists of records?
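That hunch is right: production databases don't brute-force, they keep B-tree or hash indexes so a lookup touches only a handful of entries. A minimal sketch of the difference, using binary search over a sorted list as a stand-in for a B-tree index (the data is made up for illustration):

```python
import bisect

# A sorted "table" of even keys, standing in for an indexed column.
records = list(range(0, 1_000_000, 2))

def scan_lookup(key):
    """Brute force: walk every record until the key turns up."""
    for i, value in enumerate(records):
        if value == key:
            return i
    return -1

def indexed_lookup(key):
    """Binary search as a stand-in for a B-tree index: ~log2(n) probes."""
    i = bisect.bisect_left(records, key)
    if i < len(records) and records[i] == key:
        return i
    return -1

# Both find the same row, but the indexed path probes ~20 entries
# where the scan touches ~250,000 for a key in the middle of the table.
```

So the "rattle through all records" picture only applies to unindexed queries; the real contrast with a relay network is elsewhere, as the next point says.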

Still, that leaves the fact that with Nostr, querying multiple relays distributes the compute regardless.
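The distributed-compute point can be sketched too: the client fans the same query out to several relays in parallel, and each relay does its own small lookup. The relay names and contents here are invented for illustration; a real client would send a REQ with a filter over a websocket rather than call a local dict.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in: each "relay" answers a query independently.
RELAYS = {
    "relay-a": ["note1", "note2"],
    "relay-b": ["note2", "note3"],
    "relay-c": ["note4"],
}

def query_relay(name):
    """Stand-in for a REQ to one relay; the relay does its own lookup."""
    return RELAYS[name]

def query_all(relays):
    # Fan one query out to every relay in parallel; the work is
    # distributed across the relays instead of piling onto one server.
    with ThreadPoolExecutor() as pool:
        results = pool.map(query_relay, relays)
    # De-duplicate, since relays overlap in what they store.
    return sorted({note for batch in results for note in batch})
```

Even if each individual relay is less optimized than a big central index, no single machine has to hold or search the whole dataset.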
