That's fair, maybe that section could be re-worded. But it is an accurate, if incomplete problem statement. The current very naive use of the network is maximum duplication (looking at you, blastr). Reducing duplication by centralizing content is better in one dimension (resource allocation), but worse in another (redundancy). A good solution will balance those concerns.
The original motivation behind NIP 65 didn't grapple with the problem you're articulating (I brought this up in at least two TGFN podcasts last fall). But fiatjaf persuaded me that it basically doesn't matter, because the network will never be anywhere near fully distributed.
I think he's probably right about that for now, but even if the network only partially distributes as nostr scales, the number of connections will still be high.
Suppose we stick with just 600 relays (no growth), and only 10% of your follows choose their relays at random from that pool, while the other 90% all select the same single relay. If you follow 1000 people, that's still ~170 unique connections on average, without the network being very distributed at all.
I don't think so. If you have 1000 follows, and 900 of them are on one of 20 relays, that's 20 hubs + 100 self-hosted relays, a maximum of 120 connections. In practice you're probably closer to 60-80. Which is still a lot! So proxies are useful for reducing that number, but even 100 connections isn't prohibitive.
The 10% choose 2 relays at random from the pool, they don't self-host one each.
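To check the ~170 figure under those assumptions (600 relays, 1000 follows, 10% picking 2 relays at random, the rest all on one shared relay), the expected number of unique relays hit is a standard occupancy calculation. A minimal sketch, treating the 200 random picks as independent draws:

```python
import random

RELAYS = 600           # fixed pool, no growth
FOLLOWS = 1000
RANDOM_FRACTION = 0.10 # 10% of follows pick relays at random
RELAYS_PER_USER = 2    # each random chooser picks 2 relays

# Classic occupancy estimate: with d draws over R relays,
# E[unique] = R * (1 - (1 - 1/R)^d). Treating each user's 2 picks
# as independent draws slightly overcounts, but the error is tiny.
draws = int(FOLLOWS * RANDOM_FRACTION) * RELAYS_PER_USER  # 200
expected = RELAYS * (1 - (1 - 1 / RELAYS) ** draws)
print(round(expected))  # ~170, plus the 1 relay the other 90% share

# Monte Carlo sanity check (each user picks 2 distinct relays)
def simulate(trials: int = 2000) -> float:
    total = 0
    for _ in range(trials):
        chosen: set[int] = set()
        for _ in range(draws // RELAYS_PER_USER):
            chosen.update(random.sample(range(RELAYS), RELAYS_PER_USER))
        total += len(chosen)
    return total / trials
```

So the closed form lands right around 170 unique connections from the random 10% alone, which matches the number above.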
Perhaps you could get away with only one connection per user though, depending on how they labeled them.