The goal is not to randomly distribute events to the network. One of the things nostr got very right is not trying to optimize for unnatural edge cases.

The goal is censorship-resistance, which means a censored person should not become a second-class citizen. They should be able to move very easily to a new relay (or their own), and users interested in that pubkey should not even be able to tell that the person was censored.

And even if you are not concerned with censorship-resistance, niche/specialized relays not being second-class citizens and *just working* for users is a MASSIVE win for everybody.
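For context, the mechanism that makes this possible is relay-list discovery per NIP-65: a user publishes a kind-10002 event listing their relays, and clients read that pubkey's notes from the write relays it advertises, so a move to a new relay is picked up automatically. A minimal sketch of resolving write relays from such an event (the helper name and relay URLs are illustrative, not from any particular library):

```python
# Minimal sketch: extract write relays from a NIP-65 kind-10002 event.
# Per NIP-65, an "r" tag with no marker counts for both read and write.
def write_relays(relay_list_event: dict) -> list[str]:
    relays = []
    for tag in relay_list_event.get("tags", []):
        if tag[0] == "r" and (len(tag) == 2 or tag[2] == "write"):
            relays.append(tag[1])
    return relays

# Hypothetical event: if the user migrates, they publish a new kind-10002
# and followers' clients start reading from the new relay automatically.
event = {
    "kind": 10002,
    "tags": [
        ["r", "wss://my.new.relay"],           # read + write
        ["r", "wss://big.hub.relay", "read"],  # read only
    ],
}
print(write_relays(event))  # ['wss://my.new.relay']
```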


Discussion

NIP-65 doesn't even mention the word "censorship". The motivation section talks about centralization among large relay operators and duplication of events across the network.

Is it not reasonable to assume that at least part of the goal was to allow user events to spread out across different relays?

The more we spread the events, the worse gossip performs. Isn't that a problem? It seems weird to me that our solution for censorship resistance breaks down if too many people exercise the option.

That's fair, maybe that section could be reworded. But it is an accurate, if incomplete, problem statement. The current, very naive use of the network is maximum duplication (looking at you, blastr). Reducing duplication by centralizing content is better in one dimension (resource allocation) but worse in another (redundancy). A good solution will balance those concerns.

The original motivation behind NIP-65 didn't grapple with the problem you're articulating (I brought this up in at least two TGFN podcasts last fall). But fiatjaf persuaded me that it basically doesn't matter, because the network will never be anywhere near fully distributed.

I think he's probably right about that for now, but even if we only partially distribute as nostr scales, the number of connections will still be high.

Even if we stick with just 600 relays (no growth) and say only 10% of your follows choose their relays at random (instead of 100%), with the other 90% all selecting the same single relay: if you follow 1000 people, that's still ~170 unique connections on average, without the network being very distributed.

I don't think so. If you have 1000 follows and 900 of them are on one of 20 relays, that's 20 hubs + 100 self-hosted relays, a maximum of 120 connections. In practice you're probably closer to 60-80. Which is still a lot! So proxies are useful for reducing that number, but even 100 connections isn't prohibitive.

The 10% choose 2 relays at random from the pool; they don't self-host one each.
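For what it's worth, the ~170 figure checks out under that model (100 of 1000 follows each picking 2 relays uniformly at random from a pool of 600, the other 900 on one shared hub) as a standard occupancy calculation. A quick sketch, assuming the picks are independent and uniform:

```python
# Expected number of distinct relays hit by k independent uniform picks
# from a pool of n relays: each relay is missed by all k picks with
# probability (1 - 1/n)^k, so E[distinct] = n * (1 - (1 - 1/n)^k).
def expected_distinct(n: int, k: int) -> float:
    return n * (1 - (1 - 1 / n) ** k)

n_relays = 600
picks = 100 * 2   # 10% of 1000 follows, 2 random relays each
hub = 1           # the other 90% all sit on the same relay

print(hub + expected_distinct(n_relays, picks))  # ~171.2
```

Note that two random picks per follow actually touch more distinct relays (~170) than one self-hosted relay each (exactly 100), which is why these two estimates diverge.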

Perhaps you could get away with only one connection per user, though, depending on how they labeled their relays.