It's not redundancy. That's not what relays are.

Discussion

How is it not redundancy? If one connection dies, you're still connected via another. If one relay deletes your stuff, you're still hosted by another. It is textbook redundancy.

Lol. That's not how it works. It's how you want it to work. But reality is very different. Events go missing all the time. Relays are just temporary storage.

Of course if you're not paying for any relay, you have no guarantee of data persistence, but I think everything else I said is correct.

Nah, that only works for the bigger ones. If you pick 3 random relays to store your posts, you are going to have a bad experience most of the time. Operating a relay is very costly.

Very costly. Sounds like an Ethereum node.

Can we make it cheaper?

Sounds like it will centralize a lot.

Only 3 relays is not nearly enough. Thankfully Amethyst uses a lot more by default.

The only issue with those relays is they appear to be static. Once they're gone, my client stops working.

Would be better if the client found its way into Nostr through a list of relays found via DHT.
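
For illustration, a rough TypeScript sketch of that idea. The DHT part is entirely hypothetical; Nostr has no DHT discovery today, and `dhtLookup` is a made-up stand-in that just returns a hard-coded bootstrap list. Only the fallback loop reflects how clients connect now:

```typescript
// Hypothetical sketch only: `dhtLookup` is a made-up stand-in for whatever
// a real DHT query would be. Here it just yields a hard-coded list.
async function dhtLookup(key: string): Promise<string[]> {
  return ["wss://relay.damus.io", "wss://nos.lol", "wss://relay.nostr.band"];
}

// Walk the discovered list until some relay accepts a connection,
// instead of dying when a hard-coded relay disappears.
async function connectToAnyRelay(): Promise<WebSocket> {
  for (const url of await dhtLookup("nostr:relays")) {
    try {
      const ws = new WebSocket(url);
      await new Promise<void>((resolve, reject) => {
        ws.onopen = () => resolve();
        ws.onerror = () => reject(new Error(`unreachable: ${url}`));
      });
      return ws; // first live relay wins
    } catch {
      // dead relay: try the next candidate
    }
  }
  throw new Error("no reachable relay in the discovered list");
}
```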

3 relays should be all you need. Anything more is dumb and only there because everything fails all the time. Beyond 3, data-plan usage becomes extremely wasteful, with duplicated data being downloaded and processed everywhere. It's a huge battery drain.
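
To put a number on that duplication cost, a minimal sketch, assuming an event callback wired to every relay connection:

```typescript
// Rough sketch of the duplication cost: subscribe the same filter on N
// relays and the same event arrives up to N times. The client keeps one
// copy and throws the rest away, but the bandwidth is already spent.
const seen = new Set<string>(); // event ids already processed
let received = 0; // every copy that came over the wire
let kept = 0;     // copies that were actually new

function onEvent(ev: { id: string }) {
  received++;
  if (seen.has(ev.id)) return; // duplicate from another relay: pure overhead
  seen.add(ev.id);
  kept++;
  // ...hand the event to the UI / local store here
}
```

With 10 relays subscribed to the same filter, `received` tends toward 10× `kept`; the discarded copies are the bandwidth and battery cost described above.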

Being able to cope with "everything fails all the time" is what makes the network robust and unstoppable. I detect tones of centralization in your last note.

The network is absolutely centralized in the top 10 relays. Everything else (the other 4000 relays) just fails. Even the top 10 fail in many queries. But they fail less because of the massive investment they've made in infrastructure.

10 is a lot better than 3.

Lol. Sure. But I am not here to replace Twitter with 10 servers.

What are you here for?

At least 10M relays.

That number seems a little excessive. A distributed system should, by its very nature, be fragmented, but not excessively so.

What advantage would 10M have that 1M would not?

Are we still talking about relaying "tweets"?

Decentralization. That's my minimum to call Nostr a success. The ideal number is 2B relays.

Tor is decentralized. I don't think it has 10M nodes. Bitcoin is decentralized. I don't think it has 10M nodes either.

Tor sucks. We passed Bitcoin's database size last year with just 4k daily active users. You can't even compare the two. Nostr is orders of magnitude larger than any decentralized thing that exists today.

If you really want Nostr to be decentralized, you should stop asking people to care what relays they use. That should all happen automatically. If a relay dies, a new one should be fetched automatically from an automatically-updated list of relays that can never be exhausted. Just like how nobody cares about what peers they download from when downloading a torrent.
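
A rough sketch of what that could look like client-side, assuming some auto-updated relay list exists (the static array below is only a stand-in for it):

```typescript
// Sketch of torrent-style relay selection: the user never picks. When a
// relay dies, the client silently dials the next candidate from the list.
const knownRelays = [
  "wss://relay.damus.io",
  "wss://nos.lol",
  "wss://relay.snort.social",
  "wss://relay.nostr.band",
];

function maintainConnections(target: number, onOpen: (ws: WebSocket) => void) {
  let next = 0;
  const dial = () => {
    if (next >= knownRelays.length) return; // exhausted (a real list would refill)
    const ws = new WebSocket(knownRelays[next++]);
    ws.onopen = () => onOpen(ws);
    ws.onclose = () => dial(); // relay died: replace it automatically
    ws.onerror = () => ws.close();
  };
  for (let i = 0; i < target; i++) dial(); // keep `target` live connections
}
```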

Not really. Nostr is not BitTorrent. And never will be. There is no dynamic dissemination of events, no indexing, etc. People have to choose their servers. Or their events will be lost forever.

I don't disagree with the fact that people need to pay in order for their data to remain hosted long-term. I just think there needs to be a better way of incentivising relays to host data long term, without requiring users to manually pick and choose which relays get paid. Bitcoin users don't choose which miners are securing the network. It's a competition. Nostr users shouldn't have to care which particular relay is hosting their data at any given moment. It should likewise be a competition.

There are ways. But the core protocol doesn't implement any of them. Even if some NIPs implement it, all of the others will still require you to choose. Regardless of what anyone thinks, this was never part of the protocol and likely never will be.

A distributed system is literally fragmented to the maximum extent.

A decentralized system lies somewhere between centralized & distributed.

FYI

Regarding the waste and inefficiency, that's a valid concern. I imagine that's a problem that could be solved, though, by being smart about what you ask for from relays.
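
One example of that kind of smartness, sketched against NIP-01 (the per-relay bookkeeping map is my own illustration, not a standard mechanism):

```typescript
// Sketch of one "be smart about what you ask for" tactic: remember the
// newest event timestamp seen per relay and only request events since
// then, instead of re-downloading the same history on every connect.
const lastSeen = new Map<string, number>(); // relay url -> unix seconds

function subscribeIncremental(ws: WebSocket, relayUrl: string) {
  const since = lastSeen.get(relayUrl) ?? 0;
  // NIP-01 subscription: only kind-1 notes newer than what we already have
  ws.send(JSON.stringify(["REQ", "inc", { kinds: [1], since }]));
  ws.onmessage = (msg) => {
    const data = JSON.parse(msg.data.toString());
    if (data[0] === "EVENT") {
      const createdAt: number = data[2].created_at;
      if (createdAt > (lastSeen.get(relayUrl) ?? 0)) {
        lastSeen.set(relayUrl, createdAt);
      }
      // ...process the event
    }
  };
}
```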

No smartness can compensate for missing events that are supposed to be there.

Missing data shouldn't be consuming huge amounts of your data plan though, which I thought was the issue.

Filter requests are massive. And there is always some data coming back, so you just don't know how much has been deleted.
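
For reference, here is roughly what one of those filter requests looks like under NIP-01 (the pubkey below is a dummy value):

```typescript
// A NIP-01 subscription as sent over the wire. A real feed query carries
// one pubkey per followed account, so the filter alone can run to tens of
// kilobytes, and every relay streams back everything that matches.
const req = JSON.stringify([
  "REQ",
  "feed", // subscription id chosen by the client
  {
    kinds: [1], // kind 1 = text notes
    authors: [
      "e8b487c079b0f67c695ae6c4c2552a47f38adfa2533cc5926bd2c102942fdcb7",
      // ...hundreds more 64-char hex pubkeys in a typical follow list
    ],
    since: Math.floor(Date.now() / 1000) - 86400, // last 24 hours only
    limit: 500,
  },
]);
// This gets sent to every connected relay. Nothing in the reply marks
// events a relay dropped or deleted, which is the "missing events" problem.
```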