Replying to Colby Serpa

- Pablo is right in that it shouldn’t make a difference if the relay is malicious. nostr:npub18kzz4lkdtc5n729kvfunxuz287uvu9f64ywhjz43ra482t2y5sks0mx5sz The beauty of hashes and signatures is the trust-minimized verification they provide. File chunking helps too when combined with them, preventing delay attacks. 🔐

- Network overhead from opening and closing so many connections could become burdensome for clients, but I think it’s something we’ll eventually overcome. It’d be beautiful to bounce between such diverse sources of notes. 🗒️

- NIP-65 is a good start for encouraging clients to read from the relays other users write to, rather than only reading from the relays the client is currently writing to.
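For concreteness, here's a minimal sketch of that NIP-65 lookup, assuming the relay-list event has already been fetched. Per NIP-65, a kind 10002 event carries "r" tags with a relay URL and an optional "read"/"write" marker (no marker means both); the interface and function names below are illustrative, not from any particular client library:

```typescript
// Illustrative shape of a NIP-65 relay-list event (kind 10002).
interface RelayListEvent {
  kind: number;     // 10002 for a NIP-65 relay list
  tags: string[][]; // e.g. ["r", "wss://relay.example.com", "write"]
}

// Relays where this author WRITES, i.e. where their notes should be read from.
function writeRelays(event: RelayListEvent): string[] {
  return event.tags
    .filter((t) => t[0] === "r" && (t.length < 3 || t[2] === "write"))
    .map((t) => t[1]);
}
```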

I agree client implementations should be resilient against malicious relays, but I don’t live in the “should” world; I live in the real one, and we were asked why clients haven’t implemented gossip yet.

The “it’s something we will overcome” is the only answer I’ve ever gotten about connection overhead. I don’t think it’s an issue that can be solved with faith.

Discussion

We can't address these problems if we haven't taken first steps toward making them problems. They will always be theoretical and we won't feel the pain or have concrete solutions. Development is incremental.

Completely agree.

Relay traversal is an art, and we can only explore it by taking the first steps, not by sitting and theorizing about what it would feel like.

I didn’t ever suggest that we shouldn’t pursue solutions…

Jack asked why clients haven’t implemented gossip and I gave my opinion on the current state.

The connection overhead issue isn’t theoretical; it’s observable today with a fairly small network and total number of relays. The problem doesn’t improve with scale; it gets worse.

Exactly… people gloss over the details and jump to take sides so fast. I said it was burdensome to open and close many connections, and I agree that isn’t theoretical. Hopefully we can overcome it someday (soon), though.

Sure, but you are comparing that problem with the unworkable status quo of "just use these three relays and you'll be ok".

If we presuppose that that's not the end-state of nostr, then we need to compare the problem of opening more connections (outbox model) with the problem of NOT finding the events you are looking for (non-outbox model).

Missing events is a far worse UX than potentially slightly higher bandwidth usage.

This is the compromise bitcoin made; it's not the most efficient system, it's not trying to be. It's just trying to survive.

“Missing events is a far worse UX than a slightly worse bandwidth usage.”

100%

you wouldn't download an event

Opening and closing many websocket connections rapidly is more computationally costly than keeping a steady set open, but you’re right that it’d be the same amount of data / total number of notes across the #nostr network.

Amethyst misses events for me. I see them in Snort though 🥹

Yea I miss a bunch in Gossip, but there is a very real possibility that's because I am stupid.

There is a lot more middle ground than you’re suggesting. It’s not a binary choice between 3 relays and automatically connecting to any relay your follows tell you to.

I couldn’t figure out how the latter would scale a year ago, and when I naively asked then, I was met with the same talking points. I guess we are still in the same place.

The concept is much broader than NIP-65…

It’s either:

- Read where you write

Or

- Read where the post author writes

There are many nuanced variations & paths to reaching one or the other… but those are the 2 main paradigms I see.
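A rough sketch of the two paradigms as relay-selection logic (all types and names here are illustrative, not from any real client):

```typescript
// Illustrative type only; not from any real client library.
interface RelayList {
  read: string[];  // relays this user reads from
  write: string[]; // relays this user writes (publishes) to
}

// Paradigm 1: "read where you write" — query only your own configured relays.
function relaysToQuery(myRelays: RelayList): string[] {
  return myRelays.read;
}

// Paradigm 2: "read where the post author writes" (the outbox model) —
// for each followed author, query the relays THEY publish to, as
// advertised in their NIP-65 relay list.
function outboxPlan(follows: Map<string, RelayList>): Map<string, string[]> {
  const plan = new Map<string, string[]>();
  for (const [pubkey, list] of follows) {
    plan.set(pubkey, list.write); // fetch their notes where they write them
  }
  return plan;
}
```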

There may be lessons we can learn from libp2p and other P2P networks. I’m busy with GitNestr right now but afterwards I’ll start experimenting.

If anything, hardware and internet speed will eventually evolve to support it, on a long enough time horizon. :-)

nostr:note1ypm4g4vzhkyf6umdj2nklqxlvs4jyv3f55th5kzk2k8ac23633hsmm5wvv

Another aspect to keep in mind: given the current "publish your events widely, request events widely" approach, the duplication of reads and writes is far more demanding than necessary.

You read from the 10 relays you've configured in your client and you fetch the same event from 7, or all 10, of them.

Clients being smarter about where they read and write would mean downloading the same event from far fewer relays, with far lower resource requirements.
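That duplication is visible in the simplest client plumbing: events arriving from several relays share one id, and every copy after the first is wasted transfer. A minimal sketch (event shape per NIP-01; the rest is illustrative):

```typescript
// Events are content-addressed per NIP-01: the id is the sha256 of the
// serialized event, so copies from different relays are byte-identical.
interface NostrEvent {
  id: string;
  pubkey: string;
  kind: number;
  content: string;
}

const seen = new Set<string>();

// Returns true only for the first copy of an event; every duplicate
// fetched from another relay has already cost bandwidth by this point.
function acceptEvent(ev: NostrEvent): boolean {
  if (seen.has(ev.id)) return false;
  seen.add(ev.id);
  return true;
}
```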

Do any clients try multiple relays in series? It seems like everyone makes requests in parallel. But relays are usually pretty fast to reply, so if you chunked your relay set and requested from 3 relays at a time rather than all 10, you could get decent results with far less resource usage.
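A sketch of that chunking idea, assuming a hypothetical fetchFromRelay helper that does one REQ-until-EOSE round trip (stubbed here; only the chunking logic matters):

```typescript
// Hypothetical helper: open a websocket, send a REQ, collect events
// until EOSE, then close. Stubbed out for this sketch.
type Ev = { id: string };
async function fetchFromRelay(url: string, filter: unknown): Promise<Ev[]> {
  return []; // stub
}

// Query `chunkSize` relays at a time instead of all of them at once:
// chunks run in series, relays within a chunk run in parallel.
async function fetchChunked(relays: string[], filter: unknown, chunkSize = 3): Promise<Ev[]> {
  const seen = new Set<string>();
  const results: Ev[] = [];
  for (let i = 0; i < relays.length; i += chunkSize) {
    const chunk = relays.slice(i, i + chunkSize);
    const batches = await Promise.all(chunk.map((url) => fetchFromRelay(url, filter)));
    for (const batch of batches) {
      for (const ev of batch) {
        if (!seen.has(ev.id)) {
          seen.add(ev.id);
          results.push(ev);
        }
      }
    }
    // Could stop early here once enough events have come back,
    // skipping the remaining connections entirely.
  }
  return results;
}
```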

Exactly what I’ve been thinking too… off-the-top optimization^

I think the problem is the SSL setup/teardown, not the parallelism. But I haven't run performance tests; my research comes entirely from models, and my models predict... global warming and mobile phone warming

Yeah, I just think if some requests were serialized you might not have to open as many connections.

Could be. In any case, there is certainly scope for innovation in this area. I'm not really thinking about it because my client isn't for a phone.

I’ve got a few ideas for shortening the time to establish an SSL connection. Need to finish GitNestr first, then I’ll tinker with it.
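One established technique along those lines (an assumption about what's meant here) is TLS session resumption: cache the session ticket from the first handshake and present it on reconnect, skipping a full handshake. In Node.js it looks roughly like this:

```typescript
import * as tls from "node:tls";

// host -> cached TLS session ticket from a previous connection
const sessions = new Map<string, Buffer>();

function connectWithResumption(host: string, port = 443): tls.TLSSocket {
  const socket = tls.connect({
    host,
    port,
    servername: host,            // SNI
    session: sessions.get(host), // resume if we have a cached ticket
  });
  // Node emits 'session' with the ticket to reuse on the next connect.
  socket.on("session", (session) => sessions.set(host, session));
  return socket;
}
```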

Gossip client has settings for "Require approval before connecting to relay" and "Require approval before AUTHing to relay", and you can approve just once, or you can save that approval for every time. You can also go to the relay page and change your mind.

Love these settings - well thought out. I’d like to see more of this.

Gossip has the best relay management for Twitter style feeds. Running it, you can prove to yourself that it can work.

This approval doesn't seem to be working for me on the master branch, btw... I see it AUTHing, but it never asked me :)

Oh wait, sorry, I didn't have the boxes checked like I thought I did... 🤔 Working now.

Answer about connection overhead: I always agreed that this was a problem, but I've always said the solution: Don't do it on clients that can't handle it. Point them at client-proxies that handle it for them. We need servers that act as clients on behalf of phones, since phones can't handle the overhead of managing 50-odd SSL connections that potentially close and open often. This is still fully decentralized, scales, doesn't require copying events to hundreds of relays where they don't need to be, etc. It still is the "gossip model" or "outbox model" using NIP-65. It's not the *only* architecture to address this, but it seems the most natural to me.
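As a sketch of that shape (assuming Node.js and the ws package; the fan-out is deliberately naive, with no dedup, auth, or relay selection):

```typescript
import WebSocket, { WebSocketServer } from "ws";

// In a real proxy this set would be chosen per client via NIP-65 lookups.
const RELAYS = ["wss://relay.example.com", "wss://relay2.example.com"];

const server = new WebSocketServer({ port: 8080 });

server.on("connection", (phone) => {
  // The proxy, not the phone, holds the many upstream SSL connections.
  const upstream = RELAYS.map((url) => new WebSocket(url));

  // Fan each client frame (REQ / EVENT / CLOSE) out to every relay.
  phone.on("message", (data) => {
    for (const relay of upstream) {
      if (relay.readyState === WebSocket.OPEN) relay.send(data.toString());
    }
  });

  // Pipe every relay frame back down the phone's single connection.
  for (const relay of upstream) {
    relay.on("message", (data) => phone.send(data.toString()));
  }

  phone.on("close", () => upstream.forEach((r) => r.close()));
});
```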

Sorry, that’s lame; trusting middle-men proxies is no different from trusting mega relays. Haha, love you though for trying. We’ll figure out that raw outbox model on clients one day!

Unless you self-host the proxy, but most normies won’t do that, alas.

nostr:note1azpvv687c676rfvx0mfqanh4zz2ldfmq0cs9k9h3maqxnd4ltqas4yuhp8

Yeah, I was thinking of self-hosting, or hosting by your uncle. The trust-relationship issue may be a non-starter in too many cases.