There may be lessons we can learn from libp2p and other P2P networks. I’m busy with GitNestr right now but afterwards I’ll start experimenting.

If anything, hardware and internet speed will eventually evolve to support it, on a long enough time horizon. :-)

nostr:note1ypm4g4vzhkyf6umdj2nklqxlvs4jyv3f55th5kzk2k8ac23633hsmm5wvv

Discussion

Another aspect to keep in mind: given the current "publish your events widely, request events widely" approach, the duplication of reads and writes is far more demanding than necessary.

You read from the 10 relays you've configured in your client and fetch the same event from 7, or all 10, of them.

Clients being smarter about where to read and write would mean downloading the same events from far fewer relays, with far lower resource requirements.
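To make the duplication concrete: a typical client today opens the same subscription on every configured relay and deduplicates by event id after the fact, by which point the duplicate bytes have already crossed the wire. A minimal sketch, assuming a global WebSocket (browser, or Node 22+) and the NIP-01 wire format; `subscribeAll` is a hypothetical helper, not any particular client's code:

```typescript
// Dedupe-after-download: the wasteful pattern described above.
// Assumes a global WebSocket and NIP-01 messages (["REQ", ...], ["EVENT", ...]).
const seen = new Set<string>();

function subscribeAll(relays: string[], filter: object, onEvent: (ev: any) => void) {
  for (const url of relays) {
    const ws = new WebSocket(url);
    ws.onopen = () => ws.send(JSON.stringify(["REQ", "dup-demo", filter]));
    ws.onmessage = (msg) => {
      const [type, , event] = JSON.parse(msg.data as string);
      if (type === "EVENT" && !seen.has(event.id)) {
        seen.add(event.id); // first copy wins; later copies from other relays are discarded
        onEvent(event);
      }
    };
  }
}
```

Every duplicate still costs bandwidth and parsing; smarter relay selection avoids requesting it in the first place.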

Do any clients try multiple relays in serial? It seems like everyone makes requests in parallel. But relays are usually pretty fast to reply, so if you chunked your relay set and requested from 3 relays at a time rather than all 10, you could get decent results with far less resource usage.
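For what it's worth, chunking is straightforward to sketch. Assuming the same global WebSocket and NIP-01 format as above; `queryRelay`, the chunk size of 3, and the `wanted` threshold are all illustrative:

```typescript
// Query one relay: open a connection, send a REQ, collect events until EOSE
// (or a timeout, since not every relay sends EOSE promptly).
function queryRelay(url: string, filter: object, timeoutMs = 3000): Promise<any[]> {
  return new Promise((resolve) => {
    const ws = new WebSocket(url);
    const events: any[] = [];
    const done = () => { ws.close(); resolve(events); };
    const timer = setTimeout(done, timeoutMs);
    ws.onopen = () => ws.send(JSON.stringify(["REQ", "chunk-sub", filter]));
    ws.onerror = () => { clearTimeout(timer); done(); };
    ws.onmessage = (msg) => {
      const data = JSON.parse(msg.data as string);
      if (data[0] === "EVENT") events.push(data[2]);
      if (data[0] === "EOSE") { clearTimeout(timer); done(); }
    };
  });
}

// Try 3 relays at a time instead of all 10 at once; stop early once we
// have enough events, leaving the remaining relays untouched.
async function chunkedQuery(relays: string[], filter: object, chunkSize = 3, wanted = 20) {
  const events = new Map<string, any>(); // keyed by id, so duplicates merge for free
  for (let i = 0; i < relays.length; i += chunkSize) {
    const chunk = relays.slice(i, i + chunkSize);
    const batches = await Promise.all(chunk.map((url) => queryRelay(url, filter)));
    for (const ev of batches.flat()) events.set(ev.id, ev);
    if (events.size >= wanted) break;
  }
  return events;
}
```

The trade-off is latency: each extra chunk adds a round of waiting, which is why chunks of 3 rather than strictly serial requests probably strike the better balance.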

Exactly what I’ve been thinking too… off-the-top optimization^

I think the problem is the SSL setup/teardown, not the parallelism. But I haven't run performance tests; my research comes entirely from models, and my models predict... global warming and mobile phone warming.
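A hedged sketch of how to test that hypothesis without the models: time the WebSocket open, which includes the TCP and TLS handshakes, separately from the time the relay takes to answer a trivial REQ. The filter here is illustrative:

```typescript
// Rough timing sketch: connection setup vs. query time for one relay.
async function timeRelay(url: string): Promise<{ connectMs: number; queryMs: number }> {
  const t0 = performance.now();
  const ws = new WebSocket(url);
  await new Promise<void>((resolve, reject) => {
    ws.onopen = () => resolve();
    ws.onerror = () => reject(new Error(`failed to connect to ${url}`));
  });
  const t1 = performance.now(); // TCP + TLS handshakes complete
  ws.send(JSON.stringify(["REQ", "timing", { kinds: [1], limit: 1 }]));
  await new Promise<void>((resolve) => {
    ws.onmessage = (msg) => {
      if (JSON.parse(msg.data as string)[0] === "EOSE") resolve();
    };
  });
  const t2 = performance.now(); // relay finished answering
  ws.close();
  return { connectMs: t1 - t0, queryMs: t2 - t1 };
}
```

If connectMs dominates queryMs across relays, the setup/teardown theory holds.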

Yeah, I just think if some requests were serialized you might not have to open as many connections.

Could be. In any case, there is certainly scope for innovation in this area. I'm not really thinking about it because my client isn't for a phone.

I’ve got a few ideas for shortening the time to establish an SSL connection. Need to finish GitNestr first then I’ll tinker with it.
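One well-known technique in that area is TLS session resumption (no claim that it's among the ideas meant above): the client stores the session ticket from a previous connection and presents it on reconnect, skipping a full handshake round trip. A minimal sketch using Node's tls module:

```typescript
// TLS session resumption sketch. Node's tls module accepts a previously
// saved session via the `session` option; TLS 1.3 servers deliver tickets
// after the handshake through the 'session' event. Relay support varies.
import * as tls from "node:tls";

const tickets = new Map<string, Buffer>(); // host -> most recent session ticket

function connectResumable(host: string, port = 443): tls.TLSSocket {
  const socket = tls.connect({
    host,
    port,
    servername: host,           // SNI, required by most hosts
    session: tickets.get(host), // resume if we already have a ticket
  });
  socket.on("session", (session) => tickets.set(host, session));
  return socket;
}
```

Whether that helps in practice depends on how often the client reconnects to the same relays and on relay-side ticket support.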