Why can't we just open one subscription to two or three of each followed person's relays in order to build the default feed?

If that were possible, it would make implementing "outbox" even easier than building a client that just uses a static set of relays. Caching notes from each relay is easy, and it's easy to continue fetching from where your cache ends. Relay selection is easy too: no need to build complex filters that aggregate multiple requests, or to do fancy relay selection based on matching the people you follow to the relays they use. Everything is just one loop.
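A minimal sketch of that "one loop", assuming each follow entry carries its own write relays (e.g. from a kind:10002 relay list). The filter shape is NIP-01; `Follow`, `follows`, `storeInCache`, and `cacheNewestTimestamp` are illustrative names, not a real library API:

```typescript
type Follow = { pubkey: string; relays: string[] };

declare const follows: Follow[];
declare function storeInCache(event: unknown): void;
declare function cacheNewestTimestamp(pubkey: string): number;

function subscribeToFollow(follow: Follow, lastSeen: number) {
  // Two or three relays per person, one subscription each.
  for (const url of follow.relays.slice(0, 3)) {
    const ws = new WebSocket(url);
    ws.onopen = () => {
      // NIP-01 REQ: one author, resuming from where the local cache ends.
      ws.send(JSON.stringify([
        "REQ",
        `feed-${follow.pubkey.slice(0, 8)}`,
        { authors: [follow.pubkey], kinds: [1], since: lastSeen },
      ]));
    };
    ws.onmessage = (msg) => {
      const [type, _subId, event] = JSON.parse(msg.data);
      if (type === "EVENT") storeInCache(event); // dedup happens in the cache
    };
  }
}

for (const follow of follows) {
  subscribeToFollow(follow, cacheNewestTimestamp(follow.pubkey));
}
```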

Nostr is easy again, developers are happy. Even the dumbest web developer can build censorship-resistant Nostr clients for everything.

Discussion

Automatic client setup with minimal user guidance could work; many users, coming from legacy social networks, struggle to grasp the role relays play in building a feed. This has kept my family and friends off Nostr: they get overwhelmed as soon as they have to deal with relays.

What is the problem with connecting to "more relays"? Why is that a concern?

Too much mobile data for redundant information

Not if you're careful about deduplicating requests
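One way to read "careful deduplication", sketched under two assumptions: a per-relay high-water mark so a reconnect only requests what's missing, and event-id dedup (every Nostr event has a unique `id` per NIP-01) so the same note arriving from several relays costs one cache write. `saveEvent` is a hypothetical helper:

```typescript
const latestSeenAt = new Map<string, number>(); // relay URL -> newest created_at
const saved = new Set<string>();                // event ids already stored

declare function saveEvent(event: { id: string; created_at: number }): void;

function buildFilter(relayUrl: string, authors: string[]) {
  // Ask each relay only for what we haven't fetched from it yet,
  // instead of re-downloading the whole feed on every connect.
  return { authors, kinds: [1], since: latestSeenAt.get(relayUrl) ?? 0 };
}

function onEvent(relayUrl: string, event: { id: string; created_at: number }) {
  const mark = latestSeenAt.get(relayUrl) ?? 0;
  latestSeenAt.set(relayUrl, Math.max(mark, event.created_at));
  if (saved.has(event.id)) return; // same note from another relay: drop it
  saved.add(event.id);
  saveEvent(event);
}
```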

Sounds arbitrary. It's probably a security measure against a webpage making your computer DoS a million sites? I don't see why it would eat the battery to send the same amount of data over 400 connections rather than 200, and, well, battery is not a concern on desktop computers, while phone apps don't run in browsers (I imagine even webview-wrapped apps may not be subject to that limit). Not ideal, but I guess you could add a relay-picking algorithm that favors hubs over non-hubs when running in a browser.

Now assuming you have that, is there a reason why you wouldn't open 100 subscriptions on the same relay with one key each instead of a single subscription with 100 pubkeys?

If you can build a performant client that connects to 400+ relays when trying to load your following feed I’ll build a nostr church in your name.

I don't want a church, I want an explanation on why you think that is so absurd. The amount of data you'll download is the same and will depend on how many people you follow and what you have cached locally. Each new TCP connection is cheap. What am I missing?

The point you’re missing is that these are pessimists who spend their energy trying to prove that nothing can be done.

In terms of CPU, TCP connections are not that cheap, I believe.

It’s not about data. There are browser, OS (open files), and mobile per-app limits that prevent opening an unlimited number of websockets. Even in a perfect Nostr world with only reliable, high-performance relays, you will start to bump into these limits in most places as you exceed 200-300.

As we do not live in this perfect world, I suspect that managing hundreds of connections with varying reliability and response times, in a performant way and with an acceptable UX, will not be trivial either.

I just don't see why you would. Why wouldn't you just use Lisp and program everything using cons cells?

It is also a concern for relay bandwidth, I would argue. When every user uses fewer relays, the same number of relays can serve more users.

What is better for bandwidth?

- you connect to 10 relays and download 1MB from each

- you connect to 1 relay and download 10MB from it

The second one. I avoid wasting a bunch of bytes on the handshake and protocol upgrade. 😎

Definitely the second is better for downloading. But wouldn't every note, in an ideal world, use the minimum amount of storage, and therefore be saved on just one relay? Then every follower downloads it from there.

Then maybe instead of one I'd choose two or three for redundancy. And the download works like torrenting, as you explained, to lower download bandwidth.

That world is the same world we're living in now.

connect to 5 relays and download 2MB from each

Correct answer.

Let's imagine a world where the main feed is not sorted by creation date, but by the time the client received the note. If the client doesn't have to list notes by creation date, it gains the ability to schedule the retrieval of new notes: set up short-term subscriptions to X relays, then e.g. one minute later to Y relays, then to Z, and so on. This could be a tactic to avoid having hundreds of parallel open sockets.
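A rough sketch of that rotation, assuming a fixed batch size and a hypothetical `openShortSubscription` helper that connects, sends a REQ, collects events until EOSE (or a timeout), and closes the socket:

```typescript
declare function openShortSubscription(
  relayUrl: string,
  filter: object,
): Promise<{ id: string; created_at: number }[]>;

const feed: { event: { id: string }; receivedAt: number }[] = [];

async function rotate(relays: string[], filter: object, batchSize = 20) {
  for (let i = 0; i < relays.length; i += batchSize) {
    const batch = relays.slice(i, i + batchSize);
    const results = await Promise.all(
      batch.map((url) => openShortSubscription(url, filter)),
    );
    for (const event of results.flat()) {
      // Sort key is arrival time, not created_at, as proposed above.
      feed.push({ event, receivedAt: Date.now() });
    }
    // e.g. wait a minute before moving on to the next batch of relays
    await new Promise((resolve) => setTimeout(resolve, 60_000));
  }
}
```

At any moment only `batchSize` sockets are open, at the cost of notes from later batches arriving later, which is exactly the UX tradeoff discussed next.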

What would be the tradeoff from a UX perspective? You lose the ability to know where to find the most recently created notes, but does that actually matter? You're going to miss out on most notes anyway, unless you're a 24/7 doom scroller 😁.

The other tradeoff that comes to mind: if you want to read the replies to a note, you have to wait until the client has gone through all relays... unless we find a way to somehow tell the client where to look for the replies to a specific note 🤔.

nostr:nevent1qqsqqqq66w38vs2vuupqf39f3f035eny3rrfkh3sqt5q83ctsnlfg7gpzamhxue69uhhyetvv9ujuurjd9kkzmpwdejhgtczyqalp33lewf5vdq847t6te0wvnags0gs0mu72kz8938tn24wlfze6qcyqqqqqqg9fwcyn