Just posted something similar on the nostr:npub16fcy8ynknssdv7s487nh4p2h4vr3aun64lpfea45d7h4sts9jheqevshgh account. At this point relays need to stream to each other to replicate data across different geographic and legal jurisdictions. We are openly encouraging people to set up relays and stream to us. There is also a need to exfiltrate data from paid relays and from relays behind things like Cloudflare.

Have optimized strfry to run on 2-CPU / 4 GB RAM nodes for the latest iteration of our relay network, so it can scale easily.
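
For illustration only, here is a minimal sketch of relay-to-relay streaming at the NIP-01 level: subscribe on a source relay and re-publish whatever arrives to our own relay. strfry ships its own streaming tooling, so treat this as a picture of the idea rather than the actual setup; the relay URLs are placeholders.

```typescript
// Minimal relay-to-relay streaming sketch (placeholder URLs, no reconnect logic).
import WebSocket from "ws";

const SOURCE = "wss://source.relay.example"; // placeholder
const DEST = "wss://our.relay.example";      // placeholder

const src = new WebSocket(SOURCE);
const dst = new WebSocket(DEST);

src.on("open", () => {
  // Subscribe from "now" onward; a real replicator would persist the last
  // seen timestamp so it can resume after restarts.
  const since = Math.floor(Date.now() / 1000);
  src.send(JSON.stringify(["REQ", "mirror", { since }]));
});

src.on("message", (raw) => {
  const msg = JSON.parse(raw.toString());
  if (msg[0] === "EVENT" && dst.readyState === WebSocket.OPEN) {
    // Re-publish the signed event verbatim; the destination relay re-verifies
    // the signature, so nothing can be forged in transit.
    dst.send(JSON.stringify(["EVENT", msg[2]]));
  }
});

dst.on("message", (raw) => {
  const msg = JSON.parse(raw.toString());
  if (msg[0] === "OK" && msg[2] === false) {
    console.warn(`destination rejected ${msg[1]}: ${msg[3]}`);
  }
});
```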


Discussion

I think replicating relay data arbitrarily either breaks nostr or makes it way worse

So is it better to have a single server that is a single point of failure?

I think users (via clients) should design their own fault tolerance

basically I think this needs to be pushed to the edges

we are nowhere near close to being there; we still have so much work to do

Relays that want to aggregate can and will aggregate. I don't see how that will "break nostr or make it way worse" but it eventually won't scale. So long as it is "pull" and not "push", and so long as users don't become comfortable using just a few centralized relays.

Relays that don't want copied events should require AUTH for writing and only accept events from the event's author... they don't have to require payment, just authenticate who is pushing the event.
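
A minimal sketch of that write policy, assuming the relay keeps the NIP-42-authenticated pubkey per connection; the names here are illustrative, not any particular relay's API.

```typescript
// Hedged sketch: accept a publish only from the event's own author,
// after a NIP-42 AUTH handshake. Names are illustrative.
interface NostrEvent {
  id: string;
  pubkey: string;
  kind: number;
  created_at: number;
  tags: string[][];
  content: string;
  sig: string;
}

interface Connection {
  authedPubkey?: string; // set only after a valid NIP-42 AUTH handshake
}

function allowWrite(conn: Connection, event: NostrEvent): { ok: boolean; reason?: string } {
  if (!conn.authedPubkey) {
    return { ok: false, reason: "auth-required: AUTH before publishing" };
  }
  if (conn.authedPubkey !== event.pubkey) {
    return { ok: false, reason: "restricted: this relay only accepts events pushed by their author" };
  }
  return { ok: true }; // no payment required, just proof of authorship
}
```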

I agree it eventually won't scale. When it falls over will be decided by whichever big/popular relay abruptly goes offline first. Perhaps some of the popular relays need failover DNS entries pointing to those aggregators, or some other form of redundancy while still having one server?

With more potential influxes of disgruntled Twitter users, this could give nostr a black eye.

Has the world wide web ever gone offline because the server that runs it went down? No, because there is no giant aggregating server that hosts all the web pages which everybody has centralized around. Content is distributed to many web servers. A server may go down, but you only lose access to that server's content.

Nostr gets the benefit of that model -- scaling across many relays -- but is better because content is redundant across multiple servers. A relay or two go down, you still probably find what you need. All nostr needed (and now has, at least for the microblogging case) is a way for clients to know which relays host the content they are looking for.

I hope clients are ready for a massive influx of users (i.e., ready to consult Relay Lists to find people's posts). Then we will only need to stand up more relays as the crowd comes in, make them known somehow to give people more options of where to post, and it will scale indefinitely.
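As a rough sketch of what that lookup looks like for a client, assuming Relay Lists are NIP-65 kind 10002 events and using placeholder relay URLs and helper names:

```typescript
// Sketch: find someone's posts via their Relay List (kind 10002).
// URLs are placeholders; no timeouts or error handling for brevity.
import WebSocket from "ws";

interface NostrEvent {
  id: string; pubkey: string; kind: number; created_at: number;
  tags: string[][]; content: string; sig: string;
}

// Open a relay, send one REQ, collect events until EOSE.
function fetchEvents(relay: string, filter: object): Promise<NostrEvent[]> {
  return new Promise((resolve) => {
    const ws = new WebSocket(relay);
    const events: NostrEvent[] = [];
    ws.on("open", () => ws.send(JSON.stringify(["REQ", "q", filter])));
    ws.on("message", (raw) => {
      const msg = JSON.parse(raw.toString());
      if (msg[0] === "EVENT") events.push(msg[2]);
      if (msg[0] === "EOSE") { ws.close(); resolve(events); }
    });
    ws.on("error", () => resolve(events));
  });
}

async function findPosts(pubkey: string): Promise<NostrEvent[]> {
  // 1. The Relay List tells us where this person writes.
  const [relayList] = await fetchEvents("wss://indexer.relay.example",
    { kinds: [10002], authors: [pubkey], limit: 1 });
  const writeRelays = (relayList?.tags ?? [])
    .filter((t) => t[0] === "r" && (t[2] === undefined || t[2] === "write"))
    .map((t) => t[1]);

  // 2. Pull recent notes from a couple of those relays.
  const results = await Promise.all(
    writeRelays.slice(0, 2).map((r) => fetchEvents(r, { kinds: [1], authors: [pubkey], limit: 20 }))
  );
  return results.flat();
}
```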

The reason I started the Relayable project was seeing people tell Jack they couldn't see his notes when an app's associated relay was down for updates. Even now I see this as an issue. If one of these relays goes offline, it would be like AWS going down in your Internet comparison. Yeah, things still work, but it's not very usable.

I am moving to a dedicated server on an independent host for this very reason. Oracle (or any major cloud) going down is likely to take out too many relays. Looking forward to checking out nostr:npub1rhaxmxxgs5jkw4zesrptnwd5nrknh5lkaxnlzzegz9vphlgfdlqsjmej47 eventually. Geographical diversity is next on my list.

I am bringing up One Wilshire this week. You are top of the list. It is where the Asia undersea cables terminate. Montreal is already up and testing.

I'm ready! Take my money! :)

Those apps are doing it wrong.

One of the things I've noticed about the Gossip model is that clients spend a lot of time trying to connect to relays that are down. How would this work well on mobile, where connections tend to be expensive?

I've recently changed the penalty box timeouts depending on circumstance. "Forbidden" for example won't be retried for 86400 seconds. It still probably needs more refinement.
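
For a picture of what that looks like, here is a rough condition-dependent penalty box sketch. Gossip itself is written in Rust; apart from the 86400-second figure quoted above, the timeout values and names here are made up.

```typescript
// Sketch of a penalty box whose retry delay depends on why the relay failed.
type FailureKind = "forbidden" | "connection-refused" | "timeout" | "other";

const PENALTY_SECONDS: Record<FailureKind, number> = {
  forbidden: 86_400,            // quoted above; don't retry a relay that told us to go away
  "connection-refused": 3_600,  // illustrative
  timeout: 600,                 // illustrative
  other: 120,                   // illustrative
};

const penaltyBox = new Map<string, number>(); // relay url -> unix time when retries resume

function recordFailure(relay: string, kind: FailureKind): void {
  penaltyBox.set(relay, Math.floor(Date.now() / 1000) + PENALTY_SECONDS[kind]);
}

function canTry(relay: string): boolean {
  const until = penaltyBox.get(relay);
  return until === undefined || Date.now() / 1000 >= until;
}
```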

If mobile phones simply can't be full-fledged clients under the gossip model (and I don't know to what extent that is actually true), then there is a case for a client proxy that they offload to. Now we can talk about the architecture of such a thing, how it differs from an aggregating relay, and what exactly Relayable is... I think a client proxy should operate under the gossip model, finding the relays to search on behalf of the actual clients, and the clients can do their simple configuration connecting to 3 or 4 of these client proxies, not needing to use the gossip model themselves.

I'm less interested in this model for the same reason I'm not terribly interested in custodial bitcoin... you can do it, but don't complain to me if you lose all your money or when you configure just one client proxy and it happens to be down.
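
A very rough sketch of that client-proxy shape, assuming simple clients speak plain NIP-01 to the proxy and the proxy fans each REQ out to relays it has resolved on their behalf. Relay selection is stubbed out, and the port and URLs are placeholders.

```typescript
// Client-proxy sketch: one socket from the phone in, many relay sockets out.
import WebSocket, { WebSocketServer } from "ws";

// Stub: a real proxy would resolve this from kind-10002 Relay Lists (gossip model).
function relaysForFilter(_filter: unknown): string[] {
  return ["wss://relay-a.example", "wss://relay-b.example"]; // placeholders
}

const server = new WebSocketServer({ port: 7777 }); // placeholder port

server.on("connection", (client) => {
  const upstreams: WebSocket[] = [];

  client.on("message", (raw) => {
    const msg = JSON.parse(raw.toString());
    if (msg[0] !== "REQ") return; // CLOSE/EVENT handling omitted
    const [, subId, filter] = msg;
    for (const url of relaysForFilter(filter)) {
      const up = new WebSocket(url);
      upstreams.push(up);
      up.on("open", () => up.send(JSON.stringify(["REQ", subId, filter])));
      // Forward everything back to the simple client; dedup and combined
      // EOSE handling are omitted for brevity.
      up.on("message", (data) => client.send(data.toString()));
    }
  });

  client.on("close", () => upstreams.forEach((u) => u.close()));
});
```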

Thanks for the insightful answer. I also think a hybrid model will be necessary for mobiles. Even without the gossip model, Nostr is very energy intensive on cellular phones. On Desktop this is much less of an issue.

There needs to be a better solution for mobiles long term though. Mobile is going to be the way 90% of people connect to Nostr.

Signature verification is another thing that could be offloaded to a trusted proxy.

> If mobile phones simply can't be full-fledged clients under the gossip model (and I don't know to what extent that is actually true)

The thought occurred to me that maybe this idea that mobile phones can't handle lots of network connections came about because of Firebase: Google routes all apps through one channel to save battery. But that isn't because multiple connections devastate the battery... it is because multiple connections waking the phone up all the damn time when you are not using it devastate the battery. A client that makes many connections while you are actively using it doesn't seem to me like it would be a significant drain on the battery. The amount of data processed is the same whether it all flows through one network connection or through 100 connections each carrying a 100th of the amount (neglecting the overhead, which is probably quite significant, but probably a factor of less than 2). I could be completely wrong, but that thought just occurred to me.

The problem is that TX/RX is costly (energy-wise), and they receive the same events N times, where N is the number of relays they share with the poster. AFAIK there really isn't a way in the protocol to deduplicate server-side in any meaningful way.

The way I do it, they receive the same event 2 times. My client only subscribes to person X's events on 2 of their relays (configurable, but that is the default).

But yes traffic TX/RX is costly. My desktop client has read 26 megabytes since I started it a few hours ago. Other clients I've heard of download gigabytes during that same amount of time. I think there is a lot of tuning and optimization that can be done even in my client, but also in the protocol (things like COUNT).
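
A small sketch of those two points, i.e. subscribing to an author on only 2 of their relays and deduplicating by event id on the client; the constant and names are illustrative.

```typescript
// Cap per-author subscriptions and drop duplicate copies by event id.
interface IncomingEvent { id: string; pubkey: string; }

const RELAYS_PER_AUTHOR = 2; // configurable default described above

// Pick which of an author's write relays to actually subscribe on.
function pickRelays(writeRelays: string[]): string[] {
  return writeRelays.slice(0, RELAYS_PER_AUTHOR);
}

// The same signed event arriving from both relays is processed only once.
const seen = new Set<string>();
function handleIncoming(ev: IncomingEvent): boolean {
  if (seen.has(ev.id)) return false; // duplicate copy, ignore
  seen.add(ev.id);
  return true; // first copy, process it
}
```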

I think I’ll capture some data on that. You’ve made me curious.

What approach are you going to take to capture this data?

This is exactly what I implemented back in January when I was in Thailand and my data was super expensive.

Mobile connected to my relay proxy at home, with the relay proxy following relay hints (I wasn't aware of the gossip model then, so just relay hints, no kind 10002)

It works really well

Exactly, these apps need to implement the gossip model and relay hints need to be more ubiquitous; the solution is not making the data more widely available by syncing between relays or by using something like blastr.

Like I mentioned last night: this is largely not the case and we still have a lot of work to do.

Providing gossip-type behavior is NDK's whole reason for being, because the path of least resistance for micro apps is to just connect to two or three big relays and be done with it.

> Relays that want to aggregate can and will aggregate. I don't see how that will "break nostr or make it way worse" but it eventually won't scale. So long as it is "pull" and not "push", and so long as users don't become comfortable using just a few centralized relays.

You listed some of my concerns yourself here

nostr:note19ws99xghrzj8cd67cg77qy4dqae6snfh7vdtk0m6cav3f8epuuzs26yrq6