The reason I started the Relayable project was seeing people tell Jack they couldn't see his notes when an app's associated relay was down for updates. Even now I see this as an issue. One of these relays going offline would be like AWS going down, to use your Internet comparison: things still work, but they aren't very usable.


Discussion

I am moving to a dedicated server on an independent host for this very reason. Oracle (or any major cloud) going down is likely to take out too many relays. Looking forward to checking out nostr:npub1rhaxmxxgs5jkw4zesrptnwd5nrknh5lkaxnlzzegz9vphlgfdlqsjmej47 eventually. Geographical diversity is next on my list.

I am bringing up One Wilshire this week. You are at the top of the list. It is where the Asia undersea cables terminate. Montreal is already up and testing.

I'm ready! Take my money! :)

Those apps are doing it wrong.

One of the things I've noticed about the gossip model is that clients spend a lot of time trying to connect to relays that are down. How would this work well on mobile, where connections tend to be expensive?

I've recently changed the penalty box timeouts depending on circumstance. "Forbidden" for example won't be retried for 86400 seconds. It still probably needs more refinement.
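A rough sketch of the per-circumstance penalty box described above (the class and most timeout values are illustrative assumptions; only the 86400-second "Forbidden" figure comes from the post):

```python
import time
from typing import Optional

# Hypothetical penalty box: a relay that fails is benched for a time
# that depends on why it failed. Only the 86400s "Forbidden" timeout
# is from the post; the other reasons and values are made up.
PENALTY_SECONDS = {
    "forbidden": 86400,  # auth-style rejection: don't retry for a day
    "timeout": 300,      # transient network trouble: retry in 5 minutes
    "other": 3600,       # unknown failure: retry in an hour
}

class PenaltyBox:
    def __init__(self) -> None:
        self._benched: dict[str, float] = {}  # relay url -> earliest retry time

    def penalize(self, url: str, reason: str, now: Optional[float] = None) -> None:
        now = time.time() if now is None else now
        self._benched[url] = now + PENALTY_SECONDS.get(reason, PENALTY_SECONDS["other"])

    def may_connect(self, url: str, now: Optional[float] = None) -> bool:
        now = time.time() if now is None else now
        return now >= self._benched.get(url, 0.0)
```

A client would check `may_connect` before dialing, so dead relays aren't hammered on every reconnect loop.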

If mobile phones simply can't be full-fledged clients under the gossip model (and I don't know to what extent that is actually true), then there is a case for a client proxy that they offload to. Then we can talk about the architecture of such a thing, how it differs from an aggregating relay, and what exactly Relayable is... I think a client proxy should operate under the gossip model, finding the relays to search on behalf of the actual clients; the clients keep a simple configuration, connecting to 3 or 4 of these client proxies, and don't need to run the gossip model themselves.

I'm less interested in this model for the same reason I'm not terribly interested in custodial bitcoin... you can do it, but don't complain to me if you lose all your money, or when you configure just one client proxy and it happens to be down.

Thanks for the insightful answer. I also think a hybrid model will be necessary for mobile. Even without the gossip model, Nostr is very energy-intensive on cellular phones. On desktop this is much less of an issue.

There needs to be a better solution for mobiles long term though. Mobile is going to be the way 90% of people connect to Nostr.

Signature verification is another thing that could be offloaded to a trusted proxy.

> If mobile phones simply can't be full-fledged clients under the gossip model (and I don't know to what extent that is actually true)

The thought occurred to me that maybe this idea that mobile phones can't handle lots of network connections came about because of Firebase: Google routes all apps through one channel to save battery. But that isn't because multiple connections devastate the battery... it is because multiple connections waking the phone up all the damn time when you are not using it devastate the battery. A client that makes many connections while you are actively using it doesn't seem like it would be a significant drain. The amount of data processed is the same whether it all flows through one network connection or through 100 connections each carrying a hundredth of it (neglecting the overhead, which is probably quite significant, but probably a factor of less than 2). I could be completely wrong, but that thought just occurred to me.

The problem is that TX/RX is costly (energy-wise), and they receive the same events N times, where N is the number of relays they share with the poster. AFAIK there really isn't a way in the protocol to deduplicate server-side in any meaningful way.

The way I do it, they receive the same event 2 times. My client only subscribes to person X's events on 2 of their relays (configurable, but that is the default).
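A minimal sketch of that idea (illustrative Python, not the actual client's code): cap subscriptions at two relays per author, and drop duplicate event ids client-side.

```python
# Illustrative sketch only: subscribe to each author on at most 2 of
# their advertised relays, and deduplicate incoming events by id.
MAX_RELAYS_PER_AUTHOR = 2

def pick_relays(author_relays: list[str], limit: int = MAX_RELAYS_PER_AUTHOR) -> list[str]:
    # A real client would rank relays by past reliability; here we
    # just take the first `limit` the author advertises.
    return author_relays[:limit]

class Deduper:
    def __init__(self) -> None:
        self._seen: set[str] = set()

    def accept(self, event_id: str) -> bool:
        """Return True the first time an event id is seen, False after."""
        if event_id in self._seen:
            return False
        self._seen.add(event_id)
        return True
```

With two relays per author, the worst case is one redundant copy per event, versus N copies when subscribing everywhere.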

But yes, traffic TX/RX is costly. My desktop client has read 26 megabytes since I started it a few hours ago. Other clients I've heard of download gigabytes in the same amount of time. I think there is a lot of tuning and optimization that can be done, in my client but also in the protocol (things like COUNT).

I think I’ll capture some data on that. You’ve made me curious.

What approach are you going to take to capture this data?

This is exactly what I implemented back in January when I was in Thailand and my data was super expensive.

Mobile connected to my relay proxy at home, with the relay proxy following relay hints (I wasn't aware of the gossip model then, so just relay hints, no kind-10002 relay lists).

It works really well
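For readers unfamiliar with relay hints: they ride along in a note's "e" and "p" tags, whose third position may carry a relay URL (per NIP-10). A rough sketch of a proxy pulling them out (the event structure here is illustrative):

```python
# Sketch: extract relay hints from a Nostr event's tags. "e" and "p"
# tags may carry a relay URL in position 2 (NIP-10 style). The sample
# event in the test is made up.
def relay_hints(event: dict) -> set[str]:
    hints = set()
    for tag in event.get("tags", []):
        if len(tag) >= 3 and tag[0] in ("e", "p") and tag[2].startswith("wss://"):
            hints.add(tag[2])
    return hints
```

The proxy can then fetch the referenced events from those hinted relays on the mobile client's behalf.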

Exactly. These apps need to implement the gossip model, and relay hints need to be more ubiquitous; the solution is not to make the data more widely available by syncing between relays or by using something like blastr.

Like I mentioned last night: this is largely not the case and we still have a lot of work to do.

Providing gossip-type behavior is NDK's whole reason for being, because the path of least resistance for micro apps is to just connect to two or three big relays and be done with it.