wss://nostr-pub.wellorder.net is pumping out >100mbits/sec of nostr traffic right now.

Discussion

Wow

... remembering this milestone at Twitter?

My relay is doing 40Mb/s serving about 400 clients. Something's not efficient here yet.
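(For scale, and assuming the traffic is spread evenly: 40 Mb/s across ~400 clients averages out to roughly 100 kbit/s of sustained event traffic per connected client.)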

Yes, clients (astral) are prone to requesting a ton of events, often multiple times.  I think they could be much more efficient.

I want accounting! I'm still trying to hire people to help implement that vision in what I now think is the best relay implementation (sorry #[1]): nostream.

#[2] how is the friend-of-a-friend thing going? I might have somebody to work on it. 5 freelancers said no. The 6th said he liked nostr.

After that, I want to require authentication for all connected clients and measure (see the sketch after this list):

* how many requests

* how many bytes sent

* how many bytes received

* how many ms it took to reach EOSE (or, if the DB can tell me, the complexity of the query ...)

* how long the socket was open

* how many filters are kept in memory, watching for new events

* ...
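
Something like this minimal sketch, assuming clients are identified by a NIP-42-authenticated pubkey (all names are illustrative, not nostream's actual internals):

```typescript
// Hypothetical per-client accounting counters; an illustration of the
// metrics above, not real nostream code.
interface ClientUsage {
  pubkey: string          // NIP-42-authenticated identity
  requests: number        // REQ messages received
  bytesSent: number       // EVENT/EOSE/NOTICE frames sent to the client
  bytesReceived: number   // frames received from the client
  eoseMillis: number      // cumulative time from REQ to EOSE
  connectedAtMs: number   // to derive how long the socket was open
  openFilters: number     // filters currently kept in memory
}

const usage = new Map<string, ClientUsage>()

function track(pubkey: string): ClientUsage {
  let u = usage.get(pubkey)
  if (!u) {
    u = {
      pubkey, requests: 0, bytesSent: 0, bytesReceived: 0,
      eoseMillis: 0, connectedAtMs: Date.now(), openFilters: 0,
    }
    usage.set(pubkey, u)
  }
  return u
}

// e.g. on an incoming REQ carrying two filters:
const u = track('npub1...')
u.requests += 1
u.openFilters += 2
```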

Anti-spam measures will be introduced with the premium-tier accounts, their follows, their follows' follows, etc.
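
A minimal sketch of that follow-graph expansion, assuming a hypothetical `fetchContactList` helper that returns the pubkeys "p"-tagged in a user's latest kind-3 contact list (NIP-02):

```typescript
// Sketch of expanding an allowlist two hops along kind-3 contact lists.
// `fetchContactList` is a hypothetical helper, not a real library call.
async function expandAllowlist(
  premium: string[],
  fetchContactList: (pubkey: string) => Promise<string[]>,
): Promise<Set<string>> {
  const allowed = new Set<string>(premium)
  // Hop 1: follows of premium accounts.
  const follows = (await Promise.all(premium.map(fetchContactList))).flat()
  follows.forEach((pk) => allowed.add(pk))
  // Hop 2: follows of those follows.
  const fof = (await Promise.all(follows.map(fetchContactList))).flat()
  fof.forEach((pk) => allowed.add(pk))
  return allowed
}
```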

If somebody wants to copy the full DB twice every minute, that's fine with me, but then they should pay for it. Currently that would be $5/month? But mainly this is about collecting data and putting the tools in place to design the right incentives for scaling the infrastructure.
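
To put a rough number on it (the database size here is just an assumption): with, say, a 2 GB database, two full copies a minute is 2 GB × 2 × 60 × 24 ≈ 5.8 TB of egress per day, which at typical cloud bandwidth prices costs far more than $5/month.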

No worries; nostream is awesome!

... and so is nostr-rs-relay. All the implementations have a lot of potential for improvement. If yours has fixed the excessive CPU load, let me know.

It's getting better. 0.7.16 has significant performance increases; in my benchmarks, some common astral queries are 7x faster.

🤙

Does nostream (TypeScript) have a lower CPU load than nostr-rs-relay? That's surprising.

Love your vision

Sorry about this. My next todo is to use EOSE to be more efficient; I need to upgrade to the latest nostr-tools.
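
Roughly the idea, sketched at the raw NIP-01/NIP-15 protocol level rather than through nostr-tools (the `since` bookkeeping is simplified):

```typescript
// Ask only for events newer than what we already have, and stop once
// the relay signals EOSE instead of re-requesting the same range.
const ws = new WebSocket('wss://nostr-pub.wellorder.net')
const subId = 'recent'
const lastSeen = 0 // hypothetical: newest created_at we already stored

ws.onopen = () => {
  ws.send(JSON.stringify(['REQ', subId, { kinds: [1], since: lastSeen, limit: 500 }]))
}

ws.onmessage = (msg) => {
  const [type, id, payload] = JSON.parse(String(msg.data))
  if (type === 'EVENT' && id === subId) {
    // ... store `payload`, update lastSeen from payload.created_at ...
  } else if (type === 'EOSE' && id === subId) {
    // Stored history is done; close instead of asking again.
    ws.send(JSON.stringify(['CLOSE', subId]))
  }
}
```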

That's incredible.

Unfortunately it's unsustainable financially. I'll likely have to put some bandwidth caps in place to keep it under control.

😎 How many active users? Do you fulfill limit-less subscriptions fully (i.e. all of history)?

I’ve been thinking relays should be able to redirect clients to download older (and compressed) event data for “firehose” subscriptions (i.e. [.., {}]) from a different URL/endpoint, which could then be CDN-cached, etc.
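
A minimal sketch of what that redirect could look like relay-side (the archive endpoint is made up, and a real design would probably want a dedicated NIP rather than overloading NOTICE):

```typescript
// When a client opens a limit-less "firehose" subscription (an empty
// filter), point it at a hypothetical CDN-cacheable archive instead of
// replaying all history from the DB.
import WebSocket from 'ws' // assumes the `ws` package

function handleReq(sock: WebSocket, subId: string, filters: object[]) {
  const isFirehose = filters.some((f) => Object.keys(f).length === 0)
  if (isFirehose) {
    sock.send(JSON.stringify([
      'NOTICE',
      'historical events: https://example.com/archive/ (hypothetical endpoint)',
    ]))
    sock.send(JSON.stringify(['EOSE', subId])) // nothing streamed from the DB
    return
  }
  // ... normal query path ...
}
```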

I have a hard cap at 900, which I bump against. And yes, you can get the entire event stream with a subscription; there are no limits other than what the client requests.

Do you see a lot of limit-less firehose queries? I’ve been thinking about auto-limiting users/clients that request the same big query repeatedly.

Lots of queries for 5000 events (with a limit), not too many that return 10k+. But lots of repetitive ones that just eat bandwidth. I have some simple protections, but I could use more.
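
One such protection could be fingerprinting filters and throttling repeats; a minimal sketch (thresholds are arbitrary placeholders):

```typescript
// Fingerprint each REQ's filters and throttle clients that replay the
// same query within a time window. Note JSON.stringify is key-order
// sensitive; a real implementation would canonicalize the filters first.
import { createHash } from 'node:crypto'

const WINDOW_MS = 60_000
const MAX_REPEATS = 3
const recent = new Map<string, number[]>() // fingerprint -> timestamps

function shouldThrottle(clientId: string, filters: object[]): boolean {
  const digest = createHash('sha256')
    .update(JSON.stringify(filters))
    .digest('hex')
  const key = `${clientId}:${digest}`
  const now = Date.now()
  const hits = (recent.get(key) ?? []).filter((t) => now - t < WINDOW_MS)
  hits.push(now)
  recent.set(key, hits)
  return hits.length > MAX_REPEATS
}
```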

Consider implementing this, please! When the server hits its limits, I'd love to keep "tier 1" users unlimited, even at the expense of limiting random accounts.

#[5]

Agree, it's not. I suspect that we'll have to start streaming sats to our relay operators. #[1] and I have been discussing this for NostrPlebs.com, trying to work out a viable model.

And I think this is without any Damus connections…