Relays Help!

Nostr noob and devops-in-training needs advice on server architecture and hosting infrastructure for a relay cluster. I have lots of questions and a few sats for an experienced relay host only.

- will have a paid “main” relay with temporary storage for all verified users @ vote.gold, available to any Nostr user for a one-time registration fee.

- will spin up private relays for teams at subdomain.vote.gold, with archival storage, a one-time fee, and a monthly storage fee.

- will have a dashboard UI to get verified @ vote.gold usernames and to register subdomains, configure & monitor relays, and top up sats.

What would the hosting architecture look like? Can I host multiple relays on a single VPS? How do containers and Kubernetes fit into all this? What am I looking at in monthly cost per instance? How do I know what compute resources I will need? How do I start, and how do I scale? Pro tips?

#relays

#hosting

#devops

#help


Discussion

Simply install a strfry relay on a 4 GB RAM / 50 GB disk VPS and implement a simple whitelist for write access. You can connect a few of these VPSes or VMs to each other through strfry's router function.
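For example, the whitelist could be a small write-policy plugin. This is only a sketch, assuming the plugin interface described in strfry's plugins doc (one JSON request per line on stdin, one JSON response per line on stdout) and a placeholder pubkey; the script would be wired up via the write-policy plugin setting in strfry.conf (`relay.writePolicy.plugin` in the versions I have seen).

```python
#!/usr/bin/env python3
# Sketch of a strfry write-policy whitelist plugin (not a verified, production config).
# strfry writes one JSON request per line to stdin and reads one JSON response
# per line from stdout; field names follow the plugins doc and should be
# checked against the strfry version you actually deploy.
import json
import sys

# Placeholder whitelist of allowed author pubkeys (hex).
WHITELIST = {
    "e9038e10916d910869db66f3c9a1f41535967308b47ce3136c98f1a6a22a6150",
}

for line in sys.stdin:
    req = json.loads(line)
    event = req["event"]
    allowed = event["pubkey"] in WHITELIST
    res = {
        "id": event["id"],                        # echo the event id back
        "action": "accept" if allowed else "reject",
        "msg": "" if allowed else "blocked: pubkey not on the whitelist",
    }
    print(json.dumps(res), flush=True)            # flush so strfry sees the verdict immediately
```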

Thanks. How would this scale, as new relays are requested?

Harmless for a medium-sized community. A full public relay can quickly get out of hand: the database can grow to 50 GB or more, with RAM requirements on the order of the database size. If RAM is smaller than the database, it becomes uncomfortable. Scale it with nginx, several VPSes, and the routing function. As mentioned, no problem for a medium-sized community, at low cost.
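To make "scale it with nginx and several VPSes" concrete, here is a minimal sketch of nginx load-balancing one relay hostname across two strfry backends. The hostname, backend IPs, ports, and certificate paths are placeholders, not anything from this thread.

```nginx
# Sketch: one public relay hostname spread over several strfry VPSes.
upstream strfry_pool {
    least_conn;                      # send new clients to the least-busy backend
    server 10.0.0.11:7777;           # placeholder backend VPS #1
    server 10.0.0.12:7777;           # placeholder backend VPS #2
}

server {
    listen 443 ssl;
    server_name relay.vote.gold;     # placeholder hostname
    ssl_certificate     /etc/letsencrypt/live/relay.vote.gold/fullchain.pem;  # placeholder path
    ssl_certificate_key /etc/letsencrypt/live/relay.vote.gold/privkey.pem;    # placeholder path

    location / {
        proxy_pass http://strfry_pool;
        proxy_http_version 1.1;                      # required for WebSocket upgrades
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
```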

Thanks. When you say low cost… what kind of traffic capacity are we talking about per 4 GB VPS? Can I put multiple relays on one VPS? Just containers on the OS? Assuming one IP per VPS? How do I route subdomains to each container?

All conceivable combinations {fee/foo.sub.domain} and several IPs are possible. Handle the data streams and organisation with [strfry sync and the up/down router] and [nginx upstream / least_conn]. The first starting point is

https://github.com/hoytech/strfry/tree/master#readme

or

https://github.com/hoytech/strfry/tree/master/docs
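As a rough illustration of the router function, a router config might look something like the sketch below. The exact key names, schema, and invocation are in the docs linked above, so treat this as an approximation rather than a verified config; the relay URL is a placeholder.

```nginx
# strfry-router.config (sketch; check docs/router.md for the exact schema)
connectionTimeout = 20

streams {
    # Mirror events in both directions with a peer relay
    peer1 {
        dir = "both"                   # "down", "up", or "both"
        urls = [
            "wss://relay.example.com/"
        ]
    }
}
```

Something like `strfry router strfry-router.config` would then keep the configured instances in sync.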

The limiting or critical factor is always the RAM. Each subdomain or IP can of course have its own database, but you hit the limit faster this way, because read/write access to the SSD becomes too heavy once the DB size exceeds the RAM. A good starting point here is a VPS with 8 GB RAM.

PS: Traffic is never a problem, because we are only talking about small JSON/text objects here, maybe 30-50 GB/month. But as I said, with a well-running public relay it quickly becomes more.

Don't forget long-term availability. With a limited community, the limit doesn't come so quickly, but people still want to be able to access their posts after, for example, 8 months... An open public relay hits this limit early on; with a smaller community, the problem of database size only arises after a longer period of time.

Thank you. I'm starting to understand.

So… if I want to run multiple private community relays (5-50 users each) but I'll start with just one private (possibly 10x larger) relay, how many could I run on a single VPS? An 8 GB limit on DBs seems tiny… would that apply cumulatively to ALL the DBs for the relays on that machine? If I wrap each relay in a container… I'm still foggy on how the DNS pointing works within the VPS. Thanks.

You can start with 8 GB on one machine, with the databases in different db/dir locations. Most providers let you upgrade the VPS to more RAM or disk storage later.

So, for example: [foo.domain.com] through nginx{} to strfry #1 on [localhost:5555], [fee.domain.com] to strfry #2 on [localhost:6666], and a strfry router without its own database on [localhost:7777]... on the same dir location and/or another, and so on. Use nginx's proxy/upstream/load-balancing functions, all as you like.
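In nginx terms, that sketch corresponds roughly to one `server` block per subdomain, each proxying to its own local strfry port. DNS just points each subdomain (or a wildcard record) at the VPS's single IP, and nginx dispatches by `server_name`. Ports are taken from the example above; certificate directives are omitted and would need to be added per hostname.

```nginx
# Sketch: name-based routing of two subdomains to two local strfry processes.
server {
    listen 443 ssl;
    server_name foo.domain.com;
    # ssl_certificate / ssl_certificate_key for this hostname go here

    location / {
        proxy_pass http://127.0.0.1:5555;         # strfry #1
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}

server {
    listen 443 ssl;
    server_name fee.domain.com;
    # ssl_certificate / ssl_certificate_key for this hostname go here

    location / {
        proxy_pass http://127.0.0.1:6666;         # strfry #2
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```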

u: would that apply cumulatively to ALL the DBs for the relays on that machine?

Yes!

As far as the UI is concerned, basic authentication is certainly enough at the beginning (maybe handle everything with nginx as well). Add a little more security with fail2ban or IP restrictions. A nice interface around it can come later, or you can embed it in an existing one. Gradually expand, depending on demand. This is the most cost-efficient variant.

See also -->

https://www.server-world.info/en/note?os=Debian_12&p=nginx&f=11

and

https://www.server-world.info/en/note?os=Debian_12&p=nginx&f=5
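Along the lines of the pages linked above, the basic-auth plus IP-restriction idea could look roughly like this in nginx. The location path, allowed IP, and backend port are placeholders.

```nginx
# Sketch: protect a hypothetical dashboard with HTTP basic auth and an IP allowlist.
location /dashboard/ {
    auth_basic           "Relay admin";
    auth_basic_user_file /etc/nginx/.htpasswd;   # create with: htpasswd -c /etc/nginx/.htpasswd admin
    allow 203.0.113.10;                          # your own IP (placeholder)
    deny  all;
    proxy_pass http://127.0.0.1:8080;            # placeholder dashboard backend
}
```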

Will pay extra for full sentences in plain English. Request is still open. TY

Still open for more expertise. All tips are appreciated and (moderately) compensated.

I've been working on an open-source project you may be interested in, especially if you can modify it to suit your needs (Next.js). It handles the deployments and configs for haproxy + strfry. It doesn't take many resources; you can run it on a single VPS for quite a while before expanding to a cluster.

I have quite a bit of homework to do on packaging it and publishing some container builds, but I need to work on that stuff anyway, so if you're interested, hmu.

relay.tools

Could you please create a post explaining how you resolved all your inquiries?

I could not find any resolution regarding this, except that you made progress in general.

nostr:npub1manlnflyzyjhgh970t8mmngrdytcp3jrmaa66u846ggg7t20cgqqvyn9tn

Thanks for your interest. The conversation you see here is mostly what I got from that post.

@cloudfodder and I did share a few DMs regarding relay.tools, however. I may white-label this codebase, or even pay @cloudfodder to host a clone for us. I think that's my resolution so far.

Any other tips?