#orly #devstr #progressreport

so, i'm working on a sync/replication feature for ORLY now. because it already has an HTTP endpoint, pushing events to other replicas is really simple. i'm not going to fuss over building a complicated sync system beyond adding a last-synced date.

that date defaults to zero, so i just add some handling to the http events API method: if it sees a designated peer, it streams the whole database at the replica starting from a "since" date of zero, so it automatically syncs the entire thing at the beginning. simple.
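here's a minimal sketch of how i picture that push loop in Go. the Peer and Event types, QueryEventsSince, and the /events path are all placeholders for illustration, not ORLY's actual API:

```go
// sketch of the replication push loop. all names here are assumptions.
package main

import (
	"bytes"
	"encoding/json"
	"log"
	"net/http"
	"time"
)

// Peer is one replica we push events to.
type Peer struct {
	URL        string // e.g. "https://replica2.example.com/events"
	LastSynced int64  // unix seconds; zero means "send everything"
}

// Event is a minimal stand-in for a nostr event.
type Event struct {
	ID        string `json:"id"`
	CreatedAt int64  `json:"created_at"`
	Content   string `json:"content"`
}

// QueryEventsSince is a placeholder for the local database query.
func QueryEventsSince(since int64) []Event {
	return nil // real implementation reads from the event store
}

// pushToPeer sends every event newer than the peer's last-synced date,
// then advances the marker. on first run LastSynced is zero, so the
// whole database gets replayed to the replica.
func pushToPeer(p *Peer) error {
	for _, ev := range QueryEventsSince(p.LastSynced) {
		body, err := json.Marshal(ev)
		if err != nil {
			return err
		}
		resp, err := http.Post(p.URL, "application/json", bytes.NewReader(body))
		if err != nil {
			return err
		}
		resp.Body.Close()
		if ev.CreatedAt > p.LastSynced {
			p.LastSynced = ev.CreatedAt
		}
	}
	return nil
}

func main() {
	peer := &Peer{URL: "https://replica2.example.com/events"}
	for {
		if err := pushToPeer(peer); err != nil {
			log.Println("sync error:", err)
		}
		time.Sleep(time.Second)
	}
}
```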

with this in place, plus a DNS configuration that points at all three reverse proxy front ends, one on each VPS, you get two or more highly synchronised, redundant relays that split up the work of serving data, and whenever new events come into any of them, all replicas have them within fractions of a second.
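the fan-out side could look something like this. again just a sketch: the /events paths and the Event shape are my assumptions, and a real version would retry failures instead of only logging them:

```go
package main

import (
	"bytes"
	"encoding/json"
	"log"
	"net/http"
	"sync"
)

// minimal event shape for the sketch; real nostr events have more fields.
type Event struct {
	ID        string `json:"id"`
	CreatedAt int64  `json:"created_at"`
	Content   string `json:"content"`
}

// fanOut POSTs a freshly received event to every peer endpoint in
// parallel, so all replicas have it within fractions of a second.
func fanOut(peerURLs []string, ev Event) {
	body, err := json.Marshal(ev)
	if err != nil {
		log.Println("marshal:", err)
		return
	}
	var wg sync.WaitGroup
	for _, u := range peerURLs {
		wg.Add(1)
		go func(u string) {
			defer wg.Done()
			resp, err := http.Post(u, "application/json", bytes.NewReader(body))
			if err != nil {
				log.Println("fan-out to", u, "failed:", err)
				return
			}
			resp.Body.Close()
		}(u)
	}
	wg.Wait()
}

func main() {
	peers := []string{
		"https://replica2.example.com/events",
		"https://replica3.example.com/events",
	}
	fanOut(peers, Event{ID: "abc", CreatedAt: 1700000000, Content: "hello"})
}
```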

my goal here is reliability. i don't think anyone else building small-scale relays has built a redundancy scheme like this, and it's really not expensive to run a few: one DNS registration, three ~$10/month VPSes, each with a reverse proxy set up identically except pointing at the local replica, and each relay replica configured with the addresses and pubkeys of the others and its own nsec for auth.
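for the per-replica configuration, something shaped like this is what i mean. the field names are my own illustration, not ORLY's actual config format:

```go
// hypothetical shape for each replica's config.
package main

import "fmt"

// PeerConfig identifies one remote replica.
type PeerConfig struct {
	Address string // HTTP endpoint of the peer, behind its reverse proxy
	Pubkey  string // peer's public key, used to authenticate its pushes
}

// RelayConfig is what each of the three replicas would carry.
type RelayConfig struct {
	Nsec  string       // this replica's own secret key, for auth to peers
	Peers []PeerConfig // every other replica in the cluster
}

func main() {
	cfg := RelayConfig{
		Nsec: "nsec1...", // placeholder, never commit a real key
		Peers: []PeerConfig{
			{Address: "https://relay2.example.com", Pubkey: "npub1..."},
			{Address: "https://relay3.example.com", Pubkey: "npub1..."},
		},
	}
	fmt.Printf("%+v\n", cfg)
}
```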

it's a proof of concept, and it will be enough for my own second-degree follow graph to run for 6 months before i need to think about upgrading storage. but the idea is that this becomes a viable way to build reliability into a business deployment of relays; it will scale up quite a bit, to the point where it could be like 5x $100/month hetzners with 100+GB of storage each and be highly available.

a highly available relay cluster would be an essential part of any serious business deployment. auth would be needed, as it always is for any secure web infra deployment (except TEA lol), and with this in place i'm pretty sure it would scale up to be capable of serving literally half of nostr's current load on 5 fast, distributed servers. i know that some of the big relay providers on nostr have even gruntier systems for this, but i think they are overthinking it. ORLY is simple and extremely fast, and should be fine as it is, without any special handling to keep resource usage under control, for a userbase of maybe up to 50k. by the time something like that exists, it will be extended with more management tooling to maintain second-level storage and archival relays.

i already wrote all the second-level stuff previously and tested it on a shitcoin back end. i also wrote a garbage collector that could prune several gigabytes out of an 18GB store in about 10 seconds, concurrently while handling requests, and the database engine wasn't even as optimized as my current version is.
