#orly #devstr #progressreport
so, i got it done... again it was the damn configurations, but in this case i had also forgotten to put a label on an outer for loop, which was keeping it from actually working
the replication path is now this:
try the replicas one by one
if success, done
if fail, try another
all of the replicas do the same thing, except they skip any relay whose key is already listed in the X-Pubkeys header, and voila: 3 replicas, 3 messages, 100% propagation.
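the X-Pubkeys dedup could look something like the following sketch. everything here is assumed for illustration: the header being a comma-separated key list, and the `peer`, `alreadySeen`, and `forwardTargets` names; ORLY's real wire format and types may differ:

```go
package main

import (
	"fmt"
	"strings"
)

// peer pairs a relay address with its identity key (hypothetical type).
type peer struct {
	addr   string
	pubkey string
}

// alreadySeen reports whether pubkey appears in the X-Pubkeys header,
// assumed here to be a comma-separated list of keys.
func alreadySeen(header, pubkey string) bool {
	for _, k := range strings.Split(header, ",") {
		if strings.TrimSpace(k) == pubkey {
			return true
		}
	}
	return false
}

// forwardTargets drops peers already listed in the header and extends
// the header with our own key, so downstream replicas skip us in turn.
func forwardTargets(header, selfKey string, peers []peer) ([]string, string) {
	var targets []string
	for _, p := range peers {
		if !alreadySeen(header, p.pubkey) {
			targets = append(targets, p.addr)
		}
	}
	return targets, header + "," + selfKey
}

func main() {
	peers := []peer{
		{"wss://r2.example", "key2"},
		{"wss://r3.example", "key3"},
	}
	// event arrived tagged with the originator's key (key3),
	// so only r2 still needs it
	targets, header := forwardTargets("key3", "key1", peers)
	fmt.Println(targets, header) // [wss://r2.example] key3,key1
}
```

appending our own key before forwarding is what stops the echo: each hop sees the full list of relays that already hold the event.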
if for whatever reason one replica can't connect to a peer but the others can, the event still gets through.
this means propagation delay stays O(1), a single hop regardless of cluster size, while the message count only grows linearly with the number of replicas. much better than the previous uncivilised scheme.
so now ORLY can be set up as either a cluster or a replicator. two or more relays can automatically push their new events to their peers, and any subs on those peers instantly get the events too. if any of the relays is ded, the message won't reach it, but no user would be on it either.
in theory, the main reason for a ded replica would be connectivity. of course it could also be a bug, but i'll not worry about that for now. still more tasks to complete on this one, mainly the reverse proxy and the DNS configuration, and then writing a configuration system that generates the necessary configurations for a cluster deployment, and voila: the world will have high availability ORLY relays that behave almost like a single big relay on a faster connection.
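to make the idea of a generated cluster config concrete, it might look something like the fragment below. this is purely a hypothetical sketch of what such a tool could emit; none of these field names exist in ORLY yet:

```yaml
# hypothetical output of the planned config generator;
# field names are illustrative, not ORLY's actual schema
cluster:
  self:
    address: wss://r1.example
    pubkey: <this replica's key>
  peers:
    - address: wss://r2.example
      pubkey: <r2's key>
    - address: wss://r3.example
      pubkey: <r3's key>
```

each operator would run the tool once, then swap addresses and keys with the other members.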
this will be great for censorship resistance: it would be easy for multiple individuals across jurisdictions to use the config tool to set up their own relay identity and share their address and key with the other members, so no single person can be a target for a total takedown.
and yes, it would work equally well with relays at distinct addresses as with round robin DNS. the difference is that with different geographical locations, users can pick the replica local to them while users elsewhere use theirs... optimising local latency while still propagating everywhere at not much slower speed.
this will be very useful for business deployments.