The whole flow of collecting IDs, comparing them, and then downloading only the events that are missing: that's exactly what negentropy does, and that's why I think it's neat to have it built into the protocol.
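
As a rough sketch of that idea (this is *not* the actual negentropy wire protocol, which uses range-based fingerprints rather than shipping full ID lists; `fetchIds` and `fetchEvents` are hypothetical helpers standing in for your relay transport):

```typescript
type EventId = string;

interface Relay {
  fetchIds(): Promise<Set<EventId>>;          // hypothetical: list event IDs held by the relay
  fetchEvents(ids: EventId[]): Promise<void>; // hypothetical: download events by ID
}

// Compare ID sets first, then download only what we don't already have.
async function syncMissing(local: Set<EventId>, remote: Relay): Promise<void> {
  const remoteIds = await remote.fetchIds();
  const missing = [...remoteIds].filter((id) => !local.has(id));
  if (missing.length > 0) {
    await remote.fetchEvents(missing);
  }
}
```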

As for uptime and redundancy, that always comes at a minimum of double the cost. Obviously compacting a 1 TB database will take a very long time, possibly on the order of days, but you can run a replica and still do a zero-downtime failover once the compaction is complete, as long as you have enough disk space.
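
A minimal sketch of that replica-then-failover flow, with an entirely hypothetical `RelayInstance` interface (no real relay exposes these exact methods):

```typescript
interface RelayInstance {
  startReplicaOf(primary: RelayInstance): Promise<void>;
  compact(): Promise<void>;      // may run for days on a 1 TB database
  isCaughtUp(): Promise<boolean>;
}

async function compactWithFailover(
  live: RelayInstance,
  replica: RelayInstance,
  promote: (r: RelayInstance) => Promise<void>, // e.g. flip a load balancer
): Promise<void> {
  await replica.startReplicaOf(live); // this is where the ~2x disk cost comes from
  await replica.compact();            // live relay keeps serving traffic meanwhile
  while (!(await replica.isCaughtUp())) {
    // wait for the replica to absorb writes that landed during compaction
    await new Promise((resolve) => setTimeout(resolve, 1_000));
  }
  await promote(replica);             // switch traffic over: zero downtime
}
```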

I've been spec'ing out some server tiers that could handle it while keeping cost in mind. I think keeping server costs as low as possible is really important for nostr businesses.

I also like that clients have a distributed mindset here; it should help with uptime by decreasing the odds of both relays experiencing unexpected downtime at the same time.
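
A quick back-of-the-envelope on that, assuming independent failures and an assumed 99% uptime per relay:

```typescript
// With independent failures, the chance that *both* relays are down
// at once is the product of their individual downtimes.
const relayUptime = 0.99;                // assumption: 99% uptime per relay
const bothDown = (1 - relayUptime) ** 2; // 0.01 * 0.01 = 0.0001
const combinedUptime = 1 - bothDown;     // 0.9999
console.log(`combined uptime: ${(combinedUptime * 100).toFixed(2)}%`); // 99.99%
```

Two nines per relay gets you four nines combined, which is why having clients read from even two relays helps so much.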
