“Dear customers,

from Saturday, July 6th to Sunday, July 7th, we will be optimizing our online services for you. From approximately 2:00 a.m. on Saturday until approximately 5:00 p.m. on Sunday, you will not be able to log in or carry out any transactions on the web, in the app or via HBCI. After that, our systems will be available to you again as usual.”

👇🏻

In traditional banking, your funds reside on a centralized ledger, which can become inaccessible for various reasons. It becomes even more concerning when a government controls a single, centralized Central Bank Digital Currency (#CBDC) ledger: a honeypot of unprecedented amounts of citizens' personal data. Governments change over time, and this technology could be misused by totalitarian regimes in unprecedented ways, especially in a cashless society. In contrast, #Bitcoin offers reliability. On the Bitcoin network, 8 billion people can transact with digital cash 24/7/365 without needing permission from any central authority. Take a moment to consider this before forming your own opinion.

#StudyBitcoin 🧡


Discussion

I can't understand why we still have systems running that need a maintenance shutdown. What's wrong with those developers? 😑

It's not a matter of good or bad developers. A full upgrade requires stopping what's running. It probably takes a day because there's a lot to do in the meantime, like taking snapshots and backups of the system, upgrading, and then testing the upgraded machine before bringing it back online.

As long as you have servers in place, you'll need shutdowns. "Serverless" is bullshit; it's simply someone else's server.

I agree that bitcoin solves that specific thing, but custodial services on top of bitcoin that still use servers and databases still have to do shutdown maintenance. Nostr relays run on servers... upgrading the Postgres DB will require stopping it, otherwise you could face corruption issues.

C'mon

It's got nothing to do with developers and everything to do with the people administering the systems.

There is little to no excuse for designing a system in which you cannot spin up the upgraded version in parallel, make sure everything is working, and cut over to it.

This is known as a blue-green deployment.
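Roughly, the pattern looks like this. The sketch below is a toy, self-contained illustration, not anyone's real setup: the ports, the "blue"/"green" labels and the in-process "router" dict are invented for the demo. In production the cutover is a load-balancer or reverse-proxy config flip, but the sequence is the same: bring up the new version alongside the old one, smoke-test it, then switch traffic atomically.

```python
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Two versions of the service: "blue" is the one currently live,
# "green" is the upgraded one we want to switch to.
backends = {"blue": 8081, "green": 8082}
active = {"port": backends["blue"]}  # what the "router" currently points at


def serve(label, port):
    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.end_headers()
            self.wfile.write(f"served by {label}\n".encode())

        def log_message(self, *args):  # keep the demo output quiet
            pass

    HTTPServer(("127.0.0.1", port), Handler).serve_forever()


# 1. Blue is already running; bring up the upgraded green instance in parallel.
for label, port in backends.items():
    threading.Thread(target=serve, args=(label, port), daemon=True).start()
time.sleep(0.5)

# 2. Smoke-test green before it receives any real traffic.
assert b"green" in urlopen(f"http://127.0.0.1:{backends['green']}/").read()

# 3. Cut over: a single atomic config change, no shutdown of the live service.
active["port"] = backends["green"]
print(urlopen(f"http://127.0.0.1:{active['port']}/").read().decode().strip())
```

The old "blue" instance stays up until you're confident in "green", so rolling back is just flipping the pointer back.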

A full upgrade, other than a kernel-level upgrade, should never require a full shutdown of more than a few minutes. And if proper redundancy is in place, that would be a series of rolling restarts, in which case the end user would never even notice.

Neither snapshots nor backups should cause a full system outage. One server at a time, maybe. But end users should never feel the effect of that.
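For the "one server at a time" point, here's what a rolling-upgrade loop looks like as a skeleton. Everything environment-specific is made up for the sketch: the node names, the /health endpoint and the drain/backup/upgrade/reattach helpers are placeholders that would really call your load balancer's API, your backup tooling and your deploy tooling.

```python
import time
from urllib.request import urlopen

# Hypothetical node names -- swap in whatever your environment actually uses.
NODES = ["app1.internal", "app2.internal", "app3.internal"]


def drain(node):      # stub: take the node out of the LB pool
    print(f"draining {node}")


def snapshot(node):   # stub: snapshot/backup this node only
    print(f"backing up {node}")


def upgrade(node):    # stub: roll the new version onto this node
    print(f"upgrading {node}")


def reattach(node):   # stub: put the node back into the LB pool
    print(f"re-adding {node}")


def healthy(node, timeout=60):
    """Poll the node's (hypothetical) health endpoint until it answers or we give up."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            if urlopen(f"http://{node}/health", timeout=2).getcode() == 200:
                return True
        except OSError:
            time.sleep(2)
    return False


# One node at a time: the rest of the pool keeps serving traffic, so the
# end user never sees an outage -- and the backup happens per node, too.
for node in NODES:
    drain(node)
    snapshot(node)
    upgrade(node)
    if not healthy(node):
        raise SystemExit(f"{node} failed its health check, aborting rollout")
    reattach(node)
```

If any node fails its health check, the rollout stops with the remaining nodes still on the old, working version.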

It doesn't matter whether the service is legacy or not: anyone taking people's money for a service should have implemented proper failover and redundancy. We live in a time when there are no valid excuses for this behavior.

Fixing this has nothing to do with any freedom tech, only with the integrity of the companies and individuals running the systems.

No, an update doesn't require a stop at any time if done correctly!

All you have to do is administer your system properly and write code with a well-defined architecture.