There is no consensus on what blocksize is “optimal” except for the one we already have, the only one that doesn’t need a fork. The balance between the ability to run a node and the ability to recover the network if disaster strikes is *extremely* sensitive to this number. Just 50MB would shut out the overwhelming majority of the people who run, or ever could run, a node. It means roughly 30TB per decade, and that’s before considering the enormous computational cost of validation and a horrific UTXO set. And yet it is essentially meaningless for the real scaling problem. Running a node would be as bad as, or worse than, building an AI computer today, and you wouldn’t really be able to use it for anything else. Practically no one is going to run that for fun. The node count is dismal today; we had better be discussing how to make that WAY better before even mentioning the blocksize.
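
For a rough sense of where that ~30TB figure comes from, here’s a back-of-the-envelope sketch (it assumes every block is full and one block roughly every 10 minutes, and it ignores the UTXO set, indexes, and bandwidth entirely):

```python
# Back-of-the-envelope chain growth for a given block size.
# Assumes every block is full and one block every ~10 minutes (~144/day);
# ignores the UTXO set, indexes, and any pruning.
BLOCKS_PER_DAY = 24 * 60 / 10      # ~144
DAYS_PER_YEAR = 365.25

def chain_growth_tb(block_size_mb: float, years: float = 10) -> float:
    """Approximate on-disk growth in terabytes over `years`."""
    total_mb = block_size_mb * BLOCKS_PER_DAY * DAYS_PER_YEAR * years
    return total_mb / 1_000_000    # MB -> TB (decimal units)

for size_mb in (1, 4, 8, 50):
    print(f"{size_mb:>3} MB blocks -> ~{chain_growth_tb(size_mb):.1f} TB per decade")
# 50 MB blocks come out to ~26 TB per decade, i.e. the "~30TB" ballpark above.
```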

There is simply no answer to the scaling problem at the base layer except to, *as carefully as possible,* make incremental improvements, build and test each new functionality extensively, run it in the wild for years, fight over the next piece of the puzzle, find the pain points and stresses on the last tool we added, argue about hundreds of proposals, then eventually find rough consensus on the least risky and most obvious next single feature, then implement it - rinse and repeat indefinitely.

The more I think about the difficulty of this problem and the incredible risks of making changes, the less I see any better way to do this. If we want to do it RIGHT, I think scaling is simply going to take 20 years, and we just have to deal with that. But it’s the only real path to something that survives. Trying to rush it and design the end-all, be-all solution in one go is simply a recipe for failure… and this might just be our best chance.

Discussion

What I'm saying is if your contention is that smaller bitcoin = more decentralized and sovereign for users, why not push for 1 KB? 0.25 MB? 1 MB seems so arbitrary.

Ok, I'll assume what you say about 50MB is true, but why are 8MB blocks not acceptable? Even assuming zero consumer tech advances over that time, 30TB is not that much... you can find 10TB for like ~$50 right now without even digging hard. What is the true bottleneck, bandwidth? Then why not base the blocksize on that? A standard deviation or two below the average global internet bandwidth? (Spitballing, but you get the idea.)
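
For what it's worth, here's one way to turn that "standard deviation or two below the mean" idea into an actual number, purely as a sketch; the bandwidth survey figures, the 25% link-utilization headroom, and the function name are all made-up assumptions for illustration:

```python
import statistics

# A sketch of the "mean minus k standard deviations" idea from the comment
# above. The bandwidth samples, the 25% utilization headroom, and the
# function name are hypothetical assumptions, not measured data.
node_bandwidth_mbps = [5, 10, 10, 25, 50, 50, 100, 100, 250, 500]

def blocksize_from_bandwidth(samples_mbps, k_sigma=2.0, utilization=0.25,
                             block_interval_s=600):
    """Block size (MB) a node at (mean - k*sigma) bandwidth could keep up
    with, spending only `utilization` of its link on block relay."""
    mean = statistics.mean(samples_mbps)
    sigma = statistics.stdev(samples_mbps)
    # Skewed distributions can push mean - k*sigma below zero, so clamp to
    # the slowest observed node (a low percentile would arguably be saner).
    floor_mbps = max(mean - k_sigma * sigma, min(samples_mbps))
    bytes_per_block = floor_mbps / 8 * 1_000_000 * utilization * block_interval_s
    return bytes_per_block / 1_000_000  # bytes -> MB

print(f"implied block size: ~{blocksize_from_bandwidth(node_bandwidth_mbps):.0f} MB")
```

Even with conservative inputs, steady-state relay bandwidth tends not to be the binding constraint; initial block download, propagation latency, validation cost, and UTXO growth usually bite first, which is closer to the OP's point.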

It's been a decade and a half, and the blocksize has only functionally increased ~2x while fees skyrocket under even modest demand. That's why other blockchains like Litecoin have absorbed users. BTC may be a SoV for the moment, but people will find substitutes for transacting.

I recently saw someone describe Bitcoin as a concept, and I liked that framing. BTC is the most popular iteration of it atm. But it's like an organism evolving and speciating, finding different niches (e.g. BCH, LTC, etc.). Bitcoin will never fail, but any individual species might.

So, maybe you're right, it is better that BTC carry on this experimentation path as the most conservative version of Bitcoin.

it is what we have. a number was picked.

Bitcoin is IP

Lightning is TCP

other layers akin to HTTP etc will arise

Bitcoin is a tech platform, like Apple, Google, Microsoft, or Meta.

but with huge advantages

I just hope those higher layers always stay in a self-custodial and permissionless direction, or Bitcoin loses its main value props.

there are tradeoffs for sure