There is no consensus on what blocksize is “optimal” except for the one that doesn’t need a fork. The balance, and the trend, of how feasible it is to run a node and to recover the network if disaster strikes is *extremely* sensitive. Just 50MB basically excludes the overwhelming majority of those who would or ever could run a node. It means roughly 30TB per decade, and that’s before considering the enormous computational cost and a ballooning UTXO set. And yet it is essentially meaningless for the real scaling problem. Running a node would be as bad as, or worse than, building an AI computer today, and you wouldn’t really be able to use it for anything else. Practically no one is going to run that for fun. The node count is dismal today. We had better be discussing how to make that WAY better before even mentioning the blocksize.
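For reference, that figure is just simple arithmetic, assuming every 50MB block is completely full and the usual ~144 blocks per day:

```python
# Back-of-the-envelope storage growth at a hypothetical 50 MB block size,
# assuming every block is completely full (decimal units: 1 TB = 1e6 MB).
BLOCK_SIZE_MB = 50
BLOCKS_PER_DAY = 24 * 6          # one block roughly every 10 minutes
DAYS_PER_DECADE = 365 * 10

raw_mb = BLOCK_SIZE_MB * BLOCKS_PER_DAY * DAYS_PER_DECADE
print(f"~{raw_mb / 1e6:.0f} TB of raw block data per decade")
# -> ~26 TB of raw blocks alone, roughly consistent with the ~30TB ballpark
#    once indexes, undo data, and the UTXO set are added on top.
```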
There is simply no answer to the scaling problem at the base layer except to, *as carefully as possible,* make incremental improvements: build and test each new piece of functionality extensively, run it in the wild for years, fight over the next piece of the puzzle, find the pain points and stresses of the last tool we added, argue over hundreds of proposals, eventually find rough consensus on the least risky and most obvious single next feature, then implement it - rinse and repeat indefinitely.
The more I think about the difficulty of this problem and the incredible risks of making changes, the less I see a better way to do this. If we want to do it RIGHT, I think scaling is simply going to take 20 years and we just have to deal with that. But it’s the only real path to something that survives. Trying to rush it and design the be-all, end-all solution in one go is, I think, simply a recipe for failure… and this might just be our best chance.