There's a lot of ground to cover to fully answer this question, but here's a rough treatment of the major topics. I'm probably missing some stuff.
Increasing the block size sacrifices decentralization. Bigger blocks raise the cost of running a node (more bandwidth, more storage, more validation work), which (the thinking goes) will push some operators to stop running nodes and leave the network as a whole more vulnerable to attacks. The bigger problem, IMO, is that it's a slippery slope. More on that later.
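To make the resource cost concrete, here's a rough back-of-envelope sketch of how fast the chain grows under different block sizes. It assumes consistently full blocks and the ~10-minute average block interval; the specific sizes are just illustrative, not any particular proposal.

```python
# Rough chain growth per year if blocks are consistently full.
BLOCKS_PER_YEAR = 365 * 24 * 6  # one block every ~10 minutes -> 52,560

for block_mb in (1, 8, 32):
    growth_gb = block_mb * BLOCKS_PER_YEAR / 1000
    print(f"{block_mb:>2} MB blocks -> ~{growth_gb:,.0f} GB of new chain data per year")
```

At 32 MB blocks a node has to download, verify, and store well over a terabyte of new data every year, and initial sync gets correspondingly slower. That's the kind of creep that prices hobbyists out of running a node.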
Bcashers thought keeping fees low enough for cheap on-chain payments was more important than keeping it easy to run a node. Bitcoin devs and users thought otherwise. Bcash forked off the chain in 2017 and kept increasing the block size in subsequent forks. Today, bcash is more or less dead. Nobody uses it.
Another big issue is that coordinating a hard fork is very difficult. Bitcoin has an implicit guarantee to never hard fork if it can be avoided. (We'll have to hard fork within the next 100 years or so to fix an int overflow bug, sketched below, but there is no rush.) If we hard fork whenever fees get high, that sets a very low bar: the next time fees climb, people will start clamoring for another block size increase, which weakens the network further by pushing more node runners out.

We can see a variation of this dynamic at play on bitcoin twitter today with the folks who want to filter ordinals out of the mempool. Mempool filters are relay policy, not a consensus change, so they wouldn't lead to a chain fork, but it's a similar situation. If we change the bitcoin node software every time someone discovers a novel spam attack, we'll end up playing whack-a-mole forever, and people will simply stop using the public mempool and submit transactions directly to miners instead, which leads to centralization and eventually network capture.
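As an aside on that overflow bug: the example usually cited is the block header timestamp, an unsigned 32-bit count of seconds since the Unix epoch, which wraps around in 2106. A quick sketch of the date math:

```python
# The block header's nTime field is an unsigned 32-bit integer counting
# seconds since the Unix epoch, so it wraps around at 2**32 - 1.
from datetime import datetime, timezone

overflow = datetime.fromtimestamp(2**32 - 1, tz=timezone.utc)
print(overflow)  # 2106-02-07 06:28:15+00:00
```

That's roughly 80 years out, so there's plenty of time to coordinate.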
Bitcoin must be antifragile in order to succeed in the long run. Making permanent changes in the face of temporary problems is the opposite of antifragile. Instead, we should be extremely cautious any time we introduce changes that might impact the carefully balanced incentives that make the whole system work.