You're not wrong, but the things we've seen done with the leaked LLaMA weights (and the Alpaca fine-tune built on top of them) alone are very impressive.
And as LLaMA has shown, advances in efficiency are rapid. Before the LLaMA weights leaked, no one thought they'd be running an LLM on a phone or a Raspberry Pi in 2023, yet here we are.
The same applies to training: obviously we're nowhere near the point where it can be done cheaply on consumer hardware, but it's in the interest of everyone involved (including OpenAI, Meta, Google, etc.) to make those processes as efficient as possible too.
I think there will be a "slowly, then all at once" moment there too, like the LLaMA leak and llama.cpp produced for running weights locally.
In the meantime, even with what we have right now, those Alpaca weights are insanely capable and run on regular consumer hardware, which is massive. It also means it's very easy to fine-tune a model without intentionally programmed bias (you'll obviously still inherit bias from the training data) and without the annoying "as an AI language model..." morality-filter bollocks.
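For anyone curious how little is involved, here's a minimal sketch of local inference, assuming the llama-cpp-python bindings and a 4-bit quantised Alpaca model file (the model path and the instruction text are placeholders; adjust for your setup):

```python
# Minimal local-inference sketch using the llama-cpp-python bindings
# (pip install llama-cpp-python). Assumes you already have a 4-bit
# quantised Alpaca model file on disk; the path below is hypothetical.
from llama_cpp import Llama

llm = Llama(model_path="./models/alpaca-7b-q4_0.bin")

# Alpaca models were instruction-tuned on this prompt template,
# so following it gets noticeably better completions.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain how a carburettor works.\n\n"
    "### Response:\n"
)

output = llm(prompt, max_tokens=256, stop=["### Instruction:"])
print(output["choices"][0]["text"])
```

On a phone or a Pi you'd run llama.cpp directly rather than through the Python bindings, but the idea is the same: one quantised model file, one process, no cloud, no filter layer in between.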
To test it out, I asked LLaMA on my phone how to steal a car, and it gave me a list of ideas.