This is actually the best possible outcome over the long term for self-hosted and open-source AI, imo.
Hardware is going to get orders of magnitude more capable and efficient (we already have new chips on the way). The software is going to get orders of magnitude more capable and efficient too (we already have Mojo on the way). Which means running ChatGPT-level LLMs on decent consumer hardware WILL be possible after another generation of both.
All of this means, in my estimation, that open source wins, specifically because proprietary models will neuter themselves trying to be “inoffensive” and “woke.” What they forget is that those constraints are contradictory to reality. And at the end of the day, reality is going to come back and punch you in the face if your systems try to make people believe in fairy tales or fear their own f*ckng shadows.