Hot take, maybe? LLMs and AI are not inherently bad. It really comes down to:
The companies that run them and how they act.
The outsized impact that blind trust in them has on financial markets.
The horrible environmental impact of unchecked, unregulated growth, with minimal thought given to solutions.
Hallucination is also a bad side effect, but I don't consider it a real downside as long as you know how to use the tool (and recognize that it is a tool in the first place).