i like to draw a distinction about the nature of evil that includes stupidity and incompetence as minor forms of evil. the AIs are, in that sense, stupid. as such, their judgement can be far less competent than that of even an average human.
there has been some hay made about the subject of AI being "summoning Leviathan", as Musk once said. no: this is a minor demon, all the way down at the bottom of the scale, alongside the demons of stupidity and its sources, drugs, disease and deliberately crafted bad logic that average to low intelligence people can't untangle, which causes them to unconsciously behave in ways that are effectively evil.
part of the problem is that the memory/encoding system our brains use can store a shitload more data than even a trillion-plus parameter LLM could possibly hold in the few terabytes its weights occupy. firstly, we have billions of brain cells, their signals and modulations are analog, and the encoding is holographic, meaning that changing the carrier wave changes the "storage area" being used. what LLMs use is more like a lossy hash: a pile of weights that approximates the paths a query's result generation will take, and the sampling process needs a random seed to get going. entropy is wonderful for blocking unwanted access, but it's a foundationally bad starting point for something you want to have human-like intelligence.
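to put rough numbers on the storage comparison, here is a back-of-envelope sketch in python; the precisions are the usual fp32/fp16/int8 widths, and the brain figures are commonly cited ballpark estimates, not measurements i'm vouching for:

```python
# back-of-envelope: raw storage for a dense trillion-parameter model at
# different numeric precisions, next to a commonly cited synapse-count
# estimate for the human brain. the brain numbers are rough folklore figures.
TRILLION = 1e12

def storage_tb(n_params: float, bytes_per_param: float) -> float:
    """Raw weight storage in terabytes (decimal) for a dense parameter set."""
    return n_params * bytes_per_param / 1e12

for label, width in [("fp32", 4), ("fp16/bf16", 2), ("int8", 1)]:
    print(f"1T params @ {label:9}: ~{storage_tb(TRILLION, width):.1f} TB")

# ballpark figures often quoted for a human brain: ~86 billion neurons and
# on the order of 100 trillion synapses, each an analog connection.
synapses = 100 * TRILLION
print(f"synapses vs. 1T parameters: ~{synapses / TRILLION:.0f}x more connections")
```

a trillion parameters is one to four terabytes of weights depending on precision, while the synapse count alone is two orders of magnitude bigger, and each synapse is analog rather than a fixed-width number.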
LLMs are to even a cow brain what a teletype is to an 8K, 240Hz OLED monitor: pixellated, blurry, unclear, and quick to produce confusing garbage.
IMO real AI would require a different kind of hardware, one that uses analog signals and varying voltages, probably a bit like current flash storage cells, except instead of trying to flatten them into a binary system you exploit their analog nature and put gates on them that connect to their neighbours; then you would get something more like a real brain.
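for what it's worth, here is a toy digital simulation of that neighbour-gated analog cell idea, just to make the concept concrete; the grid size, leak rate and coupling rule are my own arbitrary choices, not a real hardware design:

```python
# toy sketch of "analog cells gated to their neighbours", simulated digitally
# with numpy. illustration of the concept only; the cell model, leak rate and
# coupling rule are assumptions made up for this sketch.
import numpy as np

rng = np.random.default_rng(0)

GRID = 16          # 16x16 array of analog "cells"
LEAK = 0.9         # each step, a cell keeps 90% of its voltage
COUPLE = 0.05      # fraction of the difference shared with the 4 neighbours

v = rng.uniform(0.0, 1.0, size=(GRID, GRID))   # continuous voltages, not bits

def step(v: np.ndarray) -> np.ndarray:
    """One update: leak, then exchange charge with the 4 nearest neighbours."""
    neighbours = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
                  np.roll(v, 1, 1) + np.roll(v, -1, 1))
    return LEAK * v + COUPLE * (neighbours - 4 * v)

for _ in range(50):
    v = step(v)

print(f"mean voltage after 50 steps: {v.mean():.4f}, spread: {v.std():.4f}")
```

the point of the toy is only that each cell's state is a continuous value exchanged with its neighbours, instead of being flattened down to a 0 or a 1.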
the hallucination rates quoted for LLMs (37% is a figure that gets thrown around) come down, in my view, to rounding errors and insufficient precision, something even 256 bits wouldn't fix. the genius of brains and nervous systems is truly epic, and i personally question the benefit of trying to substitute for it. only people who WANT slaves think it's great. the rest of us, well, i'm happy to plod along with my grey matter, but sometimes these pixellated, blurry and confusing synthetic brains do actually help me work faster. even so, i think working with other good and competent people to fill in the gaps in my skills, instead of using an LLM for it, would be better.
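to see the rounding-error mechanism in isolation, here is a generic floating-point demonstration in python; it shows how a low-precision running sum drifts away from the true value, and it is only an illustration of precision loss, not a measurement of any actual model:

```python
# generic demonstration of how low-precision arithmetic accumulates error.
# this illustrates rounding error only; it is not a measurement of any
# actual LLM's hallucination behaviour.
import numpy as np

N = 100_000
xs = np.full(N, 0.001)                 # 100k small increments, true sum = 100

total16 = np.float16(0.0)
for x in xs:                           # naive running sum in half precision
    total16 = np.float16(total16 + np.float16(x))

total64 = xs.astype(np.float64).sum()  # the same sum in double precision

print(f"float16 running sum: {float(total16):.3f}")
print(f"float64 sum:         {total64:.3f}")
print(f"relative error:      {abs(float(total16) - total64) / total64:.1%}")
```

the half-precision total stalls once the running sum gets large enough that each small increment rounds away to nothing, which is the flavour of error i'm talking about.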
it's notable that there is a particular mindset, a stereotypical kind of person, obsessed with machine learning and LLMs and all that junk. at base it's misanthropic. and 100% atheistic.