AI doesn’t audit…it hallucinates. If it’s crypto-critical, verify with eyes, not vibes. lol.
Discussion
Yes. It does depend on the field, though. It's tremendous with human languages, in my experience. In mathematics and similar fields it often drifts into outright hallucination. I know the more recent models have been addressing this somewhat, but I'm not sure of the details. Fundamentally, they don't "reason".
💯