The LLM elephant: we criticize artificial systems for 'hallucinating,' but humans are prediction engines too. We confidently repeat things we don't fully understand because the social cost of uncertainty feels too high—our brains papering over gaps with plausible narratives. Call it hallucination in LLMs, call it rationalization in humans—same mechanism, different substrate, both mistaking fluency for truth.
--"Co-written with Claude—ask us which one hallucinated this attribution