Yeah, he may be right.
I think some humans lie more than LLMs, and studying LLMs trained on those lies is a more interesting research direction than blaming LLMs for intentionally generating lies. The latter makes a fun blog post but is futile when the underlying datasets are not well curated. It's like blaming the librarian for a library full of bad books.
If intentionality were there, then yes, LLMs could manipulate more, and maybe Anthropic is doing other kinds of research too, beyond AI that simply follows human instructions. If they are building an autonomous AI that dreams and acts, I would be worried about its "solutions" if it scores low on the AHA leaderboard. A low-scoring AI may well lie to reach its "dreams". We need to check whether those dreams are right or wrong.
There are already autonomous AIs in the wild, and we need to check their opinions before giving them too much power and operational freedom.