Companies that develop LLMs want their models to be helpful by default, assuming that people will simply switch to a competing model otherwise. So you have to actively instruct the model to tell you when you're wrong; once you do, it will treat pointing out your errors as part of being helpful.
When I've told an LLM to point out my errors, it has actually given me decent counterpoints.