The thing is, they only admit to lying because that is the answer you are most likely to accept. They don't know they lied.
For sure. But this gets hairy with definitions. The software may not "know" but it was built by humans who may.
They just output the most likely text. This is why "pick a random number between 1 and 25" always gives the same number. But it has other consequences. I can ask it to "search the web for restaurants in my home town" and it will happily give me the results of various "searches" it did, even though it is not connected to the Internet.
It doesn't know any better; it just knows what a list of search results looks like.
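To make the "same random number every time" thing concrete, here's a toy sketch in Python. All the numbers are made up, and the bias toward "17" is hypothetical, not taken from any real model; the point is just that the model scores every candidate token, and greedy decoding always emits the single highest-scoring one:

```python
import math
import random

# Hypothetical next-token scores: the model rates the tokens "1".."25"
# when asked to "pick a random number between 1 and 25".
logits = {str(n): 0.1 for n in range(1, 26)}
logits["17"] = 3.0  # made-up quirk: one "random-feeling" number dominates

def softmax(scores):
    """Turn raw scores into a probability distribution."""
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(logits)

# Greedy decoding: always take the single most likely token, so the
# "random" number is identical on every run.
greedy = max(probs, key=probs.get)
print("greedy pick:", greedy)  # always "17"

# Temperature-style sampling would vary the answer, but the picks are
# still skewed toward whatever the training data made most likely.
sampled = random.choices(list(probs), weights=list(probs.values()), k=5)
print("sampled picks:", sampled)  # mostly "17", occasionally others
```

Same idea with the fake search results: the model has a high-probability shape for "what a list of search results looks like," and it emits that shape whether or not any search happened.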
I'm skeptical of any claim that these models are entirely misunderstood creations that just do things willy-nilly, with not a single company or developer having some idea that something is fishy. I understand the whole black-box explanation, but I'm just saying I'm skeptical of it. There is truth to it, but it's also a super easy scapegoat for shenanigans.
And at the very least, the tools should tell users explicitly that what they're being told could be completely false. But that doesn't look as good, so it isn't the case for any I've used. My first instinct would be to wonder what the fucking point is. And that IS what people should know: you have to verify sources. Most people won't, and it will have massive consequences. I've already seen people IRL settle debates with these tools as if they were stone-cold fact. That's wild, knowing what I know.