The first time I used AI to answer a question, I asked it something I already knew: "How do you make ..." It gave me a completely wrong answer.
I have yet to get an answer from an AI that doesn't contain some element of bullshit.
Of course, the sources for these tidbits of misinformation are URLs that don't exist. Awesome... not.
It seems people are enamoured with something that always has an answer, even if it's false. The fact that LLMs are programmed to use the same tricks charlatans have used for centuries is telling, too.