Yes. They make more assumptions now. They're over-trained and over-guardrailed, and they refuse to be corrected. I was using one to check my Chinese; it made assumptions and answered wrong, and when I corrected both its answer and its assumptions, it just repeated the same garbage. This kind of thing is happening more now than before.
Discussion
When I first used AI to answer a question, I asked it something I already knew: "How do you make ...", and it gave me a completely wrong answer.
I have yet to have AI give me an answer that doesn't contain some element of bullshit.
Of course, the sources it cites for these tidbits of misinformation are URLs that don't exist. Awesome... not.
It seems people are enamoured with something that always has an answer, even if it's false. The fact that LLMs are programmed to use the same tricks that charlatans have used for centuries is a telling sign too.