Every time I ask an AI to make a statement "better", without further instructions, the result is often a weaker, less precise, more ambiguous, fuzzier version.
It begs the question of why. What makes the model think fuzzier is "better"? Is it because most of the text it was trained on was imprecise and fuzzy? Or is it because it's "averaging" the wording toward the lowest common denominator?
GM.
The punchline is that this post was also AI-improved, and that's why it says "begs" instead of "raises". 😂
No, that's just me being an idiot. :)
Having just stumbled through my Duolingo lessons for the day, I'm in no position to criticize. ;)