Replying to Vitor Pamplona

Every time I ask an AI to make a statement "better" without further instructions, the result is often a weaker, less precise, more ambiguous, fuzzier version.

This raises the question: why? What makes the model think fuzzier is "better"? Is it because most of the text it was trained on was imprecise and fuzzy? Or is it because it is trying to "average" words toward the lowest common denominator?
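One way to make the "averaging" intuition concrete (a toy sketch, not a claim about how any particular model actually works): if a model's next-word distribution is effectively a blend of many confident but conflicting writers, the blend is flatter than any single writer's distribution, because entropy is concave. Flatter means hedged, generic word choices. The distributions below are made up for illustration:

```python
import math

def entropy(p):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

# Two hypothetical writers, each confidently picking a different
# precise word for the same slot (probabilities over 3 candidate words).
writer_a = [0.90, 0.05, 0.05]
writer_b = [0.05, 0.90, 0.05]

# "Averaging" the writers, as a model fit to both corpora might.
mixture = [(a + b) / 2 for a, b in zip(writer_a, writer_b)]

print(f"writer A: {entropy(writer_a):.3f} bits")  # confident, precise
print(f"writer B: {entropy(writer_b):.3f} bits")
print(f"mixture:  {entropy(mixture):.3f} bits")   # flatter: fuzzier output
```

The mixture's entropy exceeds either writer's, so sampling from it lands on vaguer middle-ground words more often, which would read as "fuzzier" prose.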

GM.

d0708145... 10mo ago

Maybe you just make the perfect statements that can't be bettered, Vitor😬😜.

