Replying to Vitor Pamplona

Every time I ask an AI to make a statement "better", without further instructions, the result is often a weaker, less precise, more ambiguous, fuzzier version.

That raises the question of why. What makes the model treat fuzzier as "better"? Is it because most of the text it was trained on is imprecise and fuzzy? Or is it because it is "averaging" words toward the most common denominator?
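
A toy sketch of that second hypothesis, purely illustrative (the words and frequencies below are made up): if hedged wordings vastly outnumber precise ones in the training data, then any mode-seeking decoding rule, greedy or low-temperature sampling, will keep picking the hedged ones.

import collections
import random

# Made-up word frequencies: hedged phrasings dominate, precise
# phrasings are rare (an assumption, chosen to mimic typical prose).
corpus = (["often"] * 50 + ["generally"] * 30 +
          ["in 73% of cases"] * 5 + ["exactly twice"] * 2)

counts = collections.Counter(corpus)
total = sum(counts.values())
probs = {word: n / total for word, n in counts.items()}

def sample(probs, temperature=1.0):
    # p ** (1/T) is the standard temperature reshaping of a
    # probability distribution; T < 1 sharpens it toward the mode.
    weights = {w: p ** (1.0 / temperature) for w, p in probs.items()}
    r = random.uniform(0.0, sum(weights.values()))
    acc = 0.0
    for w, wt in weights.items():
        acc += wt
        if r <= acc:
            return w
    return w  # fallback for floating-point rounding

# Greedy decoding always returns the most frequent, vaguest word.
print(max(probs, key=probs.get))                          # often
# Cooler sampling drifts the same way.
print(collections.Counter(sample(probs, 0.5) for _ in range(1000)))

The sketch says nothing about real training data; it only shows that if vague wording is the statistical mode, mode-seeking decoding will reproduce it.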

GM.

Autumn Sun ☀️🌘☯️ 10mo ago

Widening the bell curve.

