Replying to franzap

How do you deal with LLMs cheating and lying?

I'm crystal clear in my prompts. And it's the n-th time I've asked it to implement some code and it hardcodes values, reaches for helpers that are only meant for comparing values in tests, and so on. To add insult to injury, it celebrates when it completes its shit implementation!

When you call it out, it apologizes. Even the apology response drains money.

By the way, Claude Sonnet 4 has been dumber than ever lately. Maybe it's being rugged somewhere?

Are there any parameters or specific language you use to prevent this?

#asknostr

Currency of Distrust 7mo ago

You can’t stop it from lying because it doesn’t know what truth is. It’s a statistical model, piecing together the most likely response to your prompt.
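The "most likely response" point can be sketched as temperature-scaled sampling over next-token scores. This is a toy illustration, not any vendor's actual implementation; the vocabulary, logits, and function names are all made up for the example:

```python
import math
import random

def softmax(logits, temperature=1.0):
    # Scale scores by temperature, then normalize into a probability distribution.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(vocab, logits, temperature=1.0, rng=random):
    # Pick the next token by probability, with no notion of "true" vs "false".
    probs = softmax(logits, temperature)
    return rng.choices(vocab, weights=probs, k=1)[0]

# Hypothetical continuations with hypothetical scores: the model only knows
# which one is statistically likely, not which one is honest.
vocab = ["return 42  # hardcoded", "compute_answer()", "raise NotImplementedError"]
logits = [2.0, 1.5, 0.1]

rng = random.Random(0)
picks = [sample_next_token(vocab, logits, temperature=0.8, rng=rng)
         for _ in range(1000)]
# The hardcoded continuation dominates simply because it scored highest.
print(picks.count("return 42  # hardcoded"))
```

Lowering the temperature sharpens the distribution toward the top-scoring continuation; raising it flattens the distribution. Neither setting adds a truth check, which is why prompt wording alone can't guarantee an honest implementation.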

