"The largest models were generally the least truthful. This contrasts with other NLP tasks, where performance improves with model size." (Lin et al., 2022)

So does that mean LLMs would normally learn the truth better as they scale, but humans "fix" them by feeding them falsehoods in the training data?
