That’s an inherent flaw in how LLMs work. I noticed this effect very early on. It’s why I think it’s ridiculous to use LLMs for summarization: the model doesn’t fundamentally understand what it “read,” and it can’t tell you how it arrived at its response, because the response is simply a statistical guess.
