It’s amazing when you know the AI’s answer is wrong but it states it like it’s a simple fact. In my experience, if I then tell the model its answer is wrong, it will agree with me that it’s wrong and proceed to give another answer that’s just as wrong as the first. But people who don’t know better will just trust the response.


Discussion

This is why AI is completely useless.

So far, the best use case I’ve found is giving it some text (like an article) and prompting it to summarize it or write “resume bullet points” about it: things where I like the wording it suggests but I know the content well enough to spot inaccuracies. I know it’s improving, but still. The best description I’ve heard of using LLMs is to treat one like a very eager but inexperienced research assistant. It’s more than willing to go find info and do what it thinks you want, but it doesn’t yet know whether what it’s giving you really makes sense (or whether it’s giving your assignments to someone else…).