Please, please do not ever respond to a post with an answer like “ChatGPT says…” or “Claude told me…”. It is very rude.

It is wrong. These tools can’t answer questions; they mash words around, and will make up nonsense. When the machine does it, it’s just gibberish, but by posting it, you’re turning it into a lie, and anyone who posts or repeats it without attribution will turn it into disinformation.

It wastes time. Now everyone has to fact-check you instead of researching the question.

Discussion

If you want, you can ask an LLM and then do your *own* fact checking of its answer, using clues you find in its output. This can be helpful, as sometimes the phrasing you think of for a search won’t be the right magic words, and an LLM can help you find those. What these tools are doing is more like “automatic free association” than “answering questions”. They can be a useful *tool* for answering questions, but the way you use them is critically important.
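
To make that workflow concrete, here's a minimal sketch in Python. The `ask_llm` function is a hypothetical stub standing in for whatever model client you actually use; the point is that the LLM only proposes search phrasings, and the human does the verification:

```python
# Sketch: use an LLM as a phrasing generator, not as an oracle.
# `ask_llm` is a hypothetical stub -- plug in your own model client.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("replace with your own model client")

def suggest_search_queries(question: str, n: int = 5) -> list[str]:
    """Ask the model for candidate search phrasings, one per line."""
    prompt = (
        f"Suggest {n} distinct web-search queries someone could use to "
        f"research this question. One query per line, no commentary.\n\n"
        f"Question: {question}"
    )
    reply = ask_llm(prompt)
    return [line.strip("-* ").strip() for line in reply.splitlines() if line.strip()]

if __name__ == "__main__":
    for query in suggest_search_queries("why does my kernel OOM-kill this process?"):
        # The human runs these searches and reads the primary sources;
        # nothing the model said is treated as an answer.
        print(query)
```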

nostr:npub1pfe56vzppw077dd04ycr8mx72dqdk0m95ccdfu2j9ak3n7m89nrsf9e2dm somehow we've discovered the only response more annoying than "let me google that for you"

nostr:npub1pfe56vzppw077dd04ycr8mx72dqdk0m95ccdfu2j9ak3n7m89nrsf9e2dm

It's amazing how many programmers and colleagues tell me they're using it to write software (even bigger programs), because every bit of code I've gotten out of them, if it isn't just buggy and wrong, is clearly mashed together from Stack Overflow and blog posts.

Which, tbf, these are the kinds of people who would just mash together SO answers themselves. But by getting the LLM to do it, you also get little hallucinated bugs and extras! And you still learn less than nothing!

nostr:npub1pfe56vzppw077dd04ycr8mx72dqdk0m95ccdfu2j9ak3n7m89nrsf9e2dm

The way I think of LLMs is as pattern translation programs. They can see patterns and do a very impressive job with them.

But this doesn't mean they can know things, only that they can change one thing into another.
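
To make that distinction concrete, here's a toy illustration (my sketch, not anything from the thread): a pure pattern translator rewrites the *form* of a sentence identically whether the underlying claim is true or false.

```python
import re

# A toy "pattern translator": it rewrites the form of a claim
# without any access to whether the claim is true.
PATTERN = re.compile(r"^(?P<city>\w+) is the capital of (?P<country>\w+)$")

def translate(sentence: str) -> str:
    m = PATTERN.match(sentence)
    if not m:
        return sentence
    return f"The capital of {m['country']} is {m['city']}."

print(translate("Paris is the capital of France"))   # true in, true out
print(translate("Berlin is the capital of France"))  # false in, false out
```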

nostr:npub1pfe56vzppw077dd04ycr8mx72dqdk0m95ccdfu2j9ak3n7m89nrsf9e2dm I agree that it's rude and bad to do this, but GPT-4 has a high enough hit rate IME that this part seems like a stretch:

> These tools can’t answer questions; they mash words around, and will make up nonsense.

They definitely can answer questions. With RLHF, that is specifically what they're designed/trained to do, and they're pretty good at it in many domains. But posting the answer without checking it is, as you say, either lying or bullshit.
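
For context on the RLHF point: the standard recipe (InstructGPT-style) fine-tunes the model against a learned reward for preferred answers while penalizing drift from the pretrained model, roughly:

$$\max_{\pi}\; \mathbb{E}_{x \sim \mathcal{D},\, y \sim \pi(\cdot\mid x)}\!\left[ r_\phi(x, y) \right] \;-\; \beta\, \mathrm{D}_{\mathrm{KL}}\!\left( \pi(\cdot\mid x) \,\|\, \pi_{\mathrm{ref}}(\cdot\mid x) \right)$$

where $r_\phi$ is a reward model fit to human preference rankings, $\pi_{\mathrm{ref}}$ is the pre-RLHF model, and $\beta$ trades reward against staying close to the reference. Optimizing for "answers humans prefer" is not the same as optimizing for truth, which is consistent with both halves of this thread.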