nostr:npub1pfe56vzppw077dd04ycr8mx72dqdk0m95ccdfu2j9ak3n7m89nrsf9e2dm I agree that it's rude and bad to do this, but GPT-4 has a high enough hit rate IME that this part seems like a stretch:
> These tools can’t answer questions; they mash words around, and will make up nonsense.
They definitely can answer questions. With RLHF, that is specifically what they're designed and trained to do, and they're pretty good at it in many domains. But posting the answer without checking it is, as you say, either lying or bullshit.