If you want, you can ask an LLM and then do your *own* fact checking of its answer, using clues you find in its output. This can be helpful, as sometimes the phrasing you think of for a search won’t be the right magic words, and an LLM can help you find those. What these tools are doing is more like “automatic free association” than “answering questions”. They can be a useful *tool* for answering questions, but the way you use them is critically important.
Discussion
Using an LLM to discover search terms or get inspiration for a research strategy which you can *independently verify* is like using a frying pan to cook a delicious frittata of facts. Posting the answers it gives you as if they were true and useful information is like heating up the frying pan and then sticking your tongue directly on the pan in an attempt to eat it. The LLM output is the heat, not the food. Do not eat it.
nostr:npub1pfe56vzppw077dd04ycr8mx72dqdk0m95ccdfu2j9ak3n7m89nrsf9e2dm I think "automatic free association" is one of the best descriptions of LLMs I've read. Thanks for that.