I would still say no on “can it be considered accurate.” It’s only likely to be confident in an answer that is broadly accepted, which is not the same as accurate.

In other words, it’s only going to give back what it was trained on, so it should just never be thought of as “accurate.”

Your strategy is decent, and it’s what I started with personally, but it feels more like a “VPN” and “multiple search engines” strategy that doesn’t translate as well to LLMs, imo. Still beneficial, but slower.

My recent go-to is to have it explain the “correct” answer, then give me the best possible refutation of it, then do the same for an alternative explanation: steelman it first, then refute it with as much evidence and as many sources as it can. Then, for each source cited, both those supporting the theory and those supporting the opposing or alternative one, I ask it to explain why that source would be reliable, why it might be a bad source, or whether it has a conflict of interest. (I only go this far with politically controversial topics; AI answers on politically charged issues are fucking abysmal.)
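If it helps, here’s a rough sketch of that flow as a sequence of prompts. The `ask()` helper is a stand-in for whatever model or API you actually use (it’s an assumption, not a real client), and the prompt wording is just an example, not a fixed recipe:

```python
# Sketch of the "steelman, then refute, then audit the sources" flow described above.
# `ask` is a placeholder for whichever LLM/API you use -- wire it up yourself.

def ask(prompt: str) -> str:
    """Send one prompt to your LLM of choice and return its reply (stub)."""
    raise NotImplementedError("connect this to your model of choice")

def explore(topic: str) -> dict:
    # 1. The broadly accepted answer, with sources.
    consensus = ask(
        f"Explain the generally accepted answer to: {topic}. Cite your sources."
    )
    # 2. The strongest refutation of that answer.
    refutation = ask(
        f"Here is the accepted answer:\n{consensus}\n"
        "Give the strongest possible refutation of it, with as much evidence "
        "and as many sources as you can."
    )
    # 3. Steelman an alternative explanation, then refute it the same way.
    alternative = ask(
        f"Steelman the best alternative explanation for: {topic}, "
        "then refute it the same way, with sources."
    )
    # 4. Audit every source cited on either side.
    audit = ask(
        "For every source cited below, on either side, explain why it would be "
        "reliable, why it might be a bad source, or whether it has a conflict "
        "of interest:\n"
        f"{consensus}\n{refutation}\n{alternative}"
    )
    return {
        "consensus": consensus,
        "refutation": refutation,
        "alternative": alternative,
        "source_audit": audit,
    }
```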

AI can be super useful for exploring ideas or issues, but it should be seen as a window you look through, not a fact-checker or a source of truth.

If you wouldn’t read a single article on an idea and accept the very first answer, then don’t do it with an LLM either.
