If you use AI by just asking a question and then accepting whatever it says, *especially* if there is any controversy or novelty around the idea, then you may have a massively inaccurate mental picture of what LLMs are and how they work.
You have to ask like 5 times to get real answers if the question is spicy
I have a slightly different strategy, but forcing them to keep digging does help to just break the high-level "easy weights" answer.
https://primal.net/e/nevent1qqs0u7fve6xupnsn6gas32e3dt63xk898wlh4f4p8s745cssl82eacsng6cuv
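(For what it's worth, that "ask repeatedly, keep digging" approach is easy to script. A minimal sketch, assuming the OpenAI Python SDK; the question, follow-up prompt, and model name are all illustrative placeholders, not a fixed recipe:)

```python
# Minimal sketch of the "ask like 5 times, keep digging" approach above.
# Assumes the OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY in the environment; prompts and model are illustrative.
from openai import OpenAI

client = OpenAI()
messages = [{"role": "user", "content": "Is <spicy claim> actually true?"}]  # placeholder

for _ in range(5):  # "ask like 5 times"
    reply = client.chat.completions.create(
        model="gpt-4o",  # any chat model works here
        messages=messages,
    ).choices[0].message.content
    print(reply, "\n" + "-" * 60)
    # Keep the whole conversation so each follow-up pushes past the
    # previous surface-level answer instead of restarting from scratch.
    messages.append({"role": "assistant", "content": reply})
    messages.append({
        "role": "user",
        "content": "Dig deeper: what evidence or arguments cut against that answer?",
    })
```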
They forget that we invented computers to achieve accurate results. And the most important role of the human is to take responsibility, not to write random stuff.
What if you use different AIs, ask from different perspectives, use "deep search" mode, and tell it to think again several times?
Can the answer they provide then be considered accurate?
I would still say no on "can it be considered accurate." It's likely only going to be certain of an answer that is broadly accepted, not accurate.
In other words, it's only going to give back what it was trained on. So it should just never be thought of as "accurate."
Your strategy is decent, and it's what I started with personally, but it feels more like a "VPN" and "multiple search engines" strategy that doesn't work as well with LLMs, imo. Still beneficial, but slower.
My recent go-to is to get it to explain the "correct" answer, and then give me the best possible refutation of it. Then do the same thing for an alternative explanation: steelman it first, then refute it with as much evidence and as many sources as you can. Then, for each source, both those supporting the theory and those supporting the opposing or alternative one, have it defend why it would be a reliable source, or why it would be a bad source or might have a conflict of interest. (I only go this far with politically controversial topics; AI answers on politically charged issues are fucking abysmal.)
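That whole chain is scriptable too. Here's a minimal sketch, again assuming the OpenAI Python SDK; the topic, prompts, and model name are illustrative placeholders:

```python
# Minimal sketch of the steelman-then-refute chain described above.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the
# environment; prompts and model are illustrative, not a fixed recipe.
from openai import OpenAI

client = OpenAI()

def ask(history, prompt):
    """Send one follow-up in the same conversation and keep the reply."""
    history.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(
        model="gpt-4o",  # any chat model works here
        messages=history,
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

topic = "the theory you're probing"  # hypothetical placeholder
history = []
steps = [
    f"Explain the mainstream 'correct' answer on {topic}.",
    "Now give the strongest possible refutation of that answer, citing sources.",
    "Steelman the leading alternative explanation, citing sources.",
    "Now refute that alternative with as much evidence as you can.",
    "For every source cited so far, explain why it would be reliable, "
    "why it would be a bad source, or what conflicts of interest it might have.",
]
for step in steps:
    print(ask(history, step), "\n" + "=" * 60)
```

Keeping everything in one `history` matters: each step has to see the steelman it's being asked to attack.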
AI can be super useful for exploring ideas or issues, but it should be seen as a window you look through, not a fact-checker or source of truth.
If you wouldn't read an article on an idea and accept the very first answer, then never do the same with an LLM.
Nah, my guy
I keep challenging it with logic until it gives me the answer that I want.
🤣
Indeed
I've heard of people feeding screenshots of bitcoin price charts to AI and asking for price predictions
In that case, you're not really asking the AI for its price prediction. Instead, you're asking it to predict what "technical analysts" would say about the price
And that's like asking astrologers. In a given field of (non-)expertise, the AI doesn't know how reliable the field is. It just predicts what humans would say in that field
AI is very good at writing software because the humans behind the AI know how to point the AI at reliable resources (like StackOverflow and Wikipedia) and because it's easier to see if a particular piece of software is working or not
The humans behind AI software want LLMs that can write good software, so that the LLM can improve itself, and therefore they constantly test and optimise that
Better find out how aligned the training data is for that
At least, when it gets caught in a lie, it doesn't double down. That's more than I can say for most humans.