It never actually thinks for you, it only bullshits, huh?
Discussion
Some of the reasoning models will consider several differing perspectives before answering a more complex question, but the agentic ones don't. I use agents for specific tasks: I tell them exactly what result I want and ask them to show their work as I watch. I also run a script on every output that makes the agent rate, on a scale of 1-10, how confident it is in its answer (roughly like the sketch below). That alone saves so much time.
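For illustration, here's a minimal sketch of what a confidence-rating wrapper like that could look like. The `call_model` hook, the prompt wording, and the review threshold are all assumptions; the commenter doesn't share their actual script:

```python
import re
from typing import Callable, Optional, Tuple

# Instruction appended to every task so the agent self-reports confidence.
# The exact wording is an assumption; the original script isn't shown.
CONFIDENCE_SUFFIX = (
    "\n\nAfter your answer, add a final line in the form "
    "'Confidence: N/10' where N is an integer from 1 to 10."
)

def ask_with_confidence(
    call_model: Callable[[str], str],  # hypothetical hook into any LLM API
    task: str,
) -> Tuple[str, Optional[int]]:
    """Send a task, then parse the self-reported 1-10 confidence score."""
    reply = call_model(task + CONFIDENCE_SUFFIX)
    match = re.search(r"Confidence:\s*(\d{1,2})\s*/\s*10", reply)
    score = int(match.group(1)) if match else None  # None if it didn't comply
    return reply, score

# Usage: flag low-confidence answers for manual review.
# answer, score = ask_with_confidence(my_llm_call, "Refactor this function...")
# if score is None or score <= 6:
#     print("Low confidence, double-check this one:", answer)
```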