Some of the reasoning models consider several differing perspectives before answering more complex questions, but the agentic ones don't. I use agents to complete specific tasks: I tell them what result I want and ask them to show their work while I watch. I also have a script that, for every output, makes the agent rate its confidence in the answer on a scale of 1-10. That alone saves so much time.


Discussion

Nice. Yeah, I like that they break things into steps and do pseudo-logic. But they still don't use logic natively. AI doesn't think, not yet, and it isn't very context-aware or world-aware.

I've had several ideas for a thinking AI, but I have no time to develop them.