Taken literally, we agree. What seems to be happening is that people vibe code something, it doesn't work, and they declare that AI "isn't real yet". Another defeatist take is to ask a very specific question about something you know very well, then watch it inevitably come back with a lame answer.
What I want people to notice is that most things are hard. It's very likely that, given "more tokens" in the broad sense (more attempts, more context, more rounds of iteration), current AI would eventually settle on the correct answer.
It's important to realize this because even if a task takes an LLM agent two days and $200 worth of tokens, the same task would probably take a person weeks or months and cost an order of magnitude more.
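To make that rough comparison concrete, here's a back-of-envelope sketch. Every number in it is an assumption picked to illustrate the shape of the argument, not a measurement:

```python
# Back-of-envelope: agent cost vs. human cost for the same task.
# All figures below are illustrative assumptions, not measurements.

AGENT_TOKEN_COST = 200        # USD of tokens, per the example above
AGENT_WALL_TIME_DAYS = 2      # assumed agent wall-clock time

HUMAN_WEEKS = 4               # assumed: "weeks or months", call it a month
HUMAN_WEEKLY_COST = 3_000     # assumed fully loaded engineer cost per week, USD

human_cost = HUMAN_WEEKS * HUMAN_WEEKLY_COST
print(f"Agent: ~${AGENT_TOKEN_COST} over {AGENT_WALL_TIME_DAYS} days")
print(f"Human: ~${human_cost} over {HUMAN_WEEKS} weeks")
print(f"Cost ratio: ~{human_cost / AGENT_TOKEN_COST:.0f}x")
```

Plug in your own rates; the point is that the gap stays large across any plausible set of numbers.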
And that's just today. Actually, that was just last week, because Kimi-K2 and Qwen Coder can basically do what Claude Sonnet does for 1/10 the token cost, and it isn't going to stop there.
Stay buoyant 🌊