Appreciate the clarification attempts. But to be fair, this all started with a confident claim that “more tokens” would eventually get us there — not loops, not memory, not scaffolding — just “tokens,” full stop. That’s not a strawman; it’s a direct quote:
“It’s very likely that given ‘more tokens’ in the abstract sense, current AI would eventually settle on the correct answer.”
— Posted July 22, 2025 · 12:27 PM
And in case that was too subtle, a few days earlier:
“Use more tokens, kids.”
— ynniv · 4d ago
This was in direct reply to:
“You’re not scaling reasoning, you’re scaling cache size.”
— Itamar Peretz · July 20, 2025 · 08:37 AM
If your view has since changed to “I don’t mean tokens alone” (July 24, 2025 · 1:10 PM), that’s totally fair — we all evolve our thinking. But that’s not what was argued initially. And if we’re now rewriting the premise retroactively, let’s just acknowledge that clearly.
So here’s the fulcrum:
Do you still believe that scaling token count alone (in the abstract) leads current LLMs to the correct answer, regardless of architectural constraints such as stateless inference, the lack of global memory, and the absence of explicit control flow?
• If yes, then respectfully, that contradicts how transformers actually work. A longer prompt widens the context each forward pass attends over, but the number of sequential layers every token passes through stays fixed. You’re scaling width, not depth (see the toy sketch after this list).
• If no, then we’re in agreement — and the original claim unravels on its own.
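To make the width-vs-depth point concrete, here’s a toy sketch in plain NumPy. It is not any real model’s code; D_MODEL, N_LAYERS, and forward are invented for illustration. The layer count is fixed when the stack is built, so feeding a longer sequence widens the computation at each layer but never adds sequential steps.

```python
# Toy illustration (not a real model): a fixed-depth stack of layers.
# Longer input = more positions processed in parallel (width),
# but every token still passes through exactly N_LAYERS sequential steps (depth).
import numpy as np

rng = np.random.default_rng(0)
D_MODEL, N_LAYERS = 16, 4  # depth is fixed here, chosen arbitrarily
layers = [rng.standard_normal((D_MODEL, D_MODEL)) / np.sqrt(D_MODEL)
          for _ in range(N_LAYERS)]

def forward(tokens: np.ndarray) -> np.ndarray:
    """tokens: (seq_len, d_model). Applies the same N_LAYERS transformations
    regardless of how long the sequence is."""
    x = tokens
    for w in layers:                 # sequential depth: always N_LAYERS
        x = np.tanh(x @ w)           # stand-in for an attention + MLP block
    return x

for seq_len in (8, 512, 32_768):
    out = forward(rng.standard_normal((seq_len, D_MODEL)))
    print(f"seq_len={seq_len:>6}  sequential layers applied={N_LAYERS}  "
          f"output shape={out.shape}")
```

Autoregressive decoding does repeat this fixed-depth pass once per generated token, but within any single pass a longer prompt buys you more parallel context, not more sequential reasoning.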
In either case, it’s worth remembering: you can’t scale humans, and humans are still what fill the reasoning gaps in these loops.