I’m not fixated on “winning,” and I’m certainly not looking to drag this out. But if we’re walking something back, let’s be honest about what’s being walked back.
“Use more tokens, kids.”
— ynniv · 4d ago
“It’s very likely that given ‘more tokens’ in the abstract sense, current AI would eventually settle on the correct answer.”
— July 22, 2025 · 12:27 PM
“I don’t mean ‘tokens alone.’”
— July 24, 2025 · 1:10 PM
“I don’t believe, and never have, that scaling context size alone will accomplish anything.”
— July 24, 2025 · 7:53 PM
If the position was never “tokens alone,” I don’t know what to do with these earlier posts.
So I’ll ask one last time, gently:
Was “more tokens = eventual convergence” a rhetorical device, or a belief you’ve since revised?
We probably both agree that scaling context is not equivalent to scaling reasoning, and that transformers aren’t recursive, stateful, or inherently compositional.
That was all I was ever pointing out. If we’re aligned on that now, we can close the loop.
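For anyone following along, here’s a toy numpy sketch of the “not recursive, not stateful” part. Random weights, nothing from any real model: the point is only that a self-attention pass is a pure function of its context, while a recurrent cell actually carries state between steps.

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_pass(context: np.ndarray, Wq, Wk, Wv) -> np.ndarray:
    """One toy self-attention layer: a pure function of the context.

    Nothing persists between calls. Feeding in "more tokens" just
    widens this input; it adds no recursion and carries no state.
    """
    d = context.shape[-1]
    q, k, v = context @ Wq, context @ Wk, context @ Wv
    scores = q @ k.T / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))  # softmax
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

def rnn_step(x: np.ndarray, h: np.ndarray, W, U) -> np.ndarray:
    """By contrast, a recurrent cell: output depends on carried state h."""
    return np.tanh(x @ W + h @ U)

d = 8
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
ctx = rng.standard_normal((5, d))  # 5 tokens of width-8 context

# Same context in, same output out: there is no hidden state to change it.
assert np.allclose(attention_pass(ctx, Wq, Wk, Wv),
                   attention_pass(ctx, Wq, Wk, Wv))
```

Handing the first function a longer context widens its input; it doesn’t give it memory or recursion. That’s the whole distinction I thought we were agreeing on.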