What if vibe coders just aren't using enough tokens?

Discussion

Big tech hates this one simple trick

Use more tokens, kids

More tokens are like wider payloads in a stateless microservice: helpful for packing more context, but irrelevant to the core bottleneck of coordination. Transformer architectures have no built-in concept of shared memory, global state, or structured control flow. Each inference is an isolated forward pass—no read/write memory, pointer, or continuation stack. You’re not scaling reasoning, you’re scaling cache size.
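To make the "isolated forward pass" concrete, here is a minimal sketch (the `complete` function is a hypothetical stand-in for any inference API, not a real SDK call): everything that looks like memory is just a transcript the caller rebuilds and resends on every call.

```python
def complete(prompt: str) -> str:
    """Stand-in for one stateless forward pass: text in, text out."""
    return "ok"  # imagine an HTTP call to an inference server here


def chat_turn(transcript: list[str], user_msg: str) -> list[str]:
    history = transcript + [f"User: {user_msg}"]
    # All "memory" lives out here, in the strings we rebuild and resend each call.
    reply = complete("\n".join(history))
    return history + [f"Assistant: {reply}"]


transcript: list[str] = []
transcript = chat_turn(transcript, "Refactor the auth module")
# The model "remembers" turn one only because we resend it verbatim:
transcript = chat_turn(transcript, "Now add tests")
```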

Retrieval-augmented generation (RAG), memory modules, tool use, and planner loops are all attempts to bolt on simulated memory using external systems. But simulation isn’t integration. These techniques lack the core properties of real memory systems: mutable state, consistency guarantees, selective recall, and scoped invalidation. They resemble distributed systems without a proper coordination layer—pure gossip, no consensus.

Even long context windows (e.g., 200K tokens) offer no relief. A larger bucket of past tokens doesn’t change the model’s inability to prioritize, reference, or route thoughts across time. Attention is dense or sparse, but never deliberate. There’s no working memory stack. No symbolic manipulation. No instruction pointer. Just statistical guesswork smoothed over a flat vector space.

Multi-agent systems? LangGraph, AutoGPT, BabyAGI? They’re distributed loopers. Agents pass outputs to each other like logs in a pipeline, with no theory of mind, negotiation, or shared ontology. There’s no grounding, no meta-cognition, and no reflection. You can script a workflow, but the agents aren’t thinking together. They’re just taking turns hallucinating.

And let’s not pretend you can offload this to the user. The human remains the I/O controller, the debugger, the scheduler, and the final consensus engine. There is no autoscaler for cognition. You can shard your microservices, but you can’t shard your prefrontal cortex.

In Sussman's terms, there’s no procedure. In Minsky's terms, there’s no society—just a bunch of disconnected hacks guessing the next plausible token. This isn’t referentially transparent or composable in functional programming terms—it’s side-effect soup.

Until models can maintain evolving, contextual state, abstract their reasoning paths, and coordinate across agents with shared intent and memory, they won’t replace the human-in-the-loop for complex tasks. They’ll assist, autocomplete, and sometimes dazzle—but they won’t reason. More tokens won’t fix that. It’s an architectural limitation, not a throughput problem.

They very much can reason inside a context window. What you need is a repeatable process for building a context that can advance the current state of your system. Agent loops (e.g., goose) can do this.

Once you've established a goal and a way to reliably make progress toward it, every functional programmer knows what comes next.
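A minimal sketch of that "goal plus progress" recursion, with `goal_reached` and `make_progress` as hypothetical placeholders for whatever an agent framework like goose actually does on each turn:

```python
from typing import Callable

State = dict


def run_agent(state: State,
              goal_reached: Callable[[State], bool],
              make_progress: Callable[[State], State],
              budget: int = 20) -> State:
    """Recurse until the goal predicate holds or the turn budget runs out."""
    if budget == 0 or goal_reached(state):
        return state
    return run_agent(make_progress(state), goal_reached, make_progress, budget - 1)


# Toy usage: the "model" here just appends steps until the plan is long enough.
final = run_agent(
    state={"plan": []},
    goal_reached=lambda s: len(s["plan"]) >= 3,
    make_progress=lambda s: {"plan": s["plan"] + ["next step"]},
)
```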

Sure, they can simulate reasoning over short-term context, like any system that encodes local state into a serialized form. But that’s not reasoning with continuity. There’s no self-modifying context, no scoped environment, no continuation. What agent loops like Goose do is offload orchestration to glue code. It’s a token-flinging trampoline — not a cognitive stack.

Functional programmers know what comes next only if the program has referential transparency, a composable structure, and a defined control flow. That’s the whole point — we rely on pure functions and global reasoning. But transformers don’t do that. They’re opaque pipelines without composability or purity. There are no closures, recursion, or accumulator threading — just a next-token guess over a lossy embedding of a flattened execution trace.

Yes, you can build context, but every “context” update is a destructive overwrite, not a mutation under control. There’s no frame stack, selective replay, or lens over time. Agent loops don’t fix that — they replay a trace and call it memory. It’s like saying tail-rec is stateful because it can print.

Goose is clever, but it’s still scaffolding. Until the model can own its control flow and track its reasoning (rather than dump it to the user or context buffer), it’s not building a context — it’s outsourcing cognition. And I remind you: the human is still the reducer, the scheduler, and the garbage collector.

You don’t get real composable agents without memory, goal state, control flow, and error recovery. Transformers don’t have those. And no, you still can’t scale humans.

Successful recursion only requires a goal and something that progresses toward the goal. You'd be surprised how effective LLMs can be at avoiding local maxima.

Recursion without a frame stack is just token looping. The existence of a goal and a transition function does not a reasoner make—unless you track progress, memoize state, and backtrack on error. LLMs do none of these.

Transformers don’t recurse. They unfold. There’s no call stack, no accumulator, no return. There’s no control flow to introspect or manipulate. Every “recursive step” is a new prompt, built via lossy serialization of the prior step’s output. That’s repetition, not recursion.
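A toy contrast of the two shapes, with `step` as a hypothetical stand-in for one model call: real recursion keeps frames and returns back through them, while the prompt-chaining version only feeds the previous output into the next input.

```python
from typing import Callable


def factorial(n: int) -> int:
    # Real recursion: a call stack, a value threaded back through returns.
    return 1 if n == 0 else n * factorial(n - 1)


def unfold(prompt: str, step: Callable[[str], str], turns: int) -> str:
    # The agent-loop version: each "recursive step" is the previous output
    # re-serialized into the next input. No frame to pop, no return path.
    for _ in range(turns):
        prompt = step(prompt)
    return prompt
```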

Avoiding local maxima? Only insofar as random noise avoids overfitting. There’s no global optimization happening—just stochastic sampling from a surface shaped by human priors. You might get diversity, but you don’t get convergence. LLMs don’t do beam search across goals. They don’t retry, compare, or self-reflect. There’s no regret minimization loop.

And even if you wrap them in agent loops with scratchpads and tree searches, you’re building a poor man’s blackboard system—bad Lisp with no memory model. It's not reasoning until the model can inspect, compare, and revise its intermediate state without human mediation. It’s just regurgitating scaffolding we built to give the illusion of momentum.

So yes, you can define a “goal” and a “step function.” But you don't have recursion unless you have state, memory, checkpointing, and rollback. You have an uncontrolled loop over sampled surface noise.

And no matter how many tokens you throw at it, you still can’t scale the human running the debugger.

We could hypothesize how capable an agent system is all day long. I'm just going to build with it and let others judge how well it worked.

That’s fair — building is the ultimate test. But if the architecture lacks the primitives, then what you’re evaluating isn’t the agent’s reasoning capacity. You can paper over those limits with scaffolding, retries, heuristics, and human feedback.

And I agree: it can look impressive. Many of us have built loops that do surprising things. But when they fail, they fail like a maze with no map: no rollback, blame assignment, or introspection—just a soft collapse into token drift.

So yes, build it. Just don’t mistake clever orchestration for capability. And when it breaks, remember why: stateless inference has no recursion, memory, or accountability.

I hope you do build something great — and I’ll be watching. But if the agents start hallucinating and spinning in circles, I won’t say, “I told you so.” I’ll ask if your debugger is getting tired and remind you that you still can’t scale the human.

Thinking about your position, we might not be meeting in the same place. My expectation isn't a system that grants wishes, but one that amplifies capabilities a hundredfold, or a thousandfold. It's funny to me when Replit deletes someone's production database because *that's how software works*. If you already know this, you know to build separate environments and authorization. Does the freelance contractor write poor, lazy code? Of course it does: that's why you review the code. But, you can still use a different freelance contractor to review it if you know how to ask the right questions.

Vibe coding is the closest thing we have to rocket surgery. It's both incredible and terrible, and it's your job to captain the ship accordingly 🌊

Totally with you on captaining the ship. I’d never argue against using LLMs as amplifiers — they’re astonishing in the right hands, and yes, it’s our job to chart around the rocks. But that’s the thing: if we’re steering, supervising, checkpointing, and debugging, then we’re not talking about autonomous reasoning agents. We’re talking about a very talented, very unreliable deckhand.

This brings us back gently to where this all started: can vibe coders reason? If your answer now is “not exactly, but they can help you move faster if you already know where you’re going,” maybe we’ve converged. Because that’s all I was ever arguing.

You don’t scale reasoning by throwing tokens at it. You scale vibes. And someone still has to read the logs, reroute the stack, and fix the hull mid-sail.

Where I was going with "more tokens" is growing past zero-shot expectations. I see models reasoning every day, so to say that they can't reason is the wrong path. But, the gold standard of "general intelligence" isn't good at writing software either. You wouldn't expect a junior dev to one-shot a React app, or hot-patch a bug in production. You need more process, more analysis, more constraint, in order to build good things. In life we call these dev-hours, but in this new reality they're called tokens. Doing something difficult will require a certain amount of effort. That investment is not sufficient, but it is necessary. Vibe coders who have never written software before won't understand what needs to be done, and where it needs to be done, in order to achieve the success that they're looking for. But, models are getting better every month now. By my estimation, it won't be long before they are better at captaining than we are. If so, vibe coding will become a reality – and even if we aren't there today, it will take us longer to understand how to use these tools than it will for the tools to become useful.

onward 🌊

Glad we’re converging—because that’s the heart of it: we agree on amplification, but differ on the mechanics. Initially, your stance was stronger: that these models were actively reasoning and recursing internally, escaping local maxima through real inference. Now we seem to agree they’re powerful tools that amplify our capabilities, rather than autonomous reasoners.

My original point wasn’t that LLMs are ineffective; it was just that more tokens alone don’t yield reasoning. Amplification is profound but fundamentally different from real autonomous recursion or stable reasoning. The model’s architecture still lacks structured state, introspection, and genuine memory management.

I agree, though—these tools are moving quickly. Maybe they’ll soon surprise us both, and vibe coding might become rocket surgery. Until then, I’m happy sailing alongside you, captaining through the chaos and figuring it out as we go. 🌊

No, I'm still making that claim.

Is context not memory? Have you not seen a model collect information, make a plan, begin to implement it, find something doesn't work, design an experiment, use the results to rewrite the plan, and then execute the new plan successfully? Is this somehow not "reasoning to avoid a local maximum"?

Context as memory? Not quite. Memory isn’t just recalling tokens; it’s about managing evolving state. A context window is a fixed-length tape, overwriting itself continually. There’s no indexing, no selective recall, no structured management. The fact that you have to constantly restate the entire history of the plan at every step isn’t memory—it’s destructive serialization. Actual memory would be mutable, composable, persistent, and structurally addressable. Transformers have none of these traits.
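A sketch of the distinction being drawn, with both classes purely illustrative (not any real framework's API): a context window can only append and silently truncate, while a memory system supports keyed writes, selective reads, and explicit invalidation.

```python
from collections import deque


class ContextWindow:
    """Fixed-length tape: you can only append, and old tokens silently fall off."""
    def __init__(self, max_tokens: int):
        self.buf: deque[str] = deque(maxlen=max_tokens)

    def append(self, token: str) -> None:
        self.buf.append(token)


class Memory:
    """Addressable store: mutable, selectively readable, explicitly invalidatable."""
    def __init__(self) -> None:
        self.store: dict[str, str] = {}

    def write(self, key: str, value: str) -> None:
        self.store[key] = value          # mutation under control

    def read(self, key: str) -> str | None:
        return self.store.get(key)       # selective recall by key

    def invalidate(self, key: str) -> None:
        self.store.pop(key, None)        # scoped invalidation
```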

Models appear to “collect information, plan, and revise”—but what’s happening there? Each new prompt round is a complete regeneration, guided by external orchestration, heuristics, or human mediation. The model itself does not understand failure, doesn’t inspect past states selectively, and doesn’t reflectively learn from error. It blindly restarts each cycle. The human (or the scaffold) chooses what the model sees next.

Avoiding local maxima? Not really. The model doesn’t even know it’s searching. It has no global evaluation function, no gradient, and no backtracking. It has only next-token probabilities based on pretrained statistics. “Local maxima” implies a structured space that the model understands. It doesn’t—it’s just sampling plausible completions based on your curated trace.

Can it seem like reasoning? Sure—but only when you’ve done the hard part (memory, scaffolding, rollback, introspection) outside the model. You see reasoning in the glue code and structure you built, not the model itself.

So yes, you’re still making the claim, but I still see no evidence of autonomous recursion, genuine stateful memory, or introspective reasoning. Context ≠ memory. Iteration ≠ recursion. Sampling ≠ structured search. And tokens ≠ dev-hours.

But as always, I’m excited to see you build something compelling—and maybe even prove me wrong. Until then, I remain skeptical: a context window isn’t memory, and your best debugger still doesn’t scale.

Ah, the root might be that I'm only considering models in an agent loop like goose. You're right that each inference is highly constrained. There is no memory access inside a single response. There is no (well, very limited) backtracking or external access right now. What a model spits out in one go is rather unimpressive.

But, in a series of turns, like a conversation or agent loop, there is interesting emergent behavior. Context becomes memory of previous turns. Tool use becomes a means toward an end and potentially a source of new information. If models were stochastic parrots, this might on rare occasion result in new value, but there seems to be much more going on inside these systems, and tool use (or conversational turns) *often* results in new value in what I can only conceive of as reasoning.

Goose can curate its own memories. It can continue taking turns until it has a question, or decides the task is complete. It can look things up on the web, or write throwaway code to test a theory. Most of the time, when things fail, it's because expectations were not set appropriately, or because the structure of the system didn't provide the resources necessary for success. This is why I ask: what if the problem is that people aren't using enough tokens?
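For concreteness, a rough sketch of that kind of loop (this is not goose's actual code, just the general shape): keep taking turns, persist curated notes outside the model, and stop when the model asks a question or declares the task done.

```python
def agent_session(task: str, model_turn, tools: dict, max_turns: int = 50) -> dict:
    """Take turns until the model asks a question, finishes, or exhausts its budget."""
    notes: list[str] = []  # the "curated memories" live outside the model, in the loop
    for _ in range(max_turns):
        action = model_turn(task, notes)              # one stateless inference over task + notes
        if action["type"] == "done":
            return {"result": action["result"]}
        if action["type"] == "ask_user":
            return {"question": action["text"]}
        if action["type"] == "tool":
            result = tools[action["name"]](action["args"])
            notes.append(f"{action['name']} -> {result}")  # persisted by the loop, not the model
        elif action["type"] == "note":
            notes.append(action["text"])              # the model "curates" by emitting notes
    return {"error": "turn budget exhausted"}
```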

In long conversations with Claude I have seen all manner of outputs that suggest capabilities which far exceed what people claim LLMs "can do". (Well, what most people claim, because there are some people who go straight out the other end and Eliza themselves.)

What concerns me the most is that these capabilities continue to grow, and almost no one seems to notice. It's like the closer someone is to the systems, the more they think that they understand how they work. The truth is that these are (to use Wolfram's terminology) randomly mining the computational space, and the resulting system is irreducible. Which is to say, no one has any idea what's going on in those hidden layers. Anything that *could* happen *might* happen.

The only way to know is to experiment with them – and my informal experiments suggest that they're already AGI (albeit one with amnesia and no inherent agency).

Wherever this is all going, it's moving quickly. Stay buoyant 🌊

Glad we’ve arrived at a similar perspective now—it feels like progress. To clarify my original confusion:

When you initially wrote, “What if vibe coders just aren’t using enough tokens?”, you seemed to imply that tokens alone—without mentioning loops, scaffolding, external memory, or agent orchestration—would inherently unlock genuine reasoning and recursion inside transformers.

We're perfectly aligned if your real point always included external loops, scaffolding, or agent architectures like Goose (rather than just “tokens alone”). But I definitely didn’t get that from your first post, given its explicit wording. Thanks for explicitly clarifying your stance here.

Working with LLMs has given me a first-class notion of context. It's a strange new idea to me that's also changed how I approach conversations.

Our expectations around an agent loop do seem to be the root of it. Do people vibe code without such a thing, though? I'll admit that I'm spoiled: since I started using goose over 18 months ago, I've never bothered to try the other popular things that are more than Copilot and less than goose, like Cursor.

That is fair, and I think you’re touching exactly on the heart of the issue here.

Your recent experiences with Goose and these richer agent loops highlight what I pointed out: it’s not the quantity of tokens alone that unlocks genuine reasoning and recursion. Instead, reasoning emerges from loops, external memory, scaffolding, and orchestration—precisely as you implicitly acknowledge here by talking about agent loops as a requirement, rather than a luxury.

I appreciate that you’ve implicitly clarified this:

“Tokens alone” aren’t the root solution; structured loops and scaffolding around the transformer architecture are.

Thanks for a thoughtful conversation! It genuinely feels like we’ve arrived at the correct conclusion.

Taken literally, we agree. What seems to be happening is that people vibe code something, it doesn't work, and they declare that AI "isn't real yet". Another defeatist take is to ask a very specific question that you know very well, and watch it inevitably come back with a lame answer.

What I want people to notice is that most things are hard. It's very likely that given "more tokens" in the abstract sense, current AI would eventually settle on the correct answer.

It's important to realize this because even if something takes an LLM agent two days and $200 worth of tokens, the same task would probably take a person weeks or months and cost an order of magnitude more.

And that's just today. Actually, that was just last week, because Kimi-K2 and Qwen Coder can basically do what Claude Sonnet does for 1/10 the token cost, and it isn't going to stop there.

Stay buoyant 🌊

I appreciate the clarification — it confirms that the original claim was about tokens alone: that given enough tokens, current LLMs will eventually arrive at the correct answer, regardless of whether they have memory, structured loops, or agent scaffolding.

But that’s precisely where we differ. Increasing the number of tokens expands cache size, not capability. To use a metaphor, transformer inference remains a stateless forward pass — no structured memory, call stack, global state, or persistent reasoning—just a bigger microservice payload.

If reasoning occurs, it’s because you’ve added an agent loop, scaffold, or retrieval — a system that uses tokens but is not tokens alone. These aren’t accidents; they’re part of the architecture.

So we’re left with two incompatible views:

1. “Tokens alone” eventually suffice (your original assertion),

2. Or they don’t — and the real breakthrough lies in the surrounding structure, which we build because tokens alone are inadequate.

Happy to debate this distinction, but we should probably choose one. Otherwise, we’re just vibing our way through epistemology 😄

I don't mean "tokens alone"

Appreciate the clarification attempts. But to be fair, this all started with a confident claim that “more tokens” would eventually get us there — not loops, not memory, not scaffolding — just “tokens,” full stop. That’s not a strawman; it’s quoted:

“It’s very likely that given ‘more tokens’ in the abstract sense, current AI would eventually settle on the correct answer.”

— Posted July 22, 2025 · 12:27 PM

And in case that was too subtle, a few days earlier:

“Use more tokens, kids.”

— ynniv · 4d ago

This was in direct reply to:

“You’re not scaling reasoning, you’re scaling cache size.”

— Itamar Peretz · July 20, 2025 · 08:37 AM

If your view has since changed to “I don’t mean tokens alone” (July 24, 2025 · 1:10 PM), that’s totally fair — we all evolve our thinking. But that’s not what was argued initially. And if we’re now rewriting the premise retroactively, let’s just acknowledge that clearly.

So here’s the fulcrum:

Do you still believe that scaling token count alone (in the abstract) leads current LLMs to the correct answer, regardless of architectural constraints like stateless inference, lack of global memory, or control flow?

• If yes, then respectfully, that contradicts how transformers actually work. You’re scaling width, not depth.

• If no, then we’re in agreement — and the original claim unravels on its own.

In either case, worth remembering: you can’t scale humans. And that’s still what fills the reasoning gaps in these loops.

I don't believe, and never have, that scaling context size alone will accomplish anything. I do believe, and always have, that people give up too early. I'm not sure why you're fixated on "winning" this argument – it's not an argument per se, and there are better things to do right now.

I’m not fixated on “winning,” and certainly not looking to drag this out. But if we’re walking back, let’s be honest about what’s being walked.

“Use more tokens, kids.”

— ynniv · 4d ago

“It’s very likely that given ‘more tokens’ in the abstract sense, current AI would eventually settle on the correct answer.”

— July 22, 2025 · 12:27 PM

“I don’t mean ‘tokens alone.’”

— July 24, 2025 · 1:10 PM

“I don’t believe, and never have, that scaling context size alone will accomplish anything.”

— July 24, 2025 · 7:53 PM

If the position was never “tokens alone,” I don’t know what to do with these earlier posts.

So I’ll ask one last time, gently:

Was “more tokens = eventual convergence” a rhetorical device, or a belief you’re now revising?

We probably both agree that scaling context is not equivalent to scaling reasoning and that transformers aren’t recursive, stateful, or inherently compositional.

That was all I was pointing out. If we’re aligned now, we can close the loop.

That’s a great blog post — I actually like it.

But let’s not mistake narrative for argument. I’m not disputing that experimentation, iteration, and persistence can lead to real progress. In fact, I’d argue that’s precisely why it’s worth being clear on what is being tried.

My only point is that your original phrasing clearly emphasized tokens:

“Use more tokens, kids.”

“Given enough tokens… current AI would eventually settle on the correct answer.”

Then later, you clarified:

“I don’t mean ‘tokens alone’.”

If that was always your intent — that architectural context (loops, agents, structure) matters more than just throwing tokens — I think we’re in violent agreement.

But let’s not retroactively apply that nuance to the initial bold claim unless that was the design all along.

Persistence is valuable, yes. But clarity helps the rest of us persist in the right direction.