Context as memory? Not quite. Memory isn’t just recalling tokens; it’s about managing evolving state. A context window is a fixed-length tape, overwriting itself continually. There’s no indexing, no selective recall, no structured management. The fact that you have to constantly restate the entire history of the plan at every step isn’t memory—it’s destructive serialization. Actual memory would be mutable, composable, persistent, and structurally addressable. Transformers have none of these traits.
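
To make the distinction concrete, here's a toy sketch in Python. It's purely illustrative (no real system works exactly like this), but it shows the difference between a sliding window and state you can actually address:

```python
from collections import deque

# A context window behaves like a fixed-length buffer: appending new tokens
# silently evicts the oldest ones, and there is no way to address or revise
# an individual entry after the fact.
context = deque(maxlen=8)
for token in "the plan has twelve steps and step three already failed".split():
    context.append(token)
print(list(context))   # the beginning of the "plan" is already gone

# What I mean by memory: keyed, mutable, persistent, selectively addressable.
memory = {
    "plan": ["step 1", "step 2", "step 3"],
    "status": "step 3 failed",
}
memory["plan"][2] = "step 3 (retry)"   # mutate one entry in place
memory["status"] = "retrying"          # update state without replaying the history
print(memory["plan"])                  # recall exactly what you need, nothing else
```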

Models appear to “collect information, plan, and revise”—but what’s happening there? Each new prompt round is a complete regeneration, guided by external orchestration, heuristics, or human mediation. The model itself does not understand failure, doesn’t inspect past states selectively, and doesn’t reflectively learn from error. It blindly restarts each cycle. The human (or the scaffold) chooses what the model sees next.
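
Stripped down, the round trip looks something like this. `call_model` and `critique` are stand-ins, not any real API; the point is where the judgment and the "revision" actually live:

```python
def call_model(prompt: str) -> str:
    # stand-in for an LLM call: returns a canned completion
    return f"<completion conditioned on {len(prompt)} chars of prompt>"

def critique(output: str) -> str | None:
    # stand-in for the external judge: a heuristic, a test suite, or a human
    return None if "ok" in output else "try again"

def orchestrate(task: str, max_rounds: int = 5) -> str:
    transcript = f"Task: {task}\n"
    output = ""
    for _ in range(max_rounds):
        output = call_model(transcript)   # a complete regeneration, every round
        feedback = critique(output)       # failure is detected out here, not in the model
        if feedback is None:
            break                         # "done" is the scaffold's call, not the model's
        transcript += f"Attempt: {output}\nFeedback: {feedback}\n"
    return output
```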

Avoiding local maxima? Not really. The model doesn’t even know it’s searching. It has no global evaluation function, no gradient, and no backtracking. It has only next-token probabilities based on pretrained statistics. “Local maxima” implies a structured space that the model understands. It doesn’t—it’s just sampling plausible completions based on your curated trace.
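
In toy form, the contrast is between these two functions. The first is all the model has; the second is what talk of "local maxima" presupposes:

```python
import random

# Sampling: draw the next token from a distribution. No objective is being
# maximized, and there is no way to back up once a token is emitted.
def sample(probs: dict[str, float], steps: int) -> list[str]:
    tokens, weights = list(probs), list(probs.values())
    return [random.choices(tokens, weights)[0] for _ in range(steps)]

# Search: an explicit score over a space the searcher can enumerate, with the
# ability to explore a branch and discard it when a better one turns up.
def best_path(options: list[str], depth: int, score) -> tuple[float, list[str]]:
    if depth == 0:
        return 0.0, []
    best_total, best_seq = float("-inf"), []
    for opt in options:                                    # explore a branch
        sub_total, sub_seq = best_path(options, depth - 1, score)
        total = score(opt) + sub_total
        if total > best_total:                             # keep it or throw it away
            best_total, best_seq = total, [opt] + sub_seq
    return best_total, best_seq
```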

Can it seem like reasoning? Sure—but only when you’ve done the hard part (memory, scaffolding, rollback, introspection) outside the model. You see reasoning in the glue code and structure you built, not the model itself.

So yes, you’re still making the claim, but I still see no evidence of autonomous recursion, genuine stateful memory, or introspective reasoning. Context ≠ memory. Iteration ≠ recursion. Sampling ≠ structured search. And tokens ≠ dev-hours.

But as always, I’m excited to see you build something compelling—and maybe even prove me wrong. Until then, I remain skeptical: a context window isn’t memory, and your best debugger still doesn’t scale.

Discussion

Ah, the root might be that I'm only considering models in an agent loop like goose. You're right that each inference is highly constrained. There is no memory access inside a single response. There is no (well, very limited) backtracking or external access right now. What a model spits out in one go is rather unimpressive.

But, in a series of turns, like a conversation or agent loop, there is interesting emergent behavior. Context becomes memory of previous turns. Tool use becomes a means toward an end and, potentially, a source of new information. If models were stochastic parrots, this might on rare occasion result in new value, but there seems to be much more going on inside these systems, and tool use (or conversational turns) *often* results in new value in what I can only conceive of as reasoning.

Goose can curate its own memories. It can continue taking turns until it has a question, or decides the task is complete. It can look things up on the web, or write throwaway code to test a theory. Most of the time when things fail, it's because expectations were not set accordingly, or because the structure of the system didn't provide the resources necessary for success. This is why I ask: what if the problem is that people aren't using enough tokens?
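
Roughly, the loop I have in mind looks like this (the names are my own shorthand for the pattern, not goose's actual API):

```python
from dataclasses import dataclass, field

@dataclass
class Reply:
    kind: str                  # "question" | "done" | "tool_call"
    text: str = ""
    tool: str = ""
    args: dict = field(default_factory=dict)

def agent_loop(task, model, tools, memory, max_turns=50):
    context = list(memory) + [f"Task: {task}"]
    for _ in range(max_turns):
        reply: Reply = model(context)                 # one turn
        context.append(reply.text)
        if reply.kind == "question":                  # pause and ask the human
            return reply
        if reply.kind == "done":                      # the model decides it's finished
            memory.append(reply.text)                 # curate a memory for next time
            return reply
        if reply.kind == "tool_call":                 # web lookup, throwaway code, ...
            result = tools[reply.tool](**reply.args)  # result comes back as new information
            context.append(f"{reply.tool}: {result}")
    return None  # out of turns: usually an expectations or resources problem
```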

In long conversations with Claude I have seen all manner of outputs that suggest capabilities which far exceed what people claim LLMs "can do". (Well, what most people claim, because there are some people who go straight out the other end and Eliza themselves.)

What concerns me the most is that these capabilities continue to grow, and almost no one seems to notice. It's like the closer someone is to the systems, the more they think that they understand how they work. The truth is that these models are (to use Wolfram's terminology) randomly mining the computational space, and the resulting system is computationally irreducible. Which is to say, no one has any idea what's going on in those hidden layers. Anything that *could* happen *might* happen.

The only way to know is to experiment with them – and my informal experiments suggest that they're already AGI (albeit one with amnesia and no inherent agency).

Wherever this is all going, it's moving quickly. Stay buoyant 🌊

Glad we’ve arrived at a similar perspective now—it feels like progress. To clarify my original confusion:

When you initially wrote, “What if vibe coders just aren’t using enough tokens?”, you seemed to imply that tokens alone—without mentioning loops, scaffolding, external memory, or agent orchestration—would inherently unlock genuine reasoning and recursion inside transformers.

We're perfectly aligned if your real point always included external loops, scaffolding, or agent architectures like Goose (rather than just “tokens alone”). But I definitely didn’t get that from your first post, given its explicit wording. Thanks for explicitly clarifying your stance here.

Working with LLMs has given me a first-class notion of context. It's a strange new idea to me that's also changed how I approach conversations.

Our expectations around an agent loop do seem to be the root of it. Do people vibe code without such a thing, though? I'll admit that I'm spoiled: since I started using goose over 18 months ago, I never bothered to try the other popular tools that are more than Copilot and less than goose, like Cursor.

That is fair, and I think you’re touching exactly on the heart of the issue here.

Your recent experiences with Goose and these richer agent loops highlight what I pointed out: it’s not the quantity of tokens alone that unlocks genuine reasoning and recursion. Instead, reasoning emerges from loops, external memory, scaffolding, and orchestration—precisely as you implicitly acknowledge here by talking about agent loops as a requirement, rather than a luxury.

I appreciate that you’ve implicitly clarified this:

“Tokens alone” aren’t the root solution; structured loops and scaffolding around the transformer architecture are.

Thanks for a thoughtful conversation! It genuinely feels like we’ve arrived at the correct conclusion.