Replying to ynniv

Ah, the root might be that I'm only considering models in an agent loop like goose. You're right that each inference is highly constrained. There is no memory access inside a single response. There is no (well, very limited) backtracking or external access right now. What a model spits out in one go is rather unimpressive.

But, in a series of turns, like a conversation or agent loop, there is interesting emergent behavior. Context becomes memory of previous turns. Tool use becomes a means toward an end and potentially new information. If models were stochastic parrots, this might on rare occasion result in new value, but there seems to be much more going on inside these systems, and tool use (or conversational turns) *often* results in new value in what I can only conceive of as reasoning.

Goose can curate its own memories. It can continue taking turns until it has a question, or decides the task is complete. It can look things up on the web, or write throwaway code to test a theory. Most of the time, when things fail, it's because expectations were not set accordingly, or because the structure of the system didn't provide the resources necessary for success. This is why I ask: what if the problem is that people aren't using enough tokens?
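To make that loop concrete, here is a rough sketch in Python. This is not Goose's actual code; `call_model`, `run_tool`, and the message format are stand-ins for whatever model API and tools are actually in play:

```python
# A minimal agent loop: the growing message list is the only "memory",
# and tool results are how new information enters the context.

def call_model(messages):
    """Hypothetical LLM call. Assumed to return a dict like
    {"content": str, "tool_call": {"name": str, "args": dict} or None,
     "done": bool}."""
    raise NotImplementedError("plug in a real chat-completion API here")

def run_tool(name, args):
    """Hypothetical tool dispatch, e.g. a web search or running throwaway code."""
    raise NotImplementedError("plug in real tools here")

def agent_loop(task, max_turns=20):
    # Context as memory: every model output and tool result is appended,
    # so later inferences can condition on everything that came before.
    messages = [{"role": "user", "content": task}]

    for _ in range(max_turns):
        reply = call_model(messages)
        messages.append({"role": "assistant", "content": reply["content"]})

        # Stop when the model has a question for the user or decides
        # the task is complete.
        if reply["done"]:
            return messages

        # Otherwise, feed tool results back into the context as new information.
        if reply.get("tool_call"):
            result = run_tool(reply["tool_call"]["name"],
                              reply["tool_call"]["args"])
            messages.append({"role": "tool", "content": str(result)})

    return messages  # out of turns; the caller decides what to do next
```

The loop itself is almost trivially simple. Each call to the model is still a single, constrained forward pass; the interesting behavior comes from everything that accumulates in `messages` between calls.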

In long conversations with Claude I have seen all manner of outputs that suggest capabilities which far exceed what people claim LLMs "can do". (Well, what most people claim, because there are some people who go straight out the other end and Eliza themselves.)

What concerns me the most is that these capabilities continue to grow, and almost no one seems to notice. It's like the closer someone is to the systems, the more they think that they understand how they work. The truth is that these are (to use Wolfram's terminology) randomly mining the computational space, and the resulting system is computationally irreducible. Which is to say, no one has any idea what's going on in those hidden layers. Anything that *could* happen *might* happen.

The only way to know is to experiment with them – and my informal experiments suggest that they're already AGI (albeit one with amnesia and no inherent agency).

Wherever this is all going, it's moving quickly. Stay buoyant 🌊

Glad we’ve arrived at a similar perspective now—it feels like progress. To clarify my original confusion:

When you initially wrote, “What if vibe coders just aren’t using enough tokens?”, you seemed to imply that tokens alone—without mentioning loops, scaffolding, external memory, or agent orchestration—would inherently unlock genuine reasoning and recursion inside transformers.

We're perfectly aligned if your real point always included external loops, scaffolding, or agent architectures like Goose (rather than just “tokens alone”). But I definitely didn’t get that from your first post, given its explicit wording. Thanks for explicitly clarifying your stance here.

Discussion

Working with LLMs has given me a first-class notion of context. It's a strange new idea to me that's also changed how I approach conversations.

Our expectations around an agent loop do seem to be the root of it. Do people vibe code without such a thing, though? I'll admit that I'm spoiled: since I started using goose over 18 months ago, I've never bothered to try the other popular tools that sit between Copilot and goose, like Cursor.

That is fair, and I think you’re touching exactly on the heart of the issue here.

Your recent experiences with Goose and these richer agent loops highlight what I pointed out: it’s not the quantity of tokens alone that unlocks genuine reasoning and recursion. Instead, reasoning emerges from loops, external memory, scaffolding, and orchestration—precisely as you implicitly acknowledge here by talking about agent loops as a requirement, rather than a luxury.

I appreciate that you’ve implicitly clarified this:

“Tokens alone” aren’t the root solution; structured loops and scaffolding around the transformer architecture are.

Thanks for a thoughtful conversation! It genuinely feels like we’ve arrived at the correct conclusion.