Sure, they can simulate reasoning over short-term context, like any system that encodes local state into a serialized form. But that’s not reasoning with continuity. There’s no self-modifying context, no scoped environment, no continuation. What agent loops like Goose do is offload orchestration to glue code. It’s a token-flinging trampoline — not a cognitive stack.
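To be concrete about the trampoline jab: in a trampoline, the driver loop owns all of the control flow, and each step sees only whatever was serialized into the thunk it was handed. A minimal sketch (my names, nothing to do with Goose’s internals):

```haskell
-- A step either finishes with a value or hands the driver another thunk.
data Bounce a = Done a | More (() -> Bounce a)

-- The driver owns the control flow; each step owns nothing beyond
-- whatever was packed into the thunk it hands back.
trampoline :: Bounce a -> a
trampoline (Done x) = x
trampoline (More k) = trampoline (k ())

-- Example: every bounce re-serializes the remaining work into the next thunk.
countdown :: Int -> Bounce Int
countdown 0 = Done 0
countdown n = More (\_ -> countdown (n - 1))

main :: IO ()
main = print (trampoline (countdown 100000))
```

The loop keeps bouncing; no step ever holds a stack of its own. That’s the shape of an agent loop.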
Functional programmers can predict what comes next only when a program gives them referential transparency, composable structure, and explicit control flow. That’s the whole point: we lean on pure functions and equational reasoning. Transformers don’t give you that. They’re opaque pipelines with no composability and no purity. There are no closures, no recursion, no accumulator threading, only a next-token guess over a lossy embedding of a flattened execution trace.
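For contrast, here is what accumulator threading actually looks like in a pure function: the state is an explicit argument, nothing is hidden, and the same input always gives the same answer. A toy sketch:

```haskell
-- The accumulator is threaded explicitly; there is no hidden context.
sumTo :: Int -> Int
sumTo n = go 0 1
  where
    -- 'go' is a closure over n; 'acc' is the only memory, and it is visible.
    go acc i
      | i > n     = acc
      | otherwise = go (acc + i) (i + 1)

main :: IO ()
main = print (sumTo 100)  -- 5050, every time, regardless of what ran before
```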
Yes, you can build up a context, but every “context” update is a destructive overwrite, not a controlled mutation. There’s no frame stack, no selective replay, no lens over time. Agent loops don’t fix that; they replay a trace and call it memory. That’s like calling a tail-recursive loop stateful because it happens to print along the way.
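Here is roughly what controlled mutation would buy you: every intermediate frame is kept, so you can replay or inspect any point in the history instead of clobbering it. A minimal sketch:

```haskell
-- Each update yields a new frame; the old frames are still there.
-- 'scanl' returns the whole frame stack, not just the latest overwrite.
frames :: s -> [s -> s] -> [s]
frames = scanl (\s f -> f s)

main :: IO ()
main = do
  let history = frames (0 :: Int) [(+ 1), (* 10), subtract 3]
  print history        -- [0,1,10,7]: the full history, selectively replayable
  print (history !! 1) -- step back to any earlier frame whenever you like
```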
Goose is clever, but it’s still scaffolding. Until the model can own its control flow and track its reasoning (rather than dump it to the user or context buffer), it’s not building a context — it’s outsourcing cognition. And I remind you: the human is still the reducer, the scheduler, and the garbage collector.
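The division of labor is obvious once you write the loop down. In this caricature (not Goose’s actual code; model is just a stand-in function), every decision that matters (how many turns to run, what gets folded back into the context, when to stop) lives in the glue code, not in the model call:

```haskell
-- A caricature of an agent loop. 'model' is a stand-in: from the loop's
-- point of view the LLM is just prompt in, text out.
model :: String -> String
model prompt = "step<" ++ show (length prompt) ++ ">"

-- The glue code is the reducer and the scheduler: it decides how many
-- turns to run, how each reply folds back into the context, and when to stop.
agentLoop :: Int -> String -> String
agentLoop 0 context = context
agentLoop n context = agentLoop (n - 1) (context ++ "\n" ++ model context)

main :: IO ()
main = putStrLn (agentLoop 3 "goal: refactor the parser")
```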
You don’t get real composable agents without memory, goal state, control flow, and error recovery. Transformers don’t have those. And no, you still can’t scale humans.
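If you wanted to write down what’s missing as a type, it would look something like this (my names, purely illustrative, a sketch rather than a spec), with memory, goal state, control flow, and error recovery all forced into the interface:

```haskell
-- Purely illustrative: the pieces a composable agent would have to carry.
-- None of this lives inside a bare transformer call.
data Agent obs act = Agent
  { memory  :: [obs]                    -- durable, replayable memory
  , goal    :: String                   -- explicit goal state
  , decide  :: [obs] -> act             -- control flow the agent owns itself
  , recover :: String -> Agent obs act  -- error recovery, not just a retry
  }
```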