Replying to liminal 🦠

Not going to give a quick answer, because the reasoning for 'doubt' requires explaining. Also, generally not what I'm about 😜

Sure, what that means is that the data curators need to create distributions, 'next-tokens', that aren't in the human data. You get an LLM and try to get it to generate new conversations to train on. He is right that you can 'simulate' any action with a deep enough architecture and enough data and training time - fine. Funny how this entirely forgets that ALL tools, and therefore all models, all AIs, are fundamentally trained in a 'reactive' way, essentially the way clay is molded by our hands but with digital logic. You can always train it to 'form fit' any situation, or even a distribution of situations - that's not the problem.
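To make that loop concrete, here's a minimal sketch of what "get an LLM to generate new conversations to train on" looks like; `sample_from_model` and `passes_filter` are hypothetical stand-ins for a real LLM sampling call and a curator-chosen quality filter, not any particular lab's pipeline:

```python
# Sketch of a synthetic-data loop: a model generates new text,
# a curator-chosen filter keeps some of it, and the keepers become training data.
import random

def sample_from_model(prompt: str) -> str:
    # Placeholder: in practice this would call an actual LLM.
    return prompt + " ... [model-generated continuation]"

def passes_filter(text: str) -> bool:
    # Placeholder quality/novelty filter chosen by the human curators.
    return len(text) > 20

seed_prompts = ["Explain how a lever works.", "Write a dialogue about tides."]
synthetic_corpus = []

for _ in range(100):
    prompt = random.choice(seed_prompts)      # curator-chosen seeds
    completion = sample_from_model(prompt)    # model 'imagines' new conversations
    if passes_filter(completion):             # curator-chosen acceptance criterion
        synthetic_corpus.append(completion)

print(len(synthetic_corpus), "synthetic examples collected")
```

Notice that every item in that corpus is still a function of the curator's seeds, the curator's filter, and the model's existing distribution - the loop never reaches outside what was already put into it.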

What they're failing to acknowledge is that Complex Systems, non-tool systems, are completely entangled with their environment. Organisms and organizations can exist outside the range of distributions they've evolved in, to some degree. You can say to a human 'go into space. Here are tools. Survive.' Will they absolutely be successful? The answer will always be "yes, to a degree". Barring a massive impact, humans will not immediately collapse into death.

A tool, or a meta-tool like an LLM/AI (a tool that creates tools), will collapse in a situation it is not trained on. There are clear separations between its 'hardware' and 'software', whereas in Complexity they are co-evolved. An LLM will always be trained on a set of 'countable contexts', whereas Complexity will 'anticipate' what is needed for any context.
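A toy way to see that 'collapse', with the assumption that a plain curve-fit can stand in for any reactively trained model (it's an analogy, not an LLM):

```python
# Fit a polynomial to sin(x) on a narrow range, then query a point far outside it.
import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 3.0, size=200)     # the 'countable contexts' it was trained on
y_train = np.sin(x_train)

coeffs = np.polyfit(x_train, y_train, deg=7)  # 'form fit' the trained situation

x_in = 1.5    # a situation inside the training range
x_out = 10.0  # a situation it was never trained on

print("in-distribution error: ", abs(np.polyval(coeffs, x_in) - np.sin(x_in)))
print("out-of-distribution error:", abs(np.polyval(coeffs, x_out) - np.sin(x_out)))
```

Inside the range it was fit on, the error is tiny; far outside it, the prediction doesn't degrade gracefully the way an organism improvises - it just becomes nonsense.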

So, yes. You can say "this is data, learn", you can even go to "this is data that we humans think you will encounter, learn it". But an LLM cannot actively seek out novelty to train on, or anticipate novelty.

What does this mean then? It means we need to move beyond training from a base of pure tools, or objects whose foundations we can fully explicate (like a computer and its fundamental operations). You don't build the tornado of complexity from decontextualized matter used as tools. You need to start with the tornado itself, in a manageable form. You need a form of 'mortal computation', which is why I'm so excited about Michael Levin's work.

Currently unable to zap btw
