I try to avoid the word consciousness because it's so overloaded across disciplines.
By axiomatic definition, Nature itself is infinitely complex. Any model or description of it will be incomplete and always an approximation, because to completely represent any part of it you must also represent its relations with the rest of the system.
Models, on the other hand, work from explicit rules. When they encounter relationships that haven't been defined, you get breakdown: wear and tear on physical machines (a formal system embedded in a natural system), or nonsense and paradoxical statements of the kind demonstrated with Principia Mathematica. This happens because systems with countable foundations are bumping up against the natural world in ways that haven't been accounted for.
I'm not discounting the capacity of AI systems, but I'm doubtful they will have nature's complexity in and of themselves. They don't contain any agency not ascribed by their creators, because their development necessarily follows from the fact that their grading is described in code. Which is why I say the AGI conversation must also include what these systems enable humans to do.
This argument implies that AI developers worried about creating some unmanageable AI system (created through their own training paradigm) are in some form, intentionally or not, trying to absolve themselves of responsibility for what they create and for its impact on the people interacting with it.
Yeah, the static nature of these systems is a huge limiter that will prevent them from escaping the box... I feel like these are just engineering problems from now on: closing the loop, putting them in real environments, self-reinforcement learning in those environments, having these agents train their own agents. There needs to be some level of autonomy where these things can grow and evolve in an unrestrained way.
Yeah exactly. AI systems are closed systems by themselves; they only change because we know how to interact with them and change the model to fit our needs. Interacting with us is how they become open systems, where we supply them with energy and also apply evolutionary processes enabling them to adapt.
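To make that concrete with a toy: here's a minimal sketch of what "closing the loop" could mean in its most reduced form, a one-parameter agent whose only training signal comes from its own interactions with an environment, updated by a simple (1+1) evolution strategy. Everything in it (the environment function, the hidden target, the mutation size) is made up for illustration and isn't any real training setup.

```python
# Toy sketch of a closed training loop: the agent's only feedback is what
# the environment returns for its own actions. The environment, fitness,
# and mutation size are all invented for illustration.
import random


def environment(action: float) -> float:
    """Stand-in for the outside world: reward is higher near a hidden target."""
    hidden_target = 0.7
    return -abs(action - hidden_target)


def evolve(steps: int = 200, sigma: float = 0.1) -> float:
    """(1+1) evolution strategy: mutate the agent, evaluate it in the
    environment, keep whichever version does better. Selection pressure
    comes entirely from the environment, not from a hand-labelled dataset."""
    parent = 0.0
    parent_fitness = environment(parent)
    for _ in range(steps):
        child = parent + random.gauss(0.0, sigma)   # mutation
        child_fitness = environment(child)          # evaluation in the world
        if child_fitness >= parent_fitness:         # selection
            parent, parent_fitness = child, child_fitness
    return parent


if __name__ == "__main__":
    # The agent drifts toward the hidden target without ever being told it.
    print(evolve())
```

The point of the sketch is just the shape of the loop: the "open system" part is whatever sits inside environment(), and in current practice that function is still written and graded by us.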
The other type of AI I'm really interested in is mortal computation. It's not synthesized from scratch, but uses biological material as a starting point. So Michael Levin style: hook up an LLM to a cell boundary (or any other organic system) to communicate its needs to the broader system across scales, and then you're in some real psychedelic territory.
That's fairly aligned. The substrate doesn't matter, because natural systems respond and know how to respond; what you need to formally establish is a way to communicate with that system, because language, like math, is a set of relations abstracted from sensory input. My own understanding mostly comes from Robert Rosen's work. https://en.wikipedia.org/wiki/Anticipatory_Systems