Yeah, the static nature of these systems is a huge limiter that will prevent them from escaping the box... I feel like these are just engineering problems from here on: closing the loop, putting them in real environments, self-reinforcement learning in those environments, having these agents train their own agents. There needs to be some level of autonomy where these things can grow and evolve in an unrestrained way.
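To make "closing the loop" a bit more concrete, here's a minimal sketch: a (1+1) evolution strategy where an agent improves its own parameter purely from environment reward, no human in the loop. The environment, target value, and mutation scale are all made up for illustration; a real version would swap in an actual environment and policy.

```python
import random

# Toy "environment": reward is higher the closer the agent's single
# parameter gets to a hidden target. Stand-in for a real environment.
TARGET = 3.7

def episode_reward(param: float) -> float:
    return -abs(param - TARGET)

# Closed loop: the agent mutates its own parameter and keeps the mutant
# only if it scores better in the environment -- a (1+1) evolution
# strategy, i.e. selection pressure with no human intervention.
param = 0.0
for step in range(200):
    mutant = param + random.gauss(0.0, 0.5)
    if episode_reward(mutant) > episode_reward(param):
        param = mutant

print(f"learned param ~ {param:.2f} (target {TARGET})")
```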
Yeah exactly. AI systems are closed systems by themselves; they only change because we know how to interact with them and modify the model to fit our needs. Interacting with us is how they become open systems: we supply them with energy and also apply evolutionary processes, enabling them to adapt.
The other type of AI I'm really interested in is mortal computation. It's not synthesized from scratch, but uses biological material as a starting point. So, Michael Levin style: hook up an LLM to a cell boundary (or any other organic system) to communicate its needs to the broader system across scales, and then you're in some real psychedelic territory.
Reminds me a bit of https://en.wikipedia.org/wiki/Amorphous_computing
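For anyone who doesn't want to click through: the canonical amorphous computing primitive is gradient propagation, where many identical, randomly scattered "cells" with only short-range communication cooperatively compute each cell's hop distance to a source. A toy sketch below; the cell count and radius are arbitrary, and the asynchronous local rule is collapsed into a breadth-first pass for brevity.

```python
import random
from collections import deque

# Randomly place identical cells in a unit square; each can only talk
# to neighbors within a fixed communication radius.
random.seed(0)
N, RADIUS = 200, 0.12
cells = [(random.random(), random.random()) for _ in range(N)]

def neighbors(i):
    xi, yi = cells[i]
    return [j for j, (xj, yj) in enumerate(cells)
            if j != i and (xi - xj) ** 2 + (yi - yj) ** 2 <= RADIUS ** 2]

# Flood a hop-count gradient outward from cell 0. This BFS is what the
# asynchronous "my value = min neighbor + 1" local rule converges to.
hops = {0: 0}
queue = deque([0])
while queue:
    i = queue.popleft()
    for j in neighbors(i):
        if j not in hops:
            hops[j] = hops[i] + 1
            queue.append(j)

print(f"{len(hops)}/{N} cells reached; max hop count {max(hops.values())}")
```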
That's fairly aligned. The substrate doesn't matter, because natural systems respond and know how to respond; what you need to formally establish is a way to communicate with that system, because language, like math, is a set of relations abstracted from sensory input. My own understanding mostly comes from Robert Rosen's work. https://en.wikipedia.org/wiki/Anticipatory_Systems
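Rosen's core definition is simple enough to show in miniature: an anticipatory system contains a predictive model of itself or its environment and changes state *now* based on the model's forecast, not just the current reading. Everything in this sketch (the trend extrapolation, the setpoint) is illustrative, not from Rosen.

```python
def forecast(temp_history):
    # Crude internal model: extrapolate the recent trend one step ahead.
    return temp_history[-1] + (temp_history[-1] - temp_history[-2])

def heater_on(temp_history, setpoint=20.0):
    # A reactive controller would test temp_history[-1]; the
    # anticipatory one tests the predicted future state instead.
    return forecast(temp_history) < setpoint

print(heater_on([21.0, 20.4]))  # True: still warm, but cooling fast
```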