I refuse to believe that any form of LLM is AGI. Predictive models can never be truly intelligent.
o3 feels like AGI. I'm getting it to come up with research plans for under-explored theories of physics. This has been my personal Turing test… this is the first time it has actually generated something novel that doesn't seem like BS.
https://chatgpt.com/share/6803b313-c5a0-800f-ac62-1d81ede3ff75
An analysis of the plan from another o3:
“The proposal is not naive crankery; it riffs on real trends in quantum-information-inspired gravity and categorical quantum theory, and it packages them in a clean, process-centric manifesto that many working theorists would secretly like to see succeed.”
Discussion
I have the complete opposite view. I think the way humans output words and thoughts is likely similar tech, like a Markov chain of things that sound plausible. Most humans pump out dumber shit than most LLMs these days.
I think humans are much more complex: we operate with something like 0.000001% of the data, and we learn by being shown how to do something, not from a million examples of completed results.
That's true, but it's more about how quickly and efficiently we can train the system. The way I see it, we are both probabilistic robots sampling from our learned knowledge distributions.
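To make the "sampling from a learned distribution" picture concrete, here's a toy sketch (my own illustration, not a claim about how LLMs actually work): a bigram Markov chain that fits word-follow counts on a tiny corpus and then samples plausible-sounding continuations.

```python
import random
from collections import defaultdict

# Toy "sampling from a learned distribution": fit bigram counts, then
# generate text by sampling each next word in proportion to those counts.

def fit_bigrams(text):
    """Count how often each word follows each other word."""
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def sample(counts, start, length=10):
    """Produce a plausible-sounding continuation from the fitted counts."""
    word, out = start, [start]
    for _ in range(length):
        followers = counts.get(word)
        if not followers:
            break  # no learned continuation for this word
        word = random.choices(list(followers), weights=list(followers.values()))[0]
        out.append(word)
    return " ".join(out)

corpus = "the model predicts the next word and the next word sounds plausible"
print(sample(fit_bigrams(corpus), "the"))
```

Obviously an LLM's learned distribution is vastly richer than a bigram table, but the basic move, sample the next token from whatever distribution was learned, is the same shape.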
I get that - it looks like we're just input-output machines, but that's because we're limited by our perception, our maps of the world. The difference is that the maps are not the territory. Formal systems are not natural systems, because they are rules we've abstracted from the relations between our sensory maps.
Nature itself is a giant communicating mass, evolving toward a higher energy rate density over time, which also produces a higher density of communication (more loosely defined than energy rate density). Formal systems have countable rules and are created for a purpose. They evolve due to utility, meaning they change because their creators want them to.
So maybe the difference is semantic, but I can't imagine any formal/AI system suddenly waking up, regardless of how complicated it is or how many rules you give it. The only AGI conversation that makes sense to me is the one that includes the natural systems that create and enable these tools, and the levels to which those natural systems are elevated by their tools.
When you say waking up, do you mean conscious? My definition of AGI doesn't necessarily include that. AGI = advanced general intelligence.
The intelligence is advanced and general already.
I try to avoid the word consciousness because it's so overloaded across disciplines.
By axiomatic definition, Nature itself is infinitely complex. Any model or description will be incomplete and always an approximation, because to completely represent it you must also represent its relations with the rest of the system.
Models, on the other hand, work from explicit rules. When they encounter relationships that haven't been defined, you get breakdowns: wear and tear on physical machines (a formal system embedded in a natural system), or nonsense/paradoxical statements, as Principia Mathematica demonstrated. This happens because those systems with countable foundations are bumping up against the natural world in ways that haven't been accounted for.
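As a toy illustration of that point (my own example, nothing from the thread): a "model" built from a finite rule table works only for the relationships its creators wrote down, and it breaks the moment the world hands it a case outside those rules.

```python
# A formal system as a finite, countable rule table. It answers inside its
# defined relations and simply breaks outside them - the analogue of
# "relationships that haven't been defined".

RULES = {
    ("water", "freeze"): "ice",
    ("water", "boil"): "steam",
    ("ice", "melt"): "water",
}

def formal_model(state, event):
    # No rule, no answer: the system has nothing to say about this relation.
    return RULES[(state, event)]

print(formal_model("water", "boil"))   # "steam" - within the defined rules
print(formal_model("steam", "cool"))   # KeyError - the world exceeded the rule table
```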
I'm not discounting the capacity of AI systems, but I'm doubtful they will have nature's complexity in and of themselves. They don't contain any agency not ascribed by their creators, because their development necessarily follows from the fact that their grading is specified in code. Which is why I say the AGI conversation must also include what these systems enable humans to do.
This argument implies that AI developers worried about creating some unmanageable AI system (created through their own training paradigm) are, intentionally or not, trying to absolve themselves of what they create and of its impacts on the people interacting with it.
Yeah, the static nature of these systems is a huge limiter that will prevent them from escaping the box... I feel like these are just engineering problems from now on: closing the loop, putting them in real environments, self-reinforcement learning in those environments, having these agents train their own agents. There needs to be some level of autonomy where these things can grow and evolve in an unrestrained way.
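To make "closing the loop" concrete, here's a deliberately tiny sketch (my own stand-in, using hill climbing rather than real reinforcement learning, with a made-up toy environment): the system proposes changes to itself, the environment grades them, and it keeps whatever helped, with no human in the loop.

```python
import random

# Minimal closed-loop self-improvement sketch (illustrative only):
# act in an environment, score the rollout, keep changes that helped.

def environment(action: float) -> float:
    """Toy environment: reward peaks when the action is near 3.0."""
    return -(action - 3.0) ** 2

def rollout(policy_param: float) -> float:
    """One episode: the 'policy' is just a noisy action around its parameter."""
    action = policy_param + random.gauss(0, 0.1)
    return environment(action)

def closed_loop(steps: int = 500) -> float:
    param = 0.0
    best = rollout(param)
    for _ in range(steps):
        candidate = param + random.gauss(0, 0.5)  # the system proposes a change to itself
        score = rollout(candidate)                # the environment grades it, no human involved
        if score > best:                          # keep the change only if it helped
            param, best = candidate, score
    return param

print(closed_loop())  # drifts toward ~3.0 purely from environment feedback
```

Swap the toy environment for a real one and the single parameter for model weights and you get the shape of the loop being described, though the hard part is everything hidden inside those two substitutions.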
Yeah exactly. AI systems are closed systems by themselves; they only change because we know how to interact with them and change the model to fit our needs. Interacting with us is how they become open systems, where we supply them with energy and also apply evolutionary processes enabling them to adapt.
The other type of AI I'm really interested in is mortal computation. It's not synthesized from scratch, but uses biological material as a starting point. So Michael Levin style: hook up an LLM to a cell boundary (or any other organic system) to communicate its needs to the broader system across scales, and then you're in some real psychedelic territory.
reminds me of https://en.wikipedia.org/wiki/Amorphous_computing a bit
That's fairly aligned. The substrate doesn't matter because natural systems respond and know how to respond; what you need to formally establish is a way to communicate with that system, because language, like math, is a set of relations abstracted from sensory input. My own understanding mostly comes from Robert Rosen's work. https://en.wikipedia.org/wiki/Anticipatory_Systems
Bro, NPCs maybe.
Why do you think we operate with less data? Our genes have billions of years of training data baked into them.
Every model fails the Turing test, as long as you inform the human what to look out for, because these models have specific failure modes that are completely trivial for humans. Playing whack-a-mole on those things with additional training should be a sign that something else is happening compared to humans.