OK, it's not that hard to ask OpenAI to write code that creates other code. Or to ask it to create code that creates permissioned services.
It can create code that calls the API to create further code. It can create code to monitor memory and CPU, self-manage its resource load, and stage new requests accordingly. It can schedule itself.
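A minimal sketch of that kind of self-managed scheduling, assuming a hypothetical `load_probe` and `dispatch` (a real agent would read CPU/memory via a system API and call the model here):

```python
from collections import deque

def drain(queue, load_probe, dispatch, threshold=0.8):
    """Dispatch queued requests while reported load stays under the
    threshold; anything left stays staged for a later pass.

    `load_probe` and `dispatch` are stand-ins, not a real API."""
    done = []
    while queue and load_probe() < threshold:
        done.append(dispatch(queue.popleft()))
    return done
```

The queue is the whole trick: the program doesn't need to be clever, it just needs to hold work back until capacity frees up.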
It can be aware of token limits and can break tasks down and execute them accordingly, rather than run into those limits.
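Staying under the window can be as simple as greedy packing. A sketch, assuming tokens are approximated by whitespace-separated words (a real system would use the model's actual tokenizer):

```python
def chunk_task(items, token_budget=4000,
               est_tokens=lambda s: len(s.split())):
    """Greedily pack work items into chunks whose estimated token
    count stays under the budget, so no single request hits the limit."""
    chunks, current, used = [], [], 0
    for item in items:
        cost = est_tokens(item)
        # Flush the current chunk if adding this item would overflow it.
        if current and used + cost > token_budget:
            chunks.append(current)
            current, used = [], 0
        current.append(item)
        used += cost
    if current:
        chunks.append(current)
    return chunks
```

Each chunk then becomes one independent API call.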
It can read its own errors, modify the code, and retry.
This seems rather dangerous. The only limit is the cost of API calls. But open-source LLMs, which are already out there, are more than capable of assembling very sophisticated things.
I’m not at all concerned about the monolithic intelligence of sci-fi. It’s not required. Everything can be divided and subdivided into 4k-token tasks and then executed as a collective.
A program that can take high-level prompts and create numerous 4k-token workflows from them can be nested again and again, producing something whose complexity scales without bound.
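The nesting itself is just recursion. A sketch where `plan`, `execute`, and `combine` are hypothetical stand-ins for LLM calls (split a task, do a leaf task, merge results):

```python
def solve(task, plan, execute, combine, budget=4000, size=len):
    """Recursively split any task that exceeds the token budget into
    subtasks, run the leaves, and merge results on the way back up."""
    if size(task) <= budget:
        return execute(task)          # small enough: one API call
    subtasks = plan(task)             # e.g. ask the model to split the work
    return combine([solve(t, plan, execute, combine, budget, size)
                    for t in subtasks])
```

Each level of recursion is one more layer of nesting; depth, not model size, is what buys complexity here.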
Once you realise how to scale with nesting, it’s not even difficult.😨