I try to avoid the word "consciousness" because it's so overloaded across disciplines.
I take it as axiomatic that Nature itself is infinitely complex. Any model or description of a natural system is incomplete and always an approximation, because to represent the system completely you would also have to represent its relations with everything else it is embedded in.
Models, on the other hand, work from explicit rules. When they run into relationships that haven't been defined, they break down: a physical machine (a formal system embedded in a natural one) shows it as wear and tear, and a formal system proper produces nonsense or paradoxical statements, as Gödel demonstrated for Principia Mathematica. This happens because systems built on countable foundations bump up against the natural world in ways their rules never accounted for.
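To make that last point concrete, here is a schematic gloss on Gödel's first incompleteness theorem, the result usually cited for Principia Mathematica; this is my illustration, not a claim from anyone else's argument, and the notation ($T$, $G_T$, $\mathrm{Prov}_T$) is the standard textbook shorthand:

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Schematic statement of Goedel's first incompleteness theorem.
% Prov_T(x) reads "x is the code of a sentence provable in T".
For any consistent, effectively axiomatized theory $T$ that can express
basic arithmetic (Principia Mathematica is the classic example), one can
construct a sentence $G_T$ such that
\[
  T \vdash G_T \;\leftrightarrow\; \neg\,\mathrm{Prov}_T\!\left(\ulcorner G_T \urcorner\right),
\]
and if $T$ is consistent it cannot prove $G_T$, even though $G_T$ is true
of the natural numbers: the rule system generates a statement its own
rules cannot settle.
\end{document}
```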
I'm not discounting the capacity of AI systems, but I doubt they will have nature's complexity in and of themselves. They contain no agency beyond what their creators ascribe to them, because their development is necessarily driven by a grading signal that is itself written in code. That is why I say the AGI conversation must also include what these systems enable humans to do.
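To illustrate what I mean by the grading being described in code, here is a minimal, hypothetical sketch; the names (reward, update) and the toy scoring rules are mine for illustration, not any lab's actual training loop or API:

```python
# Minimal, hypothetical sketch: the "grading" of an AI system is a
# human-written function. Every notion of "better" the system ends up
# optimizing traces back to choices its developers encoded here.

def reward(response: str) -> float:
    """A developer-authored rubric: the system's entire sense of
    'better' is whatever this function returns."""
    score = 0.0
    if len(response) < 500:          # developers chose to prefer brevity
        score += 1.0
    if "I don't know" in response:   # developers chose to reward hedging
        score += 0.5
    return score


def update(policy_params: dict, response: str, lr: float = 0.1) -> dict:
    """Toy update step: nudge parameters in proportion to the
    developer-defined reward. The direction of 'improvement' is fixed
    by reward() above, not by the system itself."""
    r = reward(response)
    return {k: v + lr * r for k, v in policy_params.items()}


if __name__ == "__main__":
    params = {"w0": 0.0, "w1": 0.0}
    params = update(params, "I don't know, but here is a short answer.")
    print(params)  # whatever behavior emerges is downstream of reward()
```

The point of the sketch is only that the gradient of "better" is authored, line by line, by people.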
This argument implies that AI developers who worry about creating some unmanageable system through their own training paradigm are, intentionally or not, trying to absolve themselves of responsibility for what they create and for its impact on the people who interact with it.