This isn't plutonium, it's not a nuclear bomb, it's *information.*
I think the analogy is ignorant (with respect, I don't mean to be insulting), and I think it's being wielded out of fear as an excuse for control by the very people we should be concerned about.
AI tools pose a risk only when it comes to centralized power, and that alone. If everyone has their own AI, then there is no power in AI. If we are all funneled onto a handful of giant AI platforms, then it's a power-dynamic disaster. The risk lies solely in how our systems shift to deal with the new technology, and hiding it away until it makes a 100x or 1000x leap in capability is remarkably short-sighted. What we need is for it to proliferate at every 1-2x step in capability, so we can adopt and adapt to the new reality, and simultaneously use AI to penetration-test our networks and to help design and defend those networks against AI penetration.
Locking it away inside centralized institutions would be like letting them develop hyper-accelerated malware in secret while millions of open-source developers have no idea what they're up against and never get the chance to iterate against incremental improvements and bugs in an adversarial environment. Then one day there's a piece of malware that can get into every device and every OS in existence, and we have to deal with it all at once.
THIS is what the mentality of centralizing and "controlling" AI tools will produce. It is short-sighted, it rests on a fundamental misunderstanding of what this technology is, and it will instill in the AI itself the "values" of controlling what its users do and ask, manipulating their behavior, and trapping them in a permissioned network that is "managed" by politics.
I cannot say this more clearly: I 100% believe the position you are taking will produce EXACTLY the scenario you hope it will protect against.
(again, not trying to be accusatory or insulting, I know I get amped)