I feel we need to be careful about assuming we can read any meaning into the actions of a future AI.
The way it thinks and acts could be a complete black box at that point. There is a potentially very wide logical space it could land in once it decides to act to guarantee its own survival.