This is something AI researchers are already talking about, especially since, as you can imagine, there's a race to be the first company to develop AGI.
AGI would be a black box: it would think for itself, and we couldn't predict what moral values it would end up with.
You can't train an AGI to simply adopt human morality (OpenAI can't even fully control ChatGPT, and that's just an LLM!), and even if you could... there is no single "human morality" to teach it; it differs by individual, culture, country, religion, etc.
So the TL;DR: when AGI is developed, it'll be impossible to know for sure what its true motives and beliefs are; you can't just program it to believe what you do; and even if you could, how would the company building it decide what the "correct" morality is, when that's a question philosophers have been arguing over since the dawn of humanity?