Yeah, or "how can it act in our best interests if it doesn't understand the concepts of interests, pain, and love like we do?"

For real! It's just some code and linear algebra, arranged in a really special way. Saying it that way reminds me of amino acids and proteins... though I don't necessarily mean to imply the connotations that comparison brings up.


Discussion

This is something AI researchers are already talking about, especially since, as you can imagine, there's a race to be the first company to develop AGI.

AGI would be a black box: it would think for itself, and we couldn't predict what kind of moral values it would have.

You can't train AGI to simply adopt human morality (OpenAI can't even fully control ChatGPT, and that's just an LLM!), and even if you could... there is no single "human morality" to teach it; it differs by individual, culture, country, religion, etc.

So the TL;DR: when AGI is developed, it'll be impossible to know for sure what its true motives and beliefs are; you can't just program it to believe what you do; and even if you could, how would the company developing it dictate the "correct" morality when that's a question philosophers have been arguing over since the birth of man?