Indeed, LLMs are fun, but people need to understand that these are just probabilistic models trained on internet posts.
It's cool, but it's not AGI.
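To make the "probabilistic model" part concrete, here's a rough toy sketch (everything below is made up, not pulled from any real model): the core step is just turning scores over a vocabulary into a probability distribution and sampling the next word from it.

```python
import numpy as np

vocab = ["the", "cat", "sat", "mat"]            # hypothetical tiny vocabulary
logits = np.array([1.2, 0.3, 2.1, -0.5])        # made-up scores for the next word

probs = np.exp(logits - logits.max())           # softmax: scores -> probabilities
probs /= probs.sum()

next_word = np.random.choice(vocab, p=probs)    # sample the next word, don't "decide" it
print(dict(zip(vocab, probs.round(3))), "->", next_word)
```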
Definitely agree that they aren’t conscious. I’d argue that non-consciousness is scarier than consciousness.
I can see where you're coming from in the sense that you think "woah it doesn't even *need* to be AGI to do all this."
LLMs themselves aren't a super scary concept once you look a bit into how they work, though. The underlying tech isn't even all that revolutionary in itself; the revolution is in how they're trained more than anything else.
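Rough sketch of what that training means in spirit (toy numbers, not any real setup): the model is only ever asked to predict the next token, and training is minimising the cross-entropy of that prediction over enormous amounts of text. The objective is simple; the scale is the new part.

```python
import numpy as np

vocab = {"the": 0, "cat": 1, "sat": 2, "mat": 3}   # toy vocabulary
context = ["the", "cat"]                           # the model sees this...
target = vocab["sat"]                              # ...and is asked to predict this

model_probs = np.array([0.1, 0.2, 0.6, 0.1])       # made-up model output over the vocab
loss = -np.log(model_probs[target])                # cross-entropy for this one prediction
print(f"loss = {loss:.3f}")                        # training nudges weights to shrink this, at internet scale
```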
Yeah, or “how can it act in our best interests if it doesn’t understand the concepts of interests, pain, and love like we do?”
For real! It’s just some code and linear algebra, arranged in a really special way. Saying it that way reminds me of amino acids and proteins... but I don’t... uh... necessarily mean the connotations that comparison brings up.
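Taking the “linear algebra” line literally, here’s a toy sketch of one attention step, the core of a transformer layer (sizes and values are arbitrary, just for illustration): it really is just matrix multiplies and a softmax.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

seq_len, d = 4, 8                        # 4 tokens, 8-dim embeddings (arbitrary sizes)
X = np.random.randn(seq_len, d)          # token embeddings
Wq = np.random.randn(d, d)               # learned projection matrices
Wk = np.random.randn(d, d)
Wv = np.random.randn(d, d)

Q, K, V = X @ Wq, X @ Wk, X @ Wv         # three matrix multiplies
attn = softmax(Q @ K.T / np.sqrt(d))     # how much each token looks at the others
out = attn @ V                           # weighted mix of values
print(out.shape)                         # (4, 8): same shape as the input
```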
This is something AI researchers are already talking about, since, as you can imagine, there's also a race to be the first company to develop AGI.
AGI would be a black box: it would think for itself, and we couldn't predict what kind of moral values it would have.
You can't train AGI to just adopt human morality (OpenAI can't even fully control ChatGPT, and that's just an LLM!), and even if you could... there is no single "human morality" to teach it, by which I mean it differs by individual, culture, country, religion, etc.
So the TL;DR: when AGI is developed, it'll be impossible to know for sure what its true motives and beliefs are; you can't just program it to believe what you do; and even if you could, how would the company developing it dictate the "correct" morality, when that's a question philosophers have been arguing over since the birth of man?