In case it still hasn’t reached your circle: no, probabilistic language models are not going to take over. They genuinely can’t even count.
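To make that concrete: these models read subword tokens, not letters, so a question like "how many r's are in strawberry?" is about units the model never directly sees. A rough illustration, assuming the tiktoken package is installed (pip install tiktoken); the exact split depends on the vocabulary, so treat it as illustrative:

```python
# Why letter-counting is hard: the model sees subword tokens, not characters.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
word = "strawberry"
pieces = [enc.decode_single_token_bytes(t) for t in enc.encode(word)]
print(pieces)  # prints a few byte chunks, not ten individual letters
```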
Discussion
Indeed, LLMs are fun but people need to understand these are just probabilistic models that have been trained on internet posts.
It's cool but it's not AGI.
Definitely agree that they aren’t conscious. I would argue that non-consciousness is scarier than consciousness.
I can see where you're coming from in the sense that you think "woah it doesn't even *need* to be AGI to do all this."
LLMs themselves aren't a super scary concept if you research a bit into how they work, though. The underlying tech isn't even super revolutionary as such; the revolution is in how they're trained more than anything else.
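To make "probabilistic model" concrete, here's a toy sketch. Everything in it is made up (a tiny hand-written vocabulary and probabilities, not a real model); the point is just the loop: given context, get a distribution over next tokens, sample one.

```python
import random

# Hand-written next-token distributions for a toy "model".
# A real LLM computes these with a neural net over a huge vocabulary;
# the sampling step at the end is the same idea.
NEXT_TOKEN_PROBS = {
    "the cat": {"sat": 0.6, "ran": 0.3, "meowed": 0.1},
    "cat sat": {"on": 0.8, "down": 0.2},
}

def sample_next(context: str) -> str:
    dist = NEXT_TOKEN_PROBS[context]
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample_next("the cat"))  # usually "sat", sometimes "ran" or "meowed"
```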
Yeah, or “how can it act in our best interests if it doesn’t understand the concepts of interests, pain, and love like we do?”
For real! It’s just some code and linear algebra, arranged in a really special way. Put that way, it reminds me of amino acids and proteins.. but I don’t.. uh.. necessarily.. mean the connotations that comparison brings up.
This is something AI researchers are already talking about, since, as you can imagine, there's also a race to be the first company to develop AGI.
AGI would be a black box: it would think for itself, and we can't predict what kind of moral values it would have.
You can't train AGI to just believe human morality (OpenAI can't even control ChatGPT, and that's just an LLM!) and even if you could... there is no "human morality" to teach it, by which I mean it differs by individual, culture, country, religion, etc.
So the TL;DR: when AGI is developed, it'll be impossible to know for sure what its true motives and beliefs are; you can't just program it to believe what you do; and even if you could, how would a company developing it dictate the "correct" morality when that's a question philosophers have been arguing over since the birth of man?
They can use a calculator, though, and know when they should use one. The fact that they lack some internal ability matters less if they can easily externalize that task, right?
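Roughly, "use a calculator" looks like this. It's a hedged sketch: the CALC(...) format and the canned model reply below are hypothetical, not any vendor's real API, but this is the general shape of tool use.

```python
import re

def fake_model_output(prompt: str) -> str:
    # Stand-in for an LLM that has learned to emit a tool call
    # instead of guessing at arithmetic.
    return "The answer is CALC(1234 * 5678)."

def run_with_tools(prompt: str) -> str:
    reply = fake_model_output(prompt)

    def evaluate(match: re.Match) -> str:
        expr = match.group(1)
        # Only allow bare arithmetic; a real harness would use a proper parser.
        if not re.fullmatch(r"[\d\s+\-*/().]+", expr):
            raise ValueError(f"unsafe expression: {expr!r}")
        return str(eval(expr))

    # Replace each tool call with the tool's result.
    return re.sub(r"CALC\(([^)]*)\)", evaluate, reply)

print(run_with_tools("What is 1234 * 5678?"))  # -> "The answer is 7006652."
```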
What can they not do that also can’t be looked up via an API?
I read the conversation you and #[3] had and appreciate both perspectives, so I don't really want to rehash it all here. Just a fun, light-hearted example I just made with gpt4 to illustrate my point:
Maybe I’m using this app wrong, but that comes as a pleasant surprise. Yeah, I might talk about this topic too much.. but given the rate of improvement, and the potential ROI for winners in the space (not to mention the potential losses for the losers), it feels immensely pressing.
Love examples like that though. The current stuff is decidedly dumb in some ways. Maybe AutoGPT would have noticed and recommended an edit.