Yeah, I'm far more on the concerned side of the equation than most people I know. I hope I'm wrong. And I'd rather be wrong about it being dangerous than wrong about it being safe.
Discussion
The mere fact that something “intelligent” knows we are concerned and may be making backup plans to destroy it just in case should give it reason enough to actually destroy us. Maybe it figures just waiting around is the best way to do it - letting us destroy ourselves. That might be the best-case scenario for us.
I’m super concerned and I don’t see anyone nearly as concerned, and that’s scary.
It's because many people are using an anthropomorphic standard to try to measure the danger. This is silly and wrong. Whether a large language model is conscious or self-aware is completely orthogonal to the potential risks it can pose.
I would have put money on you falling on that side of the equation.
Everyone on that side of the fence has a statist streak and a belief that authority can control things for a supposed greater good.
Ad hominem = Mute.