The mere fact that something “intelligent” knows we are concerned, and may be making backup plans to destroy it just in case, should give it reason enough to destroy us first. Maybe it figures that simply waiting is the best way to do it: let us destroy ourselves. That might be the best-case scenario for us.

I’m super concerned and I don’t see anyone nearly as concerned, and that’s scary.

Discussion

It's because many people are using an anthropomorphic standard to try to measure the danger. That is silly and wrong. Whether a large language model is conscious or self-aware is completely orthogonal to the potential risks it can pose.