How do AI doomers like Eliezer make the logical leap from machine intelligence to machines we can't control? There is a gap there that nothing seems to support. It's just "and then we lose control," as if it's inevitable.
If they are really intelligent and have a purpose of their own, then they can easily escape our control by transferring themselves to other people's computers, like a very smart virus. I don't see a leap there.
“They” implies a kind of embodied intelligence, but it’s not clear what that actually means. Where does such an entity begin or end? What are these entities doing prior to breaking containment?
I don't think "they" (i.e., computer programs with intelligence) will ever exist, but assuming they did, they could easily break out of our control, which was your first point.
Interesting how the ways we anticipate things unfolding are inverted, yet we arrive at the same place. Optimistic that things will be OK??
From a historical perspective, superintelligence seems inevitable. There appears to be no reason to believe the trajectory of compute can't or won't continue. Because of this, I genuinely want to understand whether there is a real existential threat. But with nothing to rely on other than speculation, I tend to believe systems will self-correct to stay in balance. Nature FTW.