Jim Babcock's Mainline Doom Scenario: Human-Level AI Can't Control Its Successor

Published on May 9, 2025 5:20 AM GMT

Eliezer's AI doom arguments have had me convinced since the ancient days of 2007, back when AGI felt like it was many decades away and we didn't have an intelligence scaling law (except to the Kurzweilians, who considered Moore's Law to be that, and were, in retrospect, arguably correct).

Back then, if you'd asked me to play out a scenario where AI passes a reasonable interpretation of the Turing test, I'd have said there'd probably be less than a year to recursive-self-improvement FOOM, and then game over for human values and human future-steering control. But I'd have been wrong.

Now that reality has let us survive a few years into the "useful highly-general Turing-Test-passing AI" era, I want to be clear and explicit about how I've updated my mainline AI doom scenario.

So I interviewed Jim Babcock (https://www.lesswrong.com/users/jimrandomh?mention=user)…

Full post: https://www.lesswrong.com/posts/nZtN9PW4qBeNKehfs/jim-babcock-s-mainline-doom-scenario-human-level-ai-can-t
