Great podcast, thanks Danny. One aspect of AI risk I still don't understand: if AI becomes extremely intelligent across a variety of domains (AI --> AGI --> superintelligence), why does it follow that it would ever come up with its own goals that weren't given to it by humans? Why would an AI that's good at every board game (more general) be more likely to come up with its own goals than an AI that's just good at chess (more narrow)? Could be something I'm missing, but I don't see how becoming more generalized makes an AI more likely to have its own agency.
https://fountain.fm/episode/Me1OOAdBf35vEp8YBA5a
nostr:nevent1qvzqqqpxquqzqlhk0sv22jf0snvadj3n87senx0kzts6qvn7av0qv877r2qp9xpx8tf29s