I worry that the era of constant, rolling, large-scale grifts is upon us, and that it’s a function of complexity outstripping attention at a moment when our attention is increasingly at a premium. With a bunch of researchers around the world — irresponsibly, in my opinion — racing toward artificial general intelligence without the faintest clue of how to solve the alignment problem, I’m worried this problem gets parabolically worse.
I want to be more optimistic on this stuff. But the more I try to work it out in my head, the more worried I become that we’re up against a serious set of collective action problems that decayed social trust and geopolitical fractures have made impossible to address.
If I have one “doomerish” set of views, these are the closest to them.