I worry that the era of constant, rolling, large-scale grifts is upon us. It's a function of attention versus complexity, at a moment when our attention is increasingly at a premium. And with researchers around the world racing, irresponsibly in my opinion, toward artificial general intelligence without the faintest clue of how to solve the alignment problem, I worry this gets parabolically worse.

I want to be more optimistic about this stuff. But the more I try to work it out in my head, the more worried I become that we're up against a serious set of collective action problems, ones that decayed social trust and geopolitical fractures have made impossible to address.

If I have one “doomerish” set of views, these are the closest to them.

Discussion

The alignment problem is super doomy. As long as we have VCs with billions of dollars who believe it is God's work to live out Friedman's doctrine of shareholder value, and we have no ability to tell them "no," doom will not be far on the horizon.

💯