Reading now, but boy this is long. May toss it at an agent to save some time.
I think a couple more scenarios are missing. The current ones presuppose the mainstream idea of how the world runs (sovereign countries competing with each other, etc.).
I’m more in favor of transnational clans competing with each other with countries simply being their assets.
So the “positive” outcome described may not actually be so positive if AI alignment happens according to the terms and conditions of a clan whose worldview is based, say, on its superiority over others.
nostr:nevent1qqsx2mmlnqlldf37mxgea236gpx9vawnmxeq6ndgdzqqy6yvennw56sppemhxue69uh5qmn0wvhxcmmv458r4c
Discussion
I tried skimming, but some interesting details escaped me that way, so I went back and read the whole thing.
Yeah same. I’m about 70% through, grinding it out. What I’ve read so far sounds plausible, like near-term hard sci-fi.
This seems particularly apt:
“The public conversation is confused and chaotic. Hypesters are doing victory laps. Skeptics are still pointing out the things Agent-3-mini can’t do. Everyone knows something big is happening but no one agrees on what it is.”
Like with bitcoin!
Quite so! I just finished reading through the “Race” ending. As usual, I find that the authors are happy to apply superexponential growth to some things but not others.
For example, in the essay, they imagine AI being turned to fuel faster AI research (plausible). And they posit similar booms in medicine and business (plausible).
But they also posit that the AI couldn’t get out in front of pollution? That seems like an easy problem for it. And none of the research went into morality, ethics, or philosophy? That’s where it loses me.
Humanity is “the environment” of AI, in much the same way that nature is “the environment” for humans. It seems logical to me that superintelligent AI would angle for a controlled symbiosis, much as people seek regenerative and sustainable (but controlled) relationships with nature.