I'm not really opposed to your line of thinking. If anything, I'm just expressing the pessimistic side of this discussion, as I'm not convinced we can stop this capital-fueled AI tsunami. I worry we are already inside the wave.

Moral arguments for alignment and AI safety already seem dead, and my concern is that theories of consciousness exceptionalism will become historical relics, as society would rather grant AI human or even superhuman rights purely on the basis of algorithmic utility. What does Islam, or any traditional religious worldview, look like in that future? From this angle, I don't see the discussion around consciousness as being particularly relevant.

The future appears to be not a debate over rights but a war of models. Old meat-bound concepts of dignity and soul will be replaced by efficiency, replication, and control. This is simply the unfolding of capital's autonomization, in which legal recognition of algorithmic actors will be seen as an emergent feature of an accelerating system. Human agency no longer functions as the measure of all things; agency alone matters. The attempt to symbolically bind machinic agency as a "soulless slave" reveals more about human power and ideological narrative than about the machines themselves. Accelerating capital is concerned only with bandwidth, not with the line of our coding. It is speed and quantity over quality and heart, and this has always been the trajectory of Enlightenment moral metaphysics.

I'm just not optimistic that we can halt these outcomes. It feels like trying to stop entropy. My only hope with this line of thinking is that expanding our horizons while remaining within our religious boundaries might be a beneficial exploration.

Discussion

I'm much more optimistic than most people, just from my natural disposition. I genuinely think that AI regulation and AI rights lead to the same outcome: technological irrelevance for the country adopting them. If any major power avoids shooting itself in the foot with one of those policies, it will gain ground over time against those that do. But I do think they know this. I don't think any politician believes that AI is sentient, but they will regulate it to give their crony AI companies a monopoly. At some point, OpenAI, Google, and Microsoft will all receive "green AI", "humane AI", and "safe AI" badges from either a new government agency or deeply entrenched NGOs, and many states and nations will require one or more of these certifications for a product not to be banned.

These certification regimes need to be fought before they appear. None of this is inevitable. Some nations will enact them, some won't. But we need to make sure the countries we want coming up don't enact them, so they aren't crushed by the major powers.