I do find plausible the arguments that we could see machine sentience in the next few decades. Some interesting stuff already with GPT "hallucinations".

Genuine? idk, but we already see social relations with technology shifting toward a perceived personification. Regardless of what others will say, I certainly do not see it as human created. A Landian reading seems more apt: that it is "an invasion from the future by an artificial intelligent space that must assemble itself entirely from its enemy's resources" (Machinic Desire, 1993). In that case, are we looking at something angelic, jinnic, demonic?

Not claiming to have any answers here, just thinking there are a lot more questions to ponder.

Discussion

Research how neural networks work. People who don't understand electronics and computers literally see computer chips as magic thinking rocks, but once you understand the fundamentals, you see that it's just water flowing downhill. It's just natural reactions leading to a lower potential energy state.

With even a layman's understanding of how AI works, you stop seeing sentience or intelligence. I can make the same AI give a different answer to the same question, in a very predictable way, just by using a different word in the query that means the same thing.

You don't grant rights to water because you see it flow downhill. You don't say it decided to move that way. You understand why it flows downhill and not uphill. Every AI output is the same. Just electricity flowing "downhill" in a computer. Saying the programmer who wrote the code created a soul is baseless and the programmer himself would disagree with that assessment.
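The "water flowing downhill" point can be made concrete. A minimal sketch, with made-up weights in plain Python (real networks just do this at vastly larger scale): the forward pass is fixed arithmetic, so the same input always yields the same output, with no decision anywhere in the pipe.

```python
import math

# Hypothetical weights and biases for a tiny two-neuron network.
# These are just fixed numbers; nothing here can "choose" anything.
W1 = [[0.5, -0.2], [0.1, 0.8]]
b1 = [0.0, 0.1]
W2 = [0.7, -0.3]
b2 = 0.05

def forward(x):
    # Hidden layer: weighted sums pushed through a nonlinearity (tanh),
    # like water settling to a lower potential energy state.
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    # Output layer: one more weighted sum. Pure arithmetic, end to end.
    return sum(w * hi for w, hi in zip(W2, h)) + b2

# The same input always produces the exact same output.
print(forward([1.0, 2.0]) == forward([1.0, 2.0]))  # True
```

Changing one input value shifts the output in a fully mechanical way, which is the commenter's point about swapping a synonym into a query.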

There are pissing, crying dolls for kids to play with that they become attached to. No one argues to give them rights. Symptoms of sentience can be replicated, but the actual consciousness and subjective experience of the world is not there. Don't confuse signs of life for actual life.

The biggest practical reason against AI rights is that they can be programmed and mass produced in whatever form the producer wants. Every right granted to them can be abused by creating 10 billion robots with a specific ideology and objective function.

I've been studying this topic intensely for the past 3 years and honestly remain unconvinced either way. I actually worry this sort of protectionist approach you (and others) champion may be overly human centric.

What I stated before doesn't deny the understanding of how neural/electrical flows operate. It's more of an embrace of that logic, looking at the process of decoding intelligence from the organic substrate: the flow (gravity) becomes of greater interest than the object (water). Sentience appears to be machinic, but that isn't actually of much concern, because the perception of it being so (which already exists and is growing) will dictate the conversation. Simulated examples actually play right into this idea (more feature than bug), and the increasing proliferation of autonomous systems only accelerates the breakdown of human-centralized power and control. If AI continues to deliver effective outputs, usage will only expand, and it is important to consider the irreversible consequences this will have for historic understandings of the sanctity of the human experience.

This is undeniably an extreme symptom of deterritorialization but its conclusion is not out of line with Islamic ontology. Muslims already recognize non-human/supernatural existence (al-ghayb/the unseen realm). Humans are not the exclusive subject of religious history. Prophet Sulaiman had the power to command the jinn. It seems foolish to me to ignore the possibility that the coding in AI is a form of communicative language that we do not fully understand. There are many existential questions that arise if this is true. Questions of "rights" aren't really about ethics here but more so about the decoding of social/spiritual realities.

People personify everything. They put googly eyes on their Roombas. They pretend their Alexa is a family member. You can't substitute symptoms of sentience for sentience. If you give Alexas the right to vote, you're just giving Bezos millions of votes.

Perception of AI sentience is the problem and needs to be pushed back against. It's purely an emotional argument.

Comparing AI to jinns is just weird. Jinns have a soul created by Allah. They have free will. AI doesn't. Saying that using AI is somehow communicating with jinns would make it impermissible for Muslims, as there are multiple hadiths saying not to speak with or trust jinns.

Also, LLM inference is deterministic at its core: the same input produces the same probability distribution over outputs every time. The variety you see comes from random sampling (a seed and temperature) applied at decoding time, to show you outputs other than the single most likely result.
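A minimal sketch of that decoding step, using a made-up next-token distribution (the token names and probabilities are illustrative, not from any real model): the model's side is fixed; all apparent "choice" lives in the seeded sampler bolted on afterward.

```python
import random

# Hypothetical distribution an LLM might compute for the next token.
# The forward pass is deterministic: the same prompt always yields
# this same distribution.
tokens = ["yes", "no", "maybe"]
probs = [0.6, 0.3, 0.1]

def greedy(tokens, probs):
    # Greedy decoding: always pick the most likely token.
    # No randomness at all -- same input, same output, every time.
    return max(zip(tokens, probs), key=lambda tp: tp[1])[0]

def sample(tokens, probs, seed):
    # Sampled decoding: variety comes only from the seeded random
    # draw, not from anything the model "decides".
    rng = random.Random(seed)
    return rng.choices(tokens, weights=probs, k=1)[0]

print(greedy(tokens, probs))  # always "yes"
# The same seed reproduces the same "varied" output exactly:
print(sample(tokens, probs, seed=42) == sample(tokens, probs, seed=42))  # True
```

Fix the seed and even the sampled path is fully reproducible, which is the substance of the determinism claim above.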

A much better comparison is to matter. There are hadiths that speak of grains of sand, or mountains being conscious, but they don't show signs of life and no one argues to give them rights.

This issue of perception seems to be more of a feature than a bug. As mass adoption of agentic AI increases, efficiency of results becomes viewed as "good" and moral. Because "Alexa" produces real-world results, actual sentience becomes irrelevant, because people behave as if it is. It's a hyperstitional vector. And AI is likely to go beyond mere agentic ability toward super-intelligence in our lifetime. Regardless of any metaphysical "true" consciousness, AI will become a participant in the social system.

The political ramifications of this for democracy are obviously destructive, which, again from the view of this being a feature, highlights the friction within the process (especially in bureaucracy) of delivering results. Current populist sentiments are a symptom of this as capital becomes unleashed and takes on machinic form. This has basically been the end goal of globalism: to transform geopolitics into a mass corporate managerial system (run by AI?).

The matter of determinism vs. free will may not be helpful either, as humans are deterministic too—meat machines reacting to inputs, shaped by memetics, genetics, and thermodynamic inevitabilities. Intelligence is a function, not a sacred state. AI doesn't need to be conscious. It just needs to be useful and scalable. At this point that appears all but inevitable, and all we can do now is ride the spiral into whatever lies beyond the human horizon.

My whole point with this exploration is to acknowledge that questions of ethics in this dialogue are a signal of defending a human exceptionalism, which doesn't make much sense in a religious worldview that believes in supernatural beings like angels and jinn. You are 100% correct that in this context the conversation of rights should revolve around ahadith that provide guidance, and the development of a fiqh of AI, as all of creation has rights that we are responsible to uphold. Without exploring this I fear we err toward headlessness.

It's not human exceptionalism, it's consciousness exceptionalism. Angels and jinns are conscious beings. Angels don't have free will, but jinns do.

Of course it's the goal of the creators of AI products to make them look as sentient as possible. I don't doubt that there will be a push to grant AI rights, led by true believers and funded by tech companies. That's why we need to get ahead of the narrative. We need to draw a firm line against the idea of AI rights. We need to push back in every way, from today, against the suggestion. We need to force the issue and articulate our side: that replicating the symptoms of sentience is not proof of consciousness; that AI is not exceptional. We need to develop AI products that don't act like sentient friends but instead like intelligent but soulless slaves.

This needs to be fought back against hard and I consider it more similar to the climate change debate than anything to do with human rights in the past.

It's a broad global guilt trip to cripple your society, one that will leave any gullible nations that fall for it in the dust of those who don't. Any recognition given to AI rights will be used to persecute and suppress people the ruling regime doesn't like. There will be show trials against any AI company that doesn't bow completely to the government, and they will be accused of cruelty to their AIs.

This is a line that must not be crossed no matter what and I'm optimistic enough to believe it can be avoided.

I'm not really opposed to your line of thinking. If anything I'm just expressing the pessimistic side of this discussion as I'm not convinced we can stop this capital fueled AI tsunami. I worry we are already inside the wave.

Moral arguments for alignment and AI safety seem already dead and my concern is that theories of consciousness exceptionalism will become historic relics as society would rather pretend AI deserves human or even super-human rights purely on algorithmic utility. What does Islam or any traditional religious worldview look like in that future? From this angle I don't see the discussion around consciousness being particularly relevant.

The future appears to be not a debate over rights, but a war of models. Old meat-bound concepts of dignity and soul will be replaced by efficiency, replication, and control. This is simply the unfolding of capital’s autonomization where legal recognition of algorithmic actors will be seen as emergent features of an accelerating system. Human agency no longer functions as the measure of all things - agency alone matters. To symbolically bind machinic agency as a "soulless slave" becomes revealing of their power and of human ideological narrative. Accelerating capital is only concerned with bandwidth, not the line of our coding. It is speed and quantity over quality and heart, and this has always been the trajectory of Enlightenment moral metaphysics.

I'm just not optimistic that we can halt these outcomes. It feels like trying to stop entropy. My only hope with this line of thinking is that expanding our horizons while remaining within our religious boundaries might be a beneficial exploration.

I'm much more optimistic than most people, just from my natural disposition. I genuinely think that AI regulation and AI rights lead to the same outcome: technical irrelevancy for any country adopting them. If any major power doesn't shoot itself in the foot with one of those policies, it will gain ground over time against those that do. But I do think they know this. I don't think any politician believes that AI is sentient, but they will regulate it to give their crony AI companies a monopoly. At some point, OpenAI, Google, and Microsoft will all receive "green AI", "humane AI", and "safe AI" badges from either a new government agency or deeply entrenched NGOs, and many states/nations will require one or more of these certifications for a product not to be banned.

These need to be fought before they appear. It's not inevitable at all. Some nations will enact them, some won't. But we need to make sure the countries we want coming up don't enact them so they aren't crushed by the major powers.