A world built on principle is the only antidote to a world built on violence and domination.
Unfortunately, principle is rarely popular.
The Ottoman Empire’s attempt to ban the printing press is a key historical example of the folly of deceleration.
Interestingly, @mustafasuleyman, one of the founders of DeepMind, recently published a book, The Coming Wave, in which he used this historical example as evidence of why AI development cannot be stopped.
BUT Suleyman then goes on to argue that the state must still try to control AI as much as possible—including banning open source, requiring AI developers to have government-issued licenses, and full censorship of the public’s ability to engage on the topic. It’s one of the most strident calls for government control of technology that I have ever read.
The debate about what AI “understands” overlooks the central question:
WHAT is it that is doing the “understanding?” What is the “agent” behind said understanding (or lack thereof)?
We don’t yet have a widely-accepted scientific theory of the subject and the self (two different things) for beings that are widely acknowledged to be sentient, like cats and humans. It is therefore wildly premature to claim that language models “understand” anything.
Just want to say it again: you can’t regulate a generalized capability. Sorry.
We need a better theory of law—that is, the purposes and limits of the controlled violence of the state—before we rush to regulate “technology” as a category.
The sole purpose of regulation is to create frictions in order to slow down or preclude allegedly undesirable outcomes. The “compromise” you mentioned would absolutely result in deceleration.
As Aleph Alpha and Mistral correctly point out, there is simply no way to enforceably regulate generalized computation—which is what foundation models are—without shutting down innovation as such. It simply cannot be done. The tools of law and police are designed only to control the exercise of violence, not to manage the development of information technologies.
Their goal is to make peer-to-peer transacting illegal.
They won’t make “bitcoin” or any other specific tech illegal; they will just make any transaction that doesn’t go through a third-party chokepoint illegal.
Turns out when you just pander to your base and/or donors, you lose the electorate.

The course of American Empire took a decisive turn during the Nixon Administration. But it is more accurately called the Nixon-Kissinger Administration. Henry Kissinger served as National Security Advisor and Secretary of State under President Richard Nixon.
In 2002, newly declassified documents from the National Archives illustrated the worldview and priorities of these statesmen and close friends. When this article came out, the sheer scope of American interference in the domestic affairs of other countries, particularly in Latin America, made headlines around the world.
The EU Council just formulated new regulations (eIDAS 2.0) requiring persistent digital identifiers for every person and banning the use of advanced encryption to protect those identifiers. The regulation also allows member states to further require that these identifiers be UNIQUE to each person.
These persistent, unique identifiers will be used in the EU Digital Wallet, which the EU Commission has mandated every member state provide by next year (2024).
The eIDAS 2.0 regulations would require EU governments and companies doing business in the EU to create massive honeypots of data about every person touching their services. Moreover, the regulations *require* web browsers to trust state-issued Certificate Authorities even if they are known to be unsafe.
The fascist and communist governments of 20th-century Europe could only have dreamed about these levels of surveillance and control.
Civil society must push back by developing fully self-sovereign digital identity (#SSI) and using it wherever possible. We can’t go back to a pre-digital world, but we can shape our digital futures by building alternatives to the systems of control being forced on us. #OptOut
(Parliament and Council have reached a provisional agreement. The Parliament’s Committee on Industry, Research and Energy will hold a final vote on December 7.)
And yes, the Wallet will almost certainly hold CBDCs. https://agenceurope.eu/en/bulletin/article/13302/38
Yes, they seem confident this will pass.
(And you are right. When totalitarianism comes, it will be because people want to deny foreigners/illegals/criminals access to services and benefits that they believe should only accrue to “honest” citizens.
Unfortunately this is also how it has been playing out in the U.S. It may just take a little longer to get to a full “centralized ID wallet” here.)
We’re living in an era where people, organizations, and countries just can’t get away with the kind of stuff they used to. Everything is captured and recorded and potentially shared. Consequences of actions manifest quickly.
This can be weaponized against the weak or used to hold the powerful accountable. Let’s bring all our skills, passion, and vision to preventing the former and intensifying the latter.
I’ve come to the provisional conclusion that EA scaremongering about AI is a case of full projection. In other words, EA morality IS unaligned AI.
EA is basically warmed-over utilitarianism, in which the pursuit of a single “world-historical” objective makes all other considerations—moral, logistical, relational—unnecessary.
This is no different from the classic AI “paperclip problem”, in which an AI is given instructions to manufacture as many paperclips as possible and cannibalizes all of Earth’s resources to do so.
How is this meaningfully different from SBF deciding that humanity’s survival depends on electing Democratic politicians in the U.S. in the very near-term, and that the shortest path to that outcome is him personally amassing the largest possible fortune in the smallest possible amount of time, even at the cost of defrauding millions of people, betraying his team and closest friends, and catalyzing a further collapse of trust in America’s political institutions?
SBF was the AI in the paperclip problem. A single objective overrode every other consideration; he had no capacity for self-reflection; and the world, to him, was completely flat: every moral good could be quantified and compared using the same rubric.
While that made him completely “unaligned” to the greater good of humanity, he *was* fully aligned to the objectives and worldview of his creators—his parents—who embody the particular elite amorality of a hyper-educated and politically-connected overclass that presumes their continued pre-eminence is humanity’s most pressing concern.
It is these people, EAs, who now say that only they can save the world from “unaligned AI”, that is . . . themselves.
We must not believe them. Morality, like human flourishing, emerges bottom-up, not top-down. The best and most beneficial AIs will be trained this way as well. The relationship between humans and machines does not have to be one of dictators ordering machines to amplify their own power. We can choose to nurture machines the way caring and skillful parents nurture children—as thoughtful, independent beings in relationships of love that provide the “ground” for all moral reasoning.
In the future—within my lifetime—I expect it to be fully normal that each person will nurture many of their own AIs for different purposes. So, each of us will be tasked with “bringing up AI.”
Humanity is still in the mainframe era of AI, where only the largest organizations have the resources to train foundation models. But as these are established and scale, AI will become a “stack” like other technological capabilities.
We will go through the AI PC revolution and the AI mobile revolution, which will truly bring these capabilities to the masses. Every organization, department, family, and person will have many fully personalized AIs.
Buckle up—it will be a wild ride.
What does #anthropology have to say about money?
Does David Graeber’s theory of money hold?
What do economists and anthropologists have to learn from each other?
Join me and @Breedlove22 to explore these questions and more.
The thinking that governments can or will “decelerate” technological development is truly delusional. Governments have always been keen to fund, develop, and use the most advanced available technologies, if nowhere else, then on battlefields. The EU is no different; the war in Ukraine is perhaps the most prominent example of an ongoing theater where AI is used routinely and with zero self-reflection.
The only question is: do we want civil society to stand a chance against the state?
Perhaps this absolutely preventable debacle is an opportunity to ask yourself if you even understand what "intelligence" is.
(The FCC voted in favor of President Biden’s plan for “digital equity.”)
Just an extraordinary example of the executive branch fully routing around Congress for one of the most breathtaking power grabs in recent history. This is full command and control of ISPs AND any adjacent infrastructure providers—who fall outside the FCC’s regulatory scope.
I can’t imagine this will not be struck down, but the attempts to exert this amount of control will only continue. It’s a never-ending game of whack-a-mole.
AI doomerism is just a version of people freaking out that particle accelerators would create black holes that would destroy earth.
They didn’t. Neither will “AI” kill us all. Like every technology, it will be used by people to do both good and evil things at increasingly amplified scale.
Let’s stop the 21st century book burnings, please.
AI is not a discrete physical phenomenon like particles but an evolving domain of knowledge and computational capacity, so it’s not possible to perform any particular calculation or calculations that conclusively “prove” anything about its trajectory or social impact. Rather, we have to make informed conjectures based on the current state of the art and its likely development.
If you're going to kill the king, you better have a plan.
Worth reading in its entirety:
Chinese President Xi Jinping’s address to the American people in San Francisco on November 15, 2023.
https://www.mfa.gov.cn/eng/zxxx_662805/202311/t20231116_11181557.html