Militaries are, and will continue to be, the earliest adopters of AI. That includes nation-state militaries, paramilitaries, private military corporations, private security firms, and police (which are increasingly militarized all over the world).

AI military technologies are neither more nor less moral than human actors would be. What they do is *accelerate* the pace of destruction, as this quote from a former Gospel operative illustrates (article linked below):

“We prepare the targets automatically and work according to a checklist,” a source who previously worked in the target division told +972/Local Call. “It really is like a factory. We work quickly and there is no time to delve deep into the target. The view is that we are judged according to how many targets we manage to generate.”

Whatever the political leadership of militaries using this technology says publicly about its “greater precision” and its “minimization of civilian casualties” is a palatable lie that provides political cover for the technology’s actual aims. Those aims are revealed by the incentives operatives on the ground work under: strike as many targets as possible in as little time as possible. If an operative doesn’t hit their implicit or explicit “death quota,” no doubt more “effective” operatives will be found.

The thing most likely to rein in the pace at which AI-enhanced weapons are used in warfare is a robust multilateral treaty. But we don’t live in that world. The leaders of governments around the world are not inclined to do the arduous and unpopular work of forging diplomatic coalitions with “enemy states” and constraining their own power against them. As a result, the pace of AI development and utilization in warfare will continue to hockey-stick.

This is the cost of poor leadership and a lack of political vision: vast destruction, coming soon to a conflict near you.
