Pentagon Study Scrutinizes The Future Of Autonomous Robot War
Kelsey D. Atherton
at 09:42 AM Aug 29 2016
[Image: Firing A Training Torpedo. Credit: Justin Wolpert, U.S. Navy]

Last summer, the Pentagon's Defense Science Board commissioned a study to examine a particular challenge facing the Department of Defense (DoD), with participants drawn from the consulting, defense, and tech industries, as well as the military and academia. In possibly the worst John Lennon cover ever made, participants were asked to “Imagine if… We could covertly deploy networks of smart mines and UUVs [Unmanned Underwater Vehicles] to blockade and deny the sea surface, differentiating between fishing vessels and fighting ships… …and not put U.S. Service personnel or high-value assets at risk.”

That scenario, and several others like it, sat at the core of the study on autonomy: what autonomous machines, computers, and systems mean for the Pentagon and the wars of the future. This matters a great deal, because what the Pentagon thinks of autonomy will shape the weapons it orders, the way it fights wars, and, likely, the way the laws of war are written.

Here's how the report defines autonomy:

To be autonomous, a system must have the capability to independently compose and select among different courses of action to accomplish goals based on its knowledge and understanding of the world, itself, and the situation.

Okay, so what does that mean for a machine with guns attached? First, the report clarifies, expect to see a lot more autonomy in non-killing jobs than in ones with weapons. From the report:

The overwhelming majority of potential military applications for autonomy are non-lethal and offer the potential for improved efficiencies or entirely new capabilities. Skepticism about the employment of autonomy in military operations is almost wholly focused on the use of autonomous weapons systems with potential for lethality. For this reason, any new autonomous capability may meet with resistance unless DoD makes clear its policies and actions across the spectrum of applications.

There are many, many jobs in the military that aren't really about direct combat, from running communications to manning radar to simply driving the trucks that haul ammunition from where it's delivered to the planes that will carry it abroad, to the bases where troops will pick it up. So expect to see autonomous cars and radar systems as part of the military first.

There's only one specific deadly application of A.I. recommended in the report. The Defense Science Board suggests “U.S. Navy and DARPA should collaborate to conduct an experiment in which assets are deployed to create a minefield of autonomous lethal UUVs.”

Why? 

Mainly, to prove that America can deny a section of the sea to an enemy without risking American sailors. This week saw a series of confrontations between U.S. Navy patrol ships and Iranian vessels in the Persian Gulf. While those encounters have yet to turn deadly, one way to guarantee that any deadliness is one-sided is to prove that the Pentagon can put lethal underwater robots in an area, control them, and then trust that they will destroy only the specific ships they're supposed to.

The minefield is just one of 28 recommendations in the report on autonomy, and it's the only one that specifically addresses a robot making a deadly decision. Yet it sketches a good outline for how deadly autonomous machines will enter use: as a surprise, in a niche role, and described entirely as a tool for protecting the humans who deployed them. So, like most new weapons, then.

Read the full report, published in June, and check out the DefenseOne write-up from this week.
