Human Interruption Slows Down Military Robots in Simulations
A.I. can make decisions faster than humans, raising a myriad of ethical questions when applied to weapons systems
In August, an artificial intelligence system turned heads when it defeated a seasoned F-16 fighter pilot in five simulated dogfights run by the Defense Advanced Research Projects Agency (DARPA). More recently, DARPA and the United States Army have been studying simulated battles between units of a few hundred soldiers working with swarms of A.I.-driven drones and robots.
The program, called System-of-Systems Enhanced Small Unit, or SESU, found that the humans’ tendency to intervene in the robots’ decision-making process significantly slowed the unit down, enough to make it lose against companies with less human involvement. Researchers presented the results of the program at the Army Futures Command conference in October, Sydney J. Freedberg Jr. reports for Breaking Defense, on the condition that participants remain anonymous.
The military already uses unmanned weapons systems that a soldier can control from a distance. Now it is developing systems that allow a more hands-off approach, such as autonomous robotic drones that accompany manned fighter jets, Jon Harper writes for National Defense magazine. Critics of the new research tell David Hambling at New Scientist that the results may be used to justify allowing A.I. weapons to operate with little to no oversight, which raises safety and ethical concerns.
“This is a technology that is increasingly intelligent, ever-changing and increasingly autonomous, doing more and more on its own,” says Peter W. Singer, a strategist at the think tank New America and an expert on the use of robotics in warfare, to National Defense magazine.
“That means that we have two kinds of legal and ethical questions that we’ve really never wrestled with before. The first is machine permissibility. What is the tool allowed to do on its own? The second is machine accountability. Who takes responsibility … for what the tool does on its own?”
While the Pentagon grapples with these questions, research and development moves forward.
An Army Futures Command panelist for the SESU program tells Breaking Defense that the robots are designed to recognize their environment and self-organize to “deal with” whatever threats they identify. Human soldiers could take part in the robots’ decision-making process, such as by reviewing photos and video of targets the A.I. identified before the system could fire, but doing so slowed their response time in the simulated battle.
“[When] we gave the capabilities to the A.I. to control [virtual] swarms of robots and unmanned vehicles, what we found, as we ran the simulations, was that the humans constantly want to interrupt them,” says the SESU expert to Breaking Defense. “If we slow the A.I. to human speed…we’re going to lose.”
A.I. systems are also useful for their ability to come up with strategies that human adversaries won’t expect. Human interference in the decision-making process could dull this potential advantage, according to the military researchers.
“It’s very interesting to watch how the A.I. discovers, on its own,… some very tricky and interesting tactics,” says a senior Army scientist to Breaking Defense. “[Often you say], ‘oh whoa, that’s pretty smart, how did it figure out that one?’”
For those who oppose the use of autonomous weapons, like University of California, Berkeley computer scientist and A.I. expert Stuart Russell, the research looks like an attempt to justify the use of A.I. weapons with no human oversight.
“It points to the slippery slope whereby partial autonomy and human-on-the-loop and partial human oversight and so on will evaporate almost immediately under the pressure of war, and militaries will go straight to full autonomy if they can,” Russell tells New Scientist.
The U.S. military went down a similar slippery slope in the case of unrestricted submarine warfare. The U.S. opposed Germany’s use of the strategy during World War I, but after the attack on Pearl Harbor in 1941, the U.S. Navy began using unrestricted submarine warfare against Japan.
“We changed our mind,” Singer tells National Defense magazine. “Why? Because we were losing and we were pissed off. And so there might be certain limitations that we’ve placed upon ourselves [in regards to A.I.] that if you change the context, we might remove those limitations.”
Russell tells New Scientist that strict legal controls may still help maintain some ethical guidance in the development of A.I.-driven technologies. He suggests allowing full autonomy for only a few select, large-scale systems, while banning it in anti-personnel weapons.
DARPA is also developing “explainable A.I.,” systems that can lay out how they reached their conclusions. This may help researchers address persistent problems in A.I., such as algorithmic bias.
“We need to make sure that … we’re creating or establishing a responsible A.I. culture,” Alka Patel, head of A.I. ethics policy at the Department of Defense’s Joint Artificial Intelligence Center, tells National Defense magazine. “That’s not something that we’re all born with. We don’t have that AI ethics bug in our brain. That is something that we need to learn and start creating muscle memory around.”