Army Leaders Advance Effort to Improve AI Reliability for Future War
Army and its fellow services are preparing to fight robots
By Kris Osborn, President, Center for Military Modernization
The Pentagon may have no intention of deploying fully autonomous "lethal" terminator-type robots that kill without human intervention, but that does not mean the Army and its fellow services are not preparing to fight them. Indeed, enemy forces are likely, if not fully expected, to deploy thousands of autonomous armed robots programmed to attack without human intervention.
Army Futures Command is pursuing a cutting-edge dual trajectory: recognizing and applying the breakthrough merits of AI in areas such as surveillance, networking, automated defensive weapons, and data scaling and management, while at the same time anticipating a future warfare environment in which adversaries send fully autonomous attack robots to kill without regard for ethics.
“We are a value-based military and will remain so. Through experimentation we will determine what levels of autonomy are needed and are acceptable on the future battlefield. That will be incorporated into doctrine by our teammates over at TRADOC,” Mr. William Nelson, Deputy, Army Futures Command, told Warrior in an interview.
Can the US military, and the Army, prepare to destroy autonomous, terminator-type enemy robots? Anticipate and counter their movements? Stay ahead of their sensor-to-shooter time curve and targeting precision? And can it do all of this while maintaining its clear ethical and doctrinal stance that humans must be "in the loop" making decisions about the use of lethal force?
“AI to Destroy Drones, Missiles & Artillery”
As technology continues to break through and AI becomes more reliable, there may emerge what the Pentagon calls "out-of-the-loop" AI, wherein automated weapons linked to sensors and AI systems can employ force for purely defensive, non-lethal purposes.
The Army has some history with these kinds of challenges, and has for years been deliberate about ensuring its doctrine and rules of engagement account for ethical considerations. In 2009, for instance, the Army re-emphasized its "human-in-the-loop" doctrine by adding language requiring a "human in the loop" for any use of lethal force. This became necessary because, at the time, the Army had successfully demonstrated its Multifunction Utility/Logistics and Equipment (MULE) vehicle, a robotic platform empowered by what the Army called Autonomous Navigation Systems (ANS) and capable of autonomously tracking, targeting, and destroying enemy targets with Javelin anti-tank missiles. By that point, the ability for robots to kill without human intervention had essentially arrived, and the Army made specific efforts to ensure ethical and doctrinal clarity about keeping a "human in the loop."