By Kris Osborn - Warrior Maven
(Washington, D.C.) The Pentagon is expressing concern about China's and Russia's use of artificial intelligence to control autonomous attack systems, potentially using new technology to remove humans from the kill-chain decision-making process.
There has been a longstanding concern among Pentagon leaders that, despite current U.S. ethical guidelines requiring that a human be "in the loop" on decisions about the use of lethal force, countries like Russia and China will not operate within similar constraints.
Speaking recently at the Defense Department's AI Symposium, Defense Secretary Mark Esper cited both Russia and China as countries now presenting a high level of threat to the U.S. through their use of AI and autonomous systems.
“Moscow has announced the development of AI-enabled autonomous systems across ground vehicles, aircraft, nuclear submarines and command and control. We expect them to deploy these capabilities in future combat zones,” Esper said, according to a Pentagon transcript.
Referring to Russia's integrated use of weapons systems such as drones, artillery and cyberattacks during the 2014 annexation of Crimea and subsequent fighting in eastern Ukraine, Esper explained that Russia used this kind of advanced networking to "inflict severe damage on Ukrainian forces."
Esper also cited Chinese AI-empowered weapons systems, such as long-range drones and autonomous ground vehicles, intended to "counter America's conventional power projection."
“Chinese weapons manufacturers are selling autonomous drones they claim can conduct lethal targeted strikes,” he mentioned.
These developments, however, including Putin's well-known remark that "whoever controls AI will control the world," do not necessarily mean China and Russia are measurably ahead of the United States in the area of AI.
Interestingly, going as far back as 2009, autonomous navigation systems have increasingly enabled robotic weapons systems to detect, track and destroy targets without human intervention. For instance, the U.S. Army's now-canceled Multi-Utility Logistics Equipment Vehicle was engineered to use autonomous navigation and Javelin anti-tank missiles, meaning the possibility of a robot independently finding and attacking enemy targets was nearly here roughly a decade ago. At the time, the Army added doctrinal clarification reinforcing the requirement that humans ultimately make all decisions regarding the use of lethal force, and the Pentagon has upheld that doctrine in the years since.
AI-enabled autonomy is now in the early phases of being used for defensive purposes against non-human targets in areas such as drone defense and countermeasures for incoming enemy rockets, artillery and mortars. This kind of application can quickly improve force protection technologies by enabling much faster decisions regarding which threats to stop and which interceptors or countermeasures to use.
Kris Osborn is Defense Editor for the National Interest. Osborn previously served at the Pentagon as a Highly Qualified Expert with the Office of the Assistant Secretary of the Army—Acquisition, Logistics & Technology. Osborn has also worked as an anchor and on-air military specialist at national TV networks. He has appeared as a guest military expert on Fox News, MSNBC, The Military Channel, and The History Channel. He also has a master's degree in Comparative Literature from Columbia University.