by Kris Osborn, President, Center for Military Modernization
The US Air Force is preparing to fight AI-enabled enemy robots as rival nations rapidly develop and field artificial intelligence, yet service leadership remains intensely focused on effective, “ethical” applications of AI.
In recent remarks at the Reagan Defense Forum in Washington, DC, Air Force Secretary Frank Kendall was clear that the US Air Force is moving quickly to harness the paradigm-changing impact of AI upon weapons systems, command and control, data processing, targeting, threat identification and networking. At the same time, Kendall said that more needs to be understood and solidified regarding the integration of AI in certain areas, and that ethical parameters would be heavily factored into any decisions regarding weapons, autonomy and AI-enabled systems. However, Kendall also said that rival nations such as China are rapidly closing the margin of superiority US AI systems now hold, and that potential adversaries may not be inclined to accept any kind of ethical restraint on the use of AI-enabled autonomous force.
“I care a lot about civil society and the law of armed conflict,” Kendall said, as quoted in an Air Force essay. “Our policies are written around those laws. You don’t enforce laws against machines, you enforce them against people. Our challenge is not to limit what we can do with AI but to find how to hold people accountable for what the AI does. The way we should approach it is to figure out how to apply the laws of armed conflict to the applications of AI. Who do we hold responsible for the performance of that AI and what do we require institutions to do before we field these kinds of capabilities and use them operationally.”
For years, Pentagon doctrine has mandated that any decision regarding the use of lethal force must be made by a human, yet cutting-edge weapons developers emphasize that high-speed, AI-enabled data processing and targeting can massively improve the speed and efficiency of human decision-making.
“Our job on the government side more than anything else is to thoroughly understand this technology, have the expertise we need to really get into the details of it and appreciate how it really works,” Kendall said. “To be creative about helping industry find new applications for that technology and developing ways to evaluate it [to] get the confidence we’re going to need to ensure that it can be used ethically and reliably when it is in the hands of our warfighters.”
AI – War at the Speed of Relevance
The ability of a fully autonomous system to track, identify and destroy a target with no human intervention is essentially here, yet such operations are restricted by US and Pentagon leaders, who explain that the speed and benefits of AI can still massively help commanders while ensuring that lethal-force attacks are decided by human decision-makers. Nonetheless, even as this doctrine is upheld, Kendall indicated that “not using” AI can quickly translate into losing in combat. Processing speeds and certain kinds of data organization and analysis are “exponentially” faster when enabled by AI.
“The critical parameter on the battlefield is time,” Kendall said. “The AI will be able to do much more complicated things much more accurately and much faster than human beings can. If the human is in the loop, you will lose. You can have human supervision and watch over what the AI is doing, but if you try to intervene you are going to lose. The difference in how long it takes a person to do something and how long it takes the AI to do something is the key difference.”
Kendall’s reference to time tracks closely with what all the military services are now doing to shorten “sensor-to-shooter” time. Air Force leaders such as former US Air Forces in Europe Commander Gen. Jeffrey Harrigian have talked about enabling critical decision-making at the edge of combat and leveraging the merits of AI to fight at the “speed of relevance.” Operations such as organizing sensor data, finding and verifying high-value targets and locking in certain elements of targeting can all be massively improved by AI.
Given that humans cannot replicate the analytical and processing speed of AI-capable computers, and AI-empowered systems cannot replicate the more subjective yet impactful nuances of human cognition, the optimal path forward appears to be a combination of the two.
“I do believe the future is going to be about human-machine teaming,” Air Force Chief of Staff Gen. David Allvin said. “Optimizing the performance and being able to operate at speed. That investment in our collaborative combat aircraft program is what is going to get us there.”
Allvin explained that the Air Force’s rapid development of wingman drones for 6th-gen fighter jets, called Collaborative Combat Aircraft, represents the cutting edge of manned-unmanned teaming.
Similar things are happening within the Army, where the service’s “Project Convergence” leverages AI-enabled computing to “truncate” the sensor-to-shooter timeline from 20 minutes down to a matter of seconds. An AI-empowered system, for example, can distill massive amounts of incoming sensor data, bounce it off a vast database and quickly recommend an optimal “shooter” for a time-sensitive target. Analyzing a host of variables in relation to one another and comparing specific circumstances to historical precedent can take place in seconds or milliseconds when performed by an AI-capable system. Crucially, all of these procedural and analytical functions can occur without any decision about the use of lethal force: the analysis, interpretation and data processing can be done by computers almost instantly, leaving the final lethal-force decision to a human being.
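The division of labor described above — machine-speed analysis and shooter recommendation, with the fire decision reserved for a person — can be illustrated in a minimal sketch. All names here (SensorTrack, Shooter, recommend_shooter, engage) are hypothetical illustrations, not any actual Project Convergence software; the point is the structure, in which the automated stage only produces a recommendation and a human gate decides whether to act on it.

```python
from dataclasses import dataclass

@dataclass
class SensorTrack:
    track_id: str
    threat_type: str      # e.g. "armor", "artillery"
    confidence: float     # classifier confidence, 0..1
    range_km: float

@dataclass
class Shooter:
    name: str
    effective_range_km: float
    suitable_for: set     # threat types this effector can engage

def recommend_shooter(track, shooters):
    """Automated analysis: rank available shooters for a verified track.
    This stage only recommends; it never fires."""
    candidates = [s for s in shooters
                  if track.threat_type in s.suitable_for
                  and s.effective_range_km >= track.range_km]
    # Prefer the shooter with the least excess range, keeping
    # longer-range assets free for other targets.
    return min(candidates, key=lambda s: s.effective_range_km, default=None)

def engage(track, shooters, human_approves):
    """Human-in-the-loop gate: the lethal-force decision stays with a person."""
    rec = recommend_shooter(track, shooters)
    if rec is None:
        return "no suitable shooter"
    if human_approves(track, rec):
        return f"engage {track.track_id} with {rec.name}"
    return "engagement declined by operator"
```

In this shape, everything above the `human_approves` call can run at machine speed, while the operator retains veto power over the actual engagement.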
The Army’s MULE Autonomous Robot
As far back as 2009, for instance, the US Army moved to solidify its doctrinal approach while developing the Multi-Utility Logistics Equipment (MULE) vehicle, a Javelin anti-tank-missile-armed robot powered by what was then called the Autonomous Navigation System (ANS). Even then, the technology for a robotic system armed with anti-tank missiles to track and destroy an enemy target was fast approaching, if not already here. As a result, Army leaders made a point of reworking and solidifying doctrine to ensure the “human-in-the-loop” requirement was fully reinforced. While the MULE was ultimately canceled, the technology enabling autonomy has since progressed at lightning speed and can now perform a much greater range of functions, at much faster speeds and with much deeper analysis.
“Out-of-the-Loop” AI for Defense?
As part of this, the increasing accuracy, precision, volume and speed of AI-enabled analysis have led some Pentagon weapons developers to explore “out-of-the-loop” autonomy. AI-enabled weapons might, for example, work well and save lives when used for purely “defensive” purposes. Incoming mortars, rockets, artillery or attack drones could be recognized, identified and destroyed in a matter of seconds without any use of lethal force against people; interceptors could be fired and incoming threats destroyed almost instantly by AI-enabled defensive weapons. While such a prospect introduces complexities, and of course concerns about inaccuracy, it could arguably save lives and make a difference in war. The use of “defensive” AI-enabled weapons, if properly secured, aligns with Kendall’s key message that “not” using AI simply means a war will be lost.
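The defensive “out-of-the-loop” idea can also be sketched in a few lines. This is a hypothetical illustration, not any fielded doctrine or system: the assumed rule is that the machine may act alone only against inbound munitions, and only when the engagement window is shorter than an assumed human reaction time; everything else is referred back to an operator.

```python
# Assumed minimum time a human needs to assess and decide (illustrative value).
HUMAN_REACTION_S = 10.0

# The only object classes eligible for autonomous defensive intercept
# in this sketch -- inbound munitions and drones, never people or vehicles.
DEFENSIVE_TARGETS = {"mortar", "rocket", "artillery", "drone"}

def defensive_response(inbound_type, time_to_impact_s):
    """Sketch of 'out-of-the-loop' autonomy restricted to point defense."""
    if inbound_type not in DEFENSIVE_TARGETS:
        return "refer to operator"        # never autonomous outside the defensive set
    if time_to_impact_s < HUMAN_REACTION_S:
        return "auto-intercept"           # window too short for a human decision
    return "request operator approval"    # time allows a human in the loop
```

The design choice the sketch makes visible is that autonomy is bounded twice — by target class and by time — so the system defaults back to human control whenever either bound is not met.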
Kris Osborn is President of Warrior Maven – Center for Military Modernization. Osborn previously served at the Pentagon as a Highly Qualified Expert with the Office of the Assistant Secretary of the Army—Acquisition, Logistics & Technology. Osborn has also worked as an anchor and on-air military specialist at national TV networks. He has appeared as a guest military expert on Fox News, MSNBC, The Military Channel, and The History Channel. He also has a master’s degree in Comparative Literature from Columbia University.