Air Force Secretary Kendall Says “Not” Using AI Will “Lose Wars”
The US Air Force is preparing to fight AI-enabled enemy robots due to the rapid growth and implementation of artificial intelligence by US rival nations
by Kris Osborn, President, Center for Military Modernization
The US Air Force is preparing to fight AI-enabled enemy robots due to the rapid growth and implementation of artificial intelligence by US rival nations, yet service leadership remains intensely focused on leveraging effective, “ethical” applications of AI.
In recent remarks at the Reagan Defense Forum in Washington, D.C., Air Force Secretary Frank Kendall was clear that the US Air Force is moving quickly to harness AI's paradigm-changing impact on weapons systems, command and control, data processing, targeting, threat identification and networking. At the same time, Kendall stressed that more needs to be understood and solidified regarding the integration of AI in certain areas, and that ethical parameters will be heavily factored into any decisions regarding weapons, autonomy and AI-enabled systems. However, Kendall also said that rival nations such as China are rapidly closing the margin of superiority US AI systems now hold, and that potential adversaries may not be inclined to accept any kind of ethical restraint on the use of AI-enabled autonomous force.
“I care a lot about civil society and the law of armed conflict,” Kendall said, as quoted in an Air Force essay. “Our policies are written around those laws. You don’t enforce laws against machines, you enforce them against people. Our challenge is not to limit what we can do with AI but to find how to hold people accountable for what the AI does. The way we should approach it is to figure out how to apply the laws of armed conflict to the applications of AI. Who do we hold responsible for the performance of that AI and what do we require institutions to do before we field these kinds of capabilities and use them operationally.”
For years, Pentagon doctrine has mandated that any decision regarding the use of lethal force be made by a human, yet cutting-edge weapons developers emphasize that high-speed, AI-enabled data processing and targeting can massively improve the speed and efficiency of human decision-making.
“Our job on the government side more than anything else is to thoroughly understand this technology, have the expertise we need to really get into the details of it and appreciate how it really works,” Kendall said. “To be creative about helping industry find new applications for that technology and developing ways to evaluate it [to] get the confidence we’re going to need to ensure that it can be used ethically and reliably when it is in the hands of our warfighters.”
AI – War at the Speed of Relevance
The ability of a fully autonomous system to track, identify and destroy a target with no human intervention is essentially here, yet such operations are restricted by US and Pentagon leaders, who explain that the speed and benefits of AI can still massively assist commanders while ensuring that lethal-force attacks are decided by human decision-makers. Nonetheless, even as this doctrine is upheld, Kendall indicated that “not using” AI can quickly translate into losing in combat. Processing speeds and certain kinds of data organization and analysis are “exponentially” faster when enabled by AI.