By Kris Osborn, President, Center for Military Modernization
How soon will the US need to be prepared to fight armies of autonomous robots? The answer, while unclear in some respects, may be “pretty soon.” However cliched the “Terminator” comparisons circulating in analyses of robotics, autonomy and AI have become, there is something quite real about this possibility.
The consequences are serious: while the Pentagon and the US services heavily emphasize “ethics” with AI and the need to ensure a “human in the loop” regarding the use of lethal force, there is little to no assurance that potential adversaries will embrace a similar approach. This introduces potentially unprecedented dangers not lost on the Pentagon, one reason there are so many current efforts to optimize the use of AI, autonomy and machine learning in combat operations and weapons development.
We have all heard much discussion of “zero trust,” referring to critical, high-priority efforts to “secure” AI and make it more reliable. Certainly the merits of AI are well established; however, a clear and often-discussed paradox continues to take center stage: an AI system can only be as effective as its database, so what happens when an advanced AI-enabled system encounters something it has not “seen” or processed before? This presents the prospect of a “false positive” or incorrect analysis, generating the kind of concern now functioning as an impetus for zero trust and the need to make AI more reliable.
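To make the paradox concrete, here is a minimal sketch of one common mitigation: a classifier that abstains and flags an input for human review when its confidence falls below a threshold, rather than forcing a possibly false-positive answer. The labels, scores and threshold below are purely illustrative assumptions and do not reflect any Army or DoD system.

```python
import math

def softmax(scores):
    """Convert raw classifier scores into probabilities."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify_or_abstain(scores, labels, threshold=0.85):
    """Return a label only when the model is confident; otherwise defer."""
    probs = softmax(scores)
    best_prob = max(probs)
    best_label = labels[probs.index(best_prob)]
    if best_prob < threshold:
        # The input may lie outside the training distribution: flag it
        # for a human rather than risk a "false positive" identification.
        return ("FLAG_FOR_HUMAN_REVIEW", best_prob)
    return (best_label, best_prob)

# Hypothetical example: near-tied scores trigger deferral.
labels = ["truck", "tank", "civilian vehicle"]
print(classify_or_abstain([2.1, 1.9, 2.0], labels))  # near-tie -> defer
print(classify_or_abstain([6.0, 0.5, 0.2], labels))  # clear -> classify
```

The design choice here is simply that an uncertain answer is routed to a person instead of acted on, which is the behavior the zero-trust discussion is pushing toward.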
Much progress has already been made in this respect, as advanced algorithms can now discern meaning and accurately analyze seemingly unknown ideas and objects by analyzing “context” and surrounding indicators. For instance, algorithms are increasingly able to distinguish “ball” as in football from “ball” as in a formal dance by assessing the surrounding words to establish the conceptual framework. AI-enabled analytics also keeps getting faster, with the goal of engineering a system that can perform near real-time analysis and make reliable, accurate determinations about both recognized and seemingly unrecognized indicators.
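The “ball” example can be illustrated with a deliberately simple sketch: score each candidate sense by how many of its cue words appear in the surrounding sentence. The sense lexicons below are invented for illustration; real systems learn this from data with contextual embeddings rather than hand-built word lists.

```python
# Invented cue-word lists for the two senses of "ball" in the example.
SENSES = {
    "sports ball": {"kick", "football", "field", "goal", "player", "throw"},
    "formal dance": {"gown", "waltz", "orchestra", "evening", "masquerade"},
}

def disambiguate(sentence, senses=SENSES):
    """Pick the sense of 'ball' whose cue words best match the sentence."""
    words = set(sentence.lower().split())
    scores = {sense: len(cues & words) for sense, cues in senses.items()}
    return max(scores, key=scores.get), scores

print(disambiguate("the player will kick the ball toward the goal"))
# -> ('sports ball', {'sports ball': 3, 'formal dance': 0})
print(disambiguate("she wore a gown to the masquerade ball for the waltz"))
# -> ('formal dance', {'sports ball': 0, 'formal dance': 3})
```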
Army “Defend AI”
The Army has been working intensely on this and, while making progress, is surging into the next step associated with securing AI: “defending” AI from attacks and countermeasures. As the first service planning an actual AI program of record in its five-year budget planning (POM), the Army has launched a “100-day” AI risk-assessment pilot to advance efforts to secure, harden and improve AI-enabled systems and the promise they bring.
“Subsequent to the 100-day implementation plan will be a 500-day implementation plan. We set the conditions with the 100-day plan, and with the 500-day effort we are going to operationalize that. We are the first service that has POM’d (program objective memorandum) a program of record for Artificial Intelligence,” Mr. Young Bang, Principal Deputy Assistant Secretary of the Army for Acquisition, Logistics & Technology, told Warrior in an interview.
The thrust of this multi-layered effort, Bang explained to Warrior, involves a number of key variables, such as efforts aimed at inspiring industry to develop algorithms that can be integrated into Army and DoD systems in support of a secure, “layered” defense network. The rapidly emerging name for this massive Army effort is “Defend AI.”
“How do we help industry adopt third-party algorithms faster – if we develop algorithms or if we ask industry to come into our trusted and secure environment with our data sets, there is a process for that. One of the things we realize is that industry is going to be far better than the Army or DoD at developing amazing algorithms. We want to adopt that faster. With this, we can make a risk-based decision to put it on a closed system that we automate, so that we can actually identify our known risk… and just like everything else on the commercial side or within the government, make a risk-based decision to put it onto a closed system, put it onto a business system, put it into a weapon system,” Bang said.
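A hedged sketch of what such a risk-based intake decision might look like in code follows. The checks, point values and deployment tiers are hypothetical illustrations of the idea, not the Army’s actual vetting process.

```python
from dataclasses import dataclass

@dataclass
class CheckResult:
    name: str
    passed: bool
    risk_points: int  # residual risk contributed when a check fails

def deployment_decision(results):
    """Map residual risk from automated checks onto a deployment tier."""
    residual = sum(r.risk_points for r in results if not r.passed)
    if residual == 0:
        return "weapon system (human in the loop)"
    if residual <= 3:
        return "business system"
    if residual <= 6:
        return "closed/sandboxed system"
    return "reject pending remediation"

# Invented example checks for a third-party algorithm under review.
checks = [
    CheckResult("provenance of training data verified", True, 4),
    CheckResult("adversarial robustness test suite", False, 3),
    CheckResult("static scan of model-serving code", True, 2),
]
print(deployment_decision(checks))  # -> 'business system'
```

The point of the tiers is the one Bang makes: a known, quantified risk can still be accepted, provided the algorithm lands in an environment that matches that risk.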
AI & Human-Machine Integration
Bang explained that these efforts align with a clear conceptual foundation emphasizing the importance of both human and machine capabilities. Not only is there the Pentagon doctrinal requirement that humans be “in the loop” regarding lethal force, but Army technology developers recognize that advanced AI-enabled computing simply cannot replicate critical, yet more subjective, phenomena fundamental to human consciousness. What about ethics? Intuition? Morality? Consciousness? Emotion? These are just a few of the many distinctly “human” attributes central to decision-making and therefore critical to combat operations. Can mathematically generated algorithms replicate these things?
These kinds of questions and variables likely informed Army Secretary Christine Wormuth’s recent Human-Machine Interface directive designed to identify and optimize the requisite balance between high-speed machines and human decision-making.
“Pursuing artificial intelligence capabilities ethically is a core part of the Department [of Defense’s] policy,” Wormuth said, according to an Army essay, which added that “the integration of human input remains a core principle… and maintaining a human focus amid the push to rapidly employ artificial intelligence systems will continue to be a primary objective.”
The Army is ahead of the curve when it comes to leveraging the life-saving, combat-enhancing benefits of properly applied AI, as the service recently demonstrated a breakthrough using AI and autonomy to safely and quickly maneuver weapons into attack position and prepare for rapid fire. The Autonomous Multi-Domain Launcher, as it is called by Army weapons developers, demonstrated an ability to deliver precision fires using human-machine teaming. An Army essay on the AML live fire said the “exercise showcased the launcher’s capability to move independently from a hidden position to a firing point, adjust its direction as instructed, and receive fire control commands from a remote gunner.”
Human In the Loop
One of Bang’s key approaches supports this effort; while citing the unprecedented and seemingly limitless merits of AI and the combat advantages it brings, he was clear to emphasize the need for “human” command and control in the many areas involving the “art,” or less calculable, elements of combat. Part of this effort, Bang explained, involves organizing and “scaling,” or layering, the data presented to human decision-makers to ensure highly efficient, optimal decision-making.
“We have what we call in leadership an art and a science… and for us, we want to enable a lot of the science to really accelerate that speed, whether it’s the data, the visioning, the fusion of data… so we could get insights to enable the leader or the commander to make decisions based on military experience. There is a dimension of processing all the data, but the leaders don’t need all the data; they just need some insights to really help them have data points so they can make judgments and decisions, really the art of the war,” Bang said. Bang stressed that this balance is critical to questions of “life and death” in combat.
“How do we get algorithms in there that will enable us to do things much faster and more efficiently, and give our soldiers, our warfighters, more bandwidth so they’re not doing menial tasks and can actually do higher-performing tasks?” he said.
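The “insights, not all the data” idea Bang describes can be sketched as a simple fusion step that collapses many raw reports into a few ranked headlines for a commander. The report fields and scoring below are illustrative assumptions, not an actual Army data model.

```python
from collections import Counter

# Invented raw sensor reports; a real feed would carry far more fields.
reports = [
    {"grid": "NV1234", "type": "vehicle", "confidence": 0.9},
    {"grid": "NV1234", "type": "vehicle", "confidence": 0.8},
    {"grid": "NV1234", "type": "vehicle", "confidence": 0.7},
    {"grid": "NV9876", "type": "dismount", "confidence": 0.6},
]

def top_insights(reports, n=2):
    """Fuse raw reports into at most n headline insights by location."""
    by_grid = Counter(r["grid"] for r in reports)
    return [f"{count} report(s) of activity near grid {grid}"
            for grid, count in by_grid.most_common(n)]

for line in top_insights(reports):
    print(line)
# -> 3 report(s) of activity near grid NV1234
# -> 1 report(s) of activity near grid NV9876
```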
Defending AI from Attacks
Securing an AI-empowered system’s ability to analyze, discern, organize and efficiently transmit time-sensitive combat intelligence is a top priority, yet the Army has also started thinking about “defending AI” against attacks, spoofing and other high-tech efforts to “confuse” or derail AI, perhaps creating a false positive or a catastrophic error. With this in mind, Bang explained how the AI-risk program is also exploring ways to develop “countermeasures” able to identify and stop attacks or “spoofing” attempts on an AI system. Perhaps reliability can be advanced by efforts to analyze a number of variables and indicators in relation to one another. For instance, when it comes to identifying an object, perhaps there are acoustic or thermal signatures associated with an enemy target alongside electro-optical or visual indicators. These are the kinds of things likely informing cutting-edge efforts to “defend” AI systems from enemy attack.
“This is a lot harder concept to get to, but we are pushing the envelope around AI, counter AI and counter to counter AI,” Bang said.
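One way to picture the cross-indicator idea above is a corroboration gate: an identification is accepted only when independent sensor modalities agree, since a decoy that fools a camera is less likely to also fake a thermal and an acoustic signature. This is a hypothetical sketch of that principle, not a description of any fielded countermeasure.

```python
def corroborated_id(visual, thermal, acoustic, required_agreement=2):
    """Accept a track identification only if enough sensors agree.

    Each argument is the label that modality's classifier produced,
    or None if that sensor has no reading.
    """
    votes = [v for v in (visual, thermal, acoustic) if v is not None]
    for label in set(votes):
        if votes.count(label) >= required_agreement:
            return label
    # Disagreement across modalities is itself a spoofing indicator,
    # so the track is escalated rather than silently classified.
    return "UNRESOLVED: possible spoofing, escalate to operator"

# Invented example labels.
print(corroborated_id("tank", "tank", None))         # -> 'tank'
print(corroborated_id("tank", "civilian", "truck"))  # -> UNRESOLVED...
```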
Out of the Loop AI
Bang also addressed emerging questions from Pentagon weapons developers introducing the prospect of purely “autonomous,” or “out of the loop,” AI for purely defensive, non-lethal purposes such as drone and cruise missile defense. Should AI be able to instantly determine that a given threat or object involves no humans and can be defeated or “intercepted” with non-lethal force, is there a place for high-speed processing and AI-enabled defenses against incoming missiles, mortars, rockets, unmanned systems or drone-swarm attacks? Can AI be trusted to make these determinations accurately? What if there is an incorrect identification or a false negative? This is why Bang and others are fast-tracking AI to leverage and optimize its merits while simultaneously prioritizing ethics and “cautioning” against “unintended uses.” Out-of-the-loop AI could perhaps someday emerge under specific conditions for purely non-lethal, defensive purposes as a high-speed way of saving lives under rapid enemy attack. Certainly the merits of AI have yet to be fully uncovered, yet Bang was clear to prioritize ethical, doctrinal and security-related variables surrounding the application of AI. This is likely part of the entire rationale for “Defend AI,” as the effort seeks to accelerate the fast-returning benefits of AI while prioritizing ethics and initiatives aimed at “securing” AI to make it more reliable and resilient against enemy attack.
“AI is great and will advance us, but from the military side, we always think about the unintended usages of that right. In this context you talked about humans out of the loop or for defensive purposes. This could easily be repurposed for other unintended reasons as well as, again, if we don’t protect or harden, that becomes a vulnerability that could be taken over,” Bang said.
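The constraint Bang describes can be pictured as a fail-safe gate: autonomous engagement is permitted only when a threat is positively confirmed unmanned and the planned intercept is non-lethal, with anything uncertain routed to a human. The fields and logic below are a hypothetical sketch of that principle, not Pentagon doctrine.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Track:
    inbound: bool            # closing on the defended asset?
    manned: Optional[bool]   # None = unknown, which must fail safe
    effector_lethal: bool    # would the planned intercept be lethal?

def may_engage_autonomously(track: Track) -> bool:
    """Fail safe: unknown occupancy or a lethal effector requires a human."""
    return (track.inbound
            and track.manned is False  # positively confirmed unmanned
            and not track.effector_lethal)

# Invented examples: a confirmed-unmanned inbound drone may be intercepted;
# an unknown contact is always referred to a human operator.
print(may_engage_autonomously(Track(True, False, False)))  # -> True
print(may_engage_autonomously(Track(True, None, False)))   # -> False
```

Note the asymmetry: the gate never infers “unmanned” from missing data, which is exactly the fail-safe posture the quote above argues for when autonomy could be “repurposed for other unintended reasons.”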
Kris Osborn is the President of Warrior Maven – Center for Military Modernization. Osborn previously served at the Pentagon as a Highly Qualified Expert with the Office of the Assistant Secretary of the Army for Acquisition, Logistics & Technology. Osborn has also worked as an anchor and on-air military specialist at national TV networks, appearing as a guest military expert on Fox News, MSNBC, The Military Channel and The History Channel. He also holds a master’s degree in Comparative Literature from Columbia University.