By Kris Osborn, President, Center for Military Modernization
How soon will the US need to be prepared to fight armies of autonomous robots? The answer, while unclear in some respects, may be "pretty soon." As clichéd as "Terminator" comparisons have become in analyses of robotics, autonomy and AI, there is something quite "real" about this possibility, at least to a degree.
The consequences are serious, because while the Pentagon and the US services heavily emphasize "ethics" with AI and the need to ensure a "human in the loop" regarding the use of lethal force, there is little to no assurance that potential adversaries will embrace a similar approach. This introduces potentially unprecedented dangers that are not lost on the Pentagon, and it is one reason why there are so many current efforts to optimize the use of AI, autonomy and machine learning in combat operations and weapons development.
We have all heard much discussion of the term "zero trust," referring to critical, high-priority efforts to "secure" AI and make it more reliable. Certainly the merits of AI are well established; however, a clear and often-discussed paradox continues to take center stage: an AI system can only be as effective as its database, so what happens when an advanced AI-enabled system encounters something it has not "seen" or processed before? This presents the prospect of a "false positive" or incorrect analysis, generating the kind of concern that now functions as an "impetus" for zero trust and the need to make AI more reliable.
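The underlying problem can be illustrated with a minimal sketch. The example below is entirely hypothetical (not drawn from any Army system): a toy classifier that, rather than forcing a label onto an input unlike anything in its training data, flags it for a human operator — the kind of deferral behavior a zero-trust posture pushes toward. The feature vectors, labels and distance threshold are all invented for illustration.

```python
# Hypothetical sketch: a nearest-neighbor classifier that refuses to label
# inputs too far from anything in its training data, deferring to a human
# instead of risking a confident "false positive".

def euclidean(a, b):
    # Straight-line distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify(sample, training_data, threshold=1.0):
    """training_data: list of (feature_vector, label) pairs."""
    distance, label = min(
        (euclidean(sample, vec), lab) for vec, lab in training_data
    )
    if distance > threshold:
        # Nothing in the database resembles this input: defer, don't guess.
        return "UNRECOGNIZED: defer to human operator"
    return label

# Invented two-feature training set with two labels.
training = [((0.0, 0.0), "friendly"), ((5.0, 5.0), "hostile")]
print(classify((0.2, 0.1), training))    # near known data -> "friendly"
print(classify((50.0, 50.0), training))  # unlike anything seen -> deferred
```

Real systems use far richer out-of-distribution detection, but the design choice is the same: the system's confidence is bounded by how well the input matches its data.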
Much progress has already taken place in this respect, as advanced algorithms can now discern meaning and accurately analyze seemingly unknown ideas and objects by examining "context" and surrounding indicators. For instance, algorithms are increasingly able to distinguish "ball" as in football from "ball" as in a formal dance by assessing the surrounding words to establish the conceptual framework. AI-enabled "analytics" continues to get faster and faster, with the goal of engineering a system that can perform near "real-time" analysis and make reliable, accurate determinations about both recognized and seemingly unrecognized indicators.
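The "ball" example above can be sketched in a few lines. This is a deliberately simplified, hypothetical illustration of context-based word-sense disambiguation: the indicator word lists are hand-made here, whereas a real system would learn such associations from large volumes of data.

```python
# Hypothetical sketch of context-based disambiguation: the sense of "ball"
# is chosen by counting how many nearby words overlap with small hand-made
# indicator sets for each sense.

SENSES = {
    "sports ball": {"kick", "goal", "field", "team", "score"},
    "formal dance": {"gown", "waltz", "orchestra", "invitation", "dance"},
}

def disambiguate(sentence, target="ball"):
    # Context = the other words in the sentence, excluding the target word.
    context = set(sentence.lower().replace(".", "").split()) - {target}
    # Pick the sense whose indicator words overlap most with the context.
    return max(SENSES, key=lambda sense: len(SENSES[sense] & context))

print(disambiguate("The team will kick the ball toward the goal"))  # sports ball
print(disambiguate("She wore a gown to the waltz at the ball"))     # formal dance
```

Modern language models do this statistically across an entire passage rather than with fixed word lists, but the principle — surrounding indicators resolve an ambiguous term — is the same one described above.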
Army “Defend AI”
The Army has been working intensely on this and, while making progress, is surging into the next step associated with securing AI: "defending" AI from attacks and countermeasures. As the first service planning an actual AI program of record in its five-year budget planning (POM), the Army has launched a "100-day" AI-risk-assessment pilot to advance efforts to secure, harden and improve AI-enabled systems and the promise they bring.
"Subsequent to the 100-day implementation plan will be a 500-day implementation plan. We set the conditions with the 100-day plan, and with the 500-day effort we are going to operationalize that. We are the first service that has POM'd (program objective memorandum) a program of record for Artificial Intelligence," Mr. Young Bang, Principal Deputy Assistant Secretary of the Army for Acquisition, Logistics and Technology, told Warrior in an interview.