

By Kris Osborn, Warrior
In what could be called a technological and tactical paradox, the US Army is experimenting with using AI itself, operating in coordination with human decision-making, to improve the accuracy and reliability of AI. Advanced, AI-enabled algorithms are being trained to improve AI itself by monitoring and verifying its decision-making process. While AI technology might seem naturally ill-equipped to monitor itself, Army weapons developers and technology experts are working on training and maintaining AI-enabled algorithms to strengthen a system’s ability to “verify” targets, recommend countermeasures and ensure that AI-driven identifications are accurate and more trustworthy.
“We've done some work here at USARPAC (US Army Pacific) on this idea of…. where can you put an artificial intelligence agent inside the loop (decision-making loop). You have to do some training. When you put that agent into the network to help you make that identification or make that friend or foe, you still have to train it and you have to maintain it. You have to care for it and do the maintenance on the algorithm. You've got to be constantly updating the algorithm,” Col. John Harvey, Director of Fires, US Army Pacific Command, told Warrior in a special counterdrone interview.
These early efforts are potentially paradigm-changing to some extent, given that all the US military services and their industry partners have been working toward “zero trust,” a term used to describe efforts to ensure the reliability of an AI system. The challenge with AI is technologically and conceptually quite clear in some respects: an AI-enabled system can only be as effective as its database, which raises the question of how it can discern, analyze and process information that is “not” part of that database. An AI system works by taking incoming sensor-gathered data, bouncing it off a vast database and, after performing the analytics, solving problems, making identifications and performing analyses. Increasingly, advanced AI-capable systems can make determinations based on “context,” or a number of integrated variables, to accurately process new information not trained into their database.
For example, an AI system might be positioned to discern whether the word “ball” in a sentence refers to a dance or a football by analyzing the surrounding words. AI is now more capable of making more “subjective” types of determinations to some degree in this respect, and advanced Army thinkers are working on leveraging AI-enabled “agents,” or algorithms, to monitor AI determinations themselves with human supervision. The idea is to “harden” AI determinations and, for example, ensure a “friend or foe” identification is correct.
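As a loose illustration of that kind of context-based inference, the short Python sketch below guesses which sense of “ball” a sentence uses by counting nearby clue words. The clue lists and function name are invented purely for illustration; a real system would learn such associations from training data rather than hard-coding them.

```python
# Minimal, hypothetical sketch of context-based disambiguation.
# Not any fielded system -- the clue words here are hard-coded for clarity.
CONTEXT_CLUES = {
    "dance": {"gown", "waltz", "orchestra", "ballroom", "danced"},
    "football": {"kick", "kicked", "field", "quarterback", "stadium"},
}

def disambiguate_ball(sentence: str) -> str:
    """Guess which sense of 'ball' a sentence uses by counting context clues."""
    words = set(sentence.lower().split())
    scores = {sense: len(words & clues) for sense, clues in CONTEXT_CLUES.items()}
    best_sense, best_score = max(scores.items(), key=lambda item: item[1])
    # If no clue words appear, report that the context is ambiguous.
    return best_sense if best_score > 0 else "ambiguous"

print(disambiguate_ball("The quarterback kicked the ball across the field"))  # football
print(disambiguate_ball("She wore a gown to the ball and danced a waltz"))    # dance
```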
“This thought process of where you put agents inside of an AI architecture that are trained and maintained by a human impacts the risk calculus. If a human owns the “shoot-don’t shoot,” a human can care, feed, monitor and train the “agent” (AI-enabled algorithm). A human can be behind that agent making sure it’s functioning the way it wants. If you are going to make a “shoot-don’t shoot” determination that is automated, you have to reduce the risk of whether that agent is behaving the way you anticipate and need it to behave,” Harvey told Warrior.
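To make the risk calculus Harvey describes more concrete, here is a minimal, purely hypothetical Python sketch of a human-in-the-loop arrangement: a verifying agent scores a friend-or-foe call, and any low-confidence or lethal recommendation is referred back to a human operator who owns the final decision. The class names, thresholds and stand-in scoring logic are assumptions for illustration, not a description of USARPAC’s actual architecture.

```python
# Conceptual sketch only -- not a fielded Army system.
# An AI "agent" verifies a friend-or-foe call; a human retains the final
# shoot/don't-shoot decision, especially when the agent's confidence is low.
from dataclasses import dataclass

@dataclass
class TrackAssessment:
    track_id: str
    classification: str   # e.g. "friend" or "foe" from the primary AI system
    confidence: float     # 0.0 - 1.0, produced by the verifying agent

def verifying_agent(track_id: str, sensor_features: dict) -> TrackAssessment:
    """Placeholder for a trained verification algorithm.

    In practice this model would be continuously retrained and maintained,
    as the article describes; here it just applies stand-in scoring logic.
    """
    has_iff = sensor_features.get("iff_response") is not None
    confidence = 0.95 if has_iff else 0.55
    classification = "friend" if sensor_features.get("iff_response") else "foe"
    return TrackAssessment(track_id, classification, confidence)

def engagement_decision(assessment: TrackAssessment, human_approves: bool) -> str:
    """A human owns the shoot/don't-shoot call; the agent only recommends."""
    if assessment.confidence < 0.9:
        return "HOLD: low-confidence identification, refer to human operator"
    if assessment.classification == "foe" and human_approves:
        return "ENGAGE (human approved)"
    return "DO NOT ENGAGE"

track = verifying_agent("track-042", {"iff_response": None})
print(engagement_decision(track, human_approves=True))
```

The design point of the sketch is the same one the quote makes: automation can recommend, but the threshold for acting without a human is deliberately set so that uncertain or unexpected agent behavior defaults back to human judgment.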