
A group of committed ARL scientists is working on advanced, AI-enabled computer algorithms designed to support real-time manned-unmanned "machine learning."

by Kris Osborn, President, Center for Military Modernization
(Aberdeen Proving Ground, Md.) What if, when taking heavy incoming enemy fire, US Army tanks and other armored vehicles were able to launch AI-enabled autonomous drones over the other side of a ridge to find, and help destroy, additional fast-approaching enemy forces? While hand-launched drones certainly exist today, the concept here would be for drones to autonomously launch, fly, land and adjust mid-flight to emerging variables in order to quickly learn, adapt and reposition to achieve mission success.
This is a paradigm-changing measure of AI-enabled machine learning which Army Research Laboratory scientists anticipate will shape, inform and potentially even drive warfare and concepts of operation 10, 20 or even 30 years from now.
A group of committed ARL scientists is working on advanced, AI-enabled computer algorithms designed to support real-time manned-unmanned "machine learning" wherein human input can enable an autonomous drone to essentially "learn" or "adapt" behaviors in response to certain contingencies. The experimental concept is based upon a certain reciprocity, meaning the machine is able to respond to and incorporate critical input from a human. The ongoing experimentation could be described as an "evolution" of a certain kind, as AI-capable systems are known to only be as effective as their databases. This can at times present a quandary: how might an AI system respond in the event that it comes across something that is not part of its database? How quickly can it assimilate, accurately analyze and organize new information which is not part of its database? ARL scientists are fast making progress with this "evolution" by training drones and autonomous systems to respond to and incorporate "human" input in real time.
"Our ultimate goal is to have AI that is robust, that we can actually use in the field, and that does not break down when it encounters a new situation, something new. We are hoping to use this technology to better advance the ways soldiers use that technology and get that technology to adapt to the soldier," Dr. Nicholas Waytowich, Machine Learning Research Scientist with DEVCOM Army Research Laboratory, told Warrior in an interview. DEVCOM is part of Army Futures Command.
Waytowich showed a cutting edge video demo in which a drone seeks to autonomously land on a tank, learn new behaviors and potentially respond to new mission requirements. With human-machine interface, a concept long inspiring Army thinking about future war, a drone could potentially make time-sensitive, critical adjustments and learn new information instantly.
"I can take control of the drone, and I can fly it with this…but when I'm not controlling it, the AI will be in control. And so I can start off by giving demonstrations. But then also I can let the AI fly, and if the AI hasn't learned it perfectly and starts navigating towards the [tank], I can intervene and course correct," Waytowich explains.
As part of the demonstration, Waytowich showed a drone landing autonomously on a tank while receiving critical new input and instructions from a human throughout its flight. The idea is not just for the drone to follow human commands but rather to integrate the new information into its existing database so the behavior and the mission become known and recognizable. For instance, the drone might receive commands from a soldier regarding images its cameras pick up from below, and instantly learn both the action itself and the images it receives, making them part of its existing "library" such that it can perform the tasks autonomously moving forward.
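The intervention idea Waytowich describes, in which the AI acts on its own until a human overrides it, and the override itself becomes training data, can be sketched in a few lines. This is a minimal illustration in the spirit of interactive imitation learning; the states, actions and lookup-table "policy" are hypothetical stand-ins, not ARL's actual system.

```python
# Minimal sketch of intervention-based learning: human corrections are
# stored so the same situation is handled autonomously next time.
# All state and action names here are made up for illustration.

class InterventionLearner:
    def __init__(self):
        # "Library" of demonstrated behavior: state -> corrective action
        self.dataset = {}

    def act(self, state):
        # Fall back to a safe default when the state is outside the database
        return self.dataset.get(state, "hold_position")

    def record_intervention(self, state, human_action):
        # A human override is recorded as a new labeled example
        self.dataset[state] = human_action

learner = InterventionLearner()
# The AI has never seen "vehicle_below"; the operator corrects it once...
learner.record_intervention("vehicle_below", "descend_and_land")
# ...and afterward the behavior is part of its library
print(learner.act("vehicle_below"))   # -> descend_and_land
print(learner.act("unknown_state"))   # -> hold_position
```

A real system would generalize across similar states with a learned model rather than an exact-match table, but the loop (act, intervene, record, retrain) is the same.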
“We have tools to allow humans and AI to work together,” Waytowich said.
Warfare in the future is likely to involve a dangerous and unpredictable mixture of air-sea-land-space-cyber weapons, strategies and methods of attack, creating a complex interwoven picture of variables likely to confuse even the most elite commanders.
This anticipated “mix” is a key reason why futurists and weapons developers are working to quickly evolve cutting edge applications of AI, so that vast and seemingly incomprehensible pools of data from disparate sources can be gathered, organized, analyzed and transmitted in real time to human decision makers. In this respect, advanced algorithms can increasingly bounce incoming sensor and battlefield information off of a seemingly limitless database to draw comparisons, solve problems and make critical, time sensitive decisions for human commanders in war. Many procedural tasks, such as finding moments of combat relevance amid hours of video
feeds or ISR data, can be performed exponentially faster by AI-enabled computers. At the same time, there are certainly many traits, abilities and characteristics unique to human cognition and less able to be replicated or performed by machines. This apparent dichotomy is perhaps why the Pentagon and the military services are fast pursuing an integrated approach combining human faculties with advanced AI-enabled computer algorithms.
Human-machine interface, manned-unmanned teaming and AI-enabled “machine learning” are all terms referring to a series of cutting edge emerging technologies already redefining the future of warfare and introducing new tactics and concepts of operation.
Just how can a mathematically-oriented machine using advanced computer algorithms truly learn things? What about more subjective variables less digestible or analyzable to machines, such as feeling, intuition or certain elements of human decision-making faculties? Can a machine integrate a wide range of otherwise disconnected variables and analyze them in relation to one another?
When a drone learns from data without labels, ARL scientists refer to this as unsupervised learning, meaning the system may not be able to "know" or contextualize what it is looking at. In effect, the data itself needs to be "tagged," "labeled" and "identified" for the machine so that it can be quickly integrated into its database as a point of reference for comparison and analysis.
"If you want AI to learn the difference between cats and dogs, you have to show it images…but I also need to tell it which images are cats and which images are dogs, so I have to 'tag' that for the AI," Waytowich said.
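Waytowich's cats-and-dogs point is the essence of supervised learning: every training example must carry a human-supplied tag before the model can learn the distinction. A toy sketch, using made-up numeric features in place of real images:

```python
# Toy illustration of the "tagging" step in supervised learning.
# Each training example pairs a feature vector with a human-supplied label;
# the feature values are invented stand-ins for real image features.

labeled_data = [
    # (feature vector, tag supplied by a human)
    ((0.9, 0.1), "cat"),
    ((0.8, 0.2), "cat"),
    ((0.2, 0.9), "dog"),
    ((0.1, 0.8), "dog"),
]

def classify(features):
    # 1-nearest-neighbor: return the label of the closest tagged example
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(labeled_data, key=lambda pair: dist(pair[0], features))[1]

print(classify((0.85, 0.15)))  # -> cat
print(classify((0.15, 0.85)))  # -> dog
```

Without the tags, the same data would only support unsupervised grouping; the labels are what let the system name what it sees.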
As rapid advances in AI continue to reshape thinking about the future of warfare, some may raise the question as to whether there are limits to its capacity when compared to the still somewhat mysterious and highly capable human brain.
Army Research Lab scientists continue to explore this question, pointing out that the limits or possibilities of AI are still only beginning to emerge and are expected to yield new, currently unanticipated breakthroughs in coming years. Loosely speaking, the fundamental structure of how AI operates is analogous to the biological processing associated with the vision nerves of mammals. The processes through which signals and electrical impulses are transmitted through the brain of mammals conceptually mirror or align with how AI operates, senior ARL scientists explain. This means that a fundamental interpretive paradigm can be established, but also that scientists are only now beginning to scratch the surface of possibility when it comes to the kinds of performance characteristics, nuances and phenomena AI might be able to replicate or even exceed.
For instance, could an advanced AI-capable computer have an ability to distinguish a dance "ball" from a soccer "ball" in a sentence by analyzing the surrounding words and determining context? This is precisely the kind of task AI is increasingly being developed to perform, essentially developing an ability to identify, organize and "integrate" new incoming data not previously associated with its database in an exact way.
Waytowich also described how humans can essentially "interact" with the machines by offering timely input of great relevance to computerized decision-making, a dynamic which enables fast "machine learning" and helps "tag" data.
“If you have millions and millions of data samples, well, you need a lot of effort to label that. So that is one of the reasons why that type of solution training AI doesn't scale the best, right? Because you need to spend a lot of human effort to label that data. Here, we're taking a different approach where we're trying to reduce the amount of data that it needs because we're not trying to learn everything beforehand. We're trying to get it to learn tasks that we want to do on the fly through just interacting with us,” Waytowich said.
Building upon this premise, many industry and military developers are looking at ways through which AI-enabled machines can help perceive, understand and organize more subjective phenomena such as intuition, personality, temperament, reasoning, speech and other factors which inform human decision-making. The somewhat ineffable mix of variables informing human thought and decision-making is clearly quite difficult to replicate, yet by recognizing speech patterns, historical behavior or other influencers in relation to one another, machines may be able to calculate, approximate or at least shed some light upon seemingly subjective cognitive processes. Are there ways machines can learn to "tag" data autonomously? That is precisely the point of the ARL initiatives, as their discoveries related to machine learning could lead to future warfare scenarios wherein autonomous weaponized platforms are able to respond quickly and adjust effectively in real time to unanticipated developments with precision.
"If there's a new task that you want, we want the AI to be able to know and understand what it needs to do in these new situations. But you know, AI isn't quite there yet. Right? It's brittle, it requires a lot of data…and most of the time requires a team of engineers and computer scientists somewhere behind the scenes, making sure it doesn't fail. What we want is to push that to where we can adapt it on the edge and just have the soldier be able to adapt that AI in the field," Waytowich said.
AI in Application
Targets emerge in seconds, incoming enemy fire puts lives at risk and shifting combat dynamics require immediate, on-the-spot decisions -- all as soldiers navigate the complex web of threats during all-out, high-risk ground warfare.
These kinds of predicaments, which characterize much of what soldiers train to face, are immeasurably improved by emerging applications of AI; artificial intelligence can already gather, fuse, organize and analyze otherwise disparate pools of combat-sensitive data for individual soldiers. Target information from night vision sensors, weapons sights, navigational devices and enemy fire detection systems can increasingly be gathered and organized for individual human soldier decision-makers.
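The fusion task described above, merging contacts from night vision, weapons sights, navigation and fire-detection systems into one prioritized picture, can be sketched at a toy level. The sensor names and threat scores below are hypothetical placeholders, not fielded Army data formats:

```python
# Illustrative sketch of multi-sensor fusion for a soldier display:
# per-sensor contact reports are merged and ranked by estimated threat,
# so the most urgent item surfaces first. All values are invented.

from operator import itemgetter

def fuse(sensor_reports):
    # Flatten every sensor's contact list into one picture...
    contacts = [c for report in sensor_reports.values() for c in report]
    # ...and sort so the highest-threat contact reaches the soldier first
    return sorted(contacts, key=itemgetter("threat"), reverse=True)

reports = {
    "night_vision":   [{"id": "vehicle_A", "threat": 0.7}],
    "fire_detection": [{"id": "shooter_B", "threat": 0.95}],
    "nav_system":     [{"id": "obstacle_C", "threat": 0.2}],
}

for contact in fuse(reports):
    print(contact["id"], contact["threat"])
# shooter_B prints first: detected enemy fire outranks the other contacts
```

Real fusion engines also deduplicate contacts seen by multiple sensors and weigh sensor confidence, but the organize-and-prioritize step is the core of easing the soldier's cognitive burden.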
However, what comes after this? Where will AI go next in terms of changing modern warfare for Army infantry on the move in war? The Army Research Laboratory is now immersed in a complex new series of research and experimentation initiatives to explore a "next level" of AI. Fundamentally, this means not only using advanced algorithms to ease the cognitive burden for individual soldiers -- but also networking and integrating otherwise stovepiped AI systems. In effect, this could be described as performing AI-enabled analytics on groups of AI systems themselves.
"Autonomy is doing things in a snipped way that can be connected. We can benefit from an overarching AI approach, something that looks at the entire mission. Right now our autonomy solves very discrete problems that are getting more complicated," J. Corde Lane, Ph.D., Director, Human Research and Engineering, CCDC-Army Research Laboratory, told Warrior in an interview.
What does this mean? In essence, it translates into a way combat commanders will not only receive AI-generated input from individual soldiers but also be able to assess how different AI systems can themselves be compared to one another and analyzed as a dynamic group. For instance, Lane explained, perhaps multiple soldier-centric AI-empowered assessments can be collected and analyzed in relation to one another with a mind to how they impact a broader, squad-level combat dynamic. In particular, simultaneous analysis of multiple soldier-oriented AI system can help determine a best course of action for an entire unit, in relation to an overall mission objective.
“What is the entire mission and possible courses of action? Do we optimize the logistics flow? Find targets as the dynamic battlefield gets more complex? The Commander can draw upon advanced AI to explore new options,” Lane explained.
Therefore, in addition to drawing upon algorithms able to organize data within a given individual system, future AI will encompass using real-time analytics to assess multiple systems simultaneously and how they impact one another, offering an overall integrated view. All of this progress, just as is the case now, will still rely heavily upon human decision-making faculties to optimize its added value for combat. Integrating a collective picture that draws upon a greater range of variables will require soldiers to incorporate new tactics and methods of analysis to best leverage the additional available information.
“When we have new and improved autonomy coming in, soldiers need to know how to use that. How do you keep the soldier always at the center and adapt to them as you adapt to the new AI?” Lane asked.
Perhaps one soldier receives organized sensor-driven targeting data relevant to a specific swath of terrain, while another AI system is organizing variables to determine the supply flow of ammunition, fuel or other logistical factors.
“Data never seen cannot be learned. It is not about AI, but combining AI with a soldier who has the concept of an entire mission. AI provides information and then they get put together. When you are under fire, you are going to need different types of information,” Lane explained.
For example, comparing and analyzing various AI systems to help engender a collective picture might enable a commander to know, as Lane explained, "If you go this way you will use more fuel but it will be safer."
Interestingly, Lane’s point about the irreplaceable characteristics of human cognition in the face of new AI-driven technologies is anticipated in a 2017 essay from Chatham House, the Royal Institute of International Affairs, called “Artificial Intelligence and the Future of War.” The essay, written by M.L. Cummings, states that “replicating the intangible concept of intuition, knowledge-based reasoning and true expertise is, for now, beyond the realm of computers.”
Mathematically oriented computer algorithms naturally face limitations when it comes to things like judgments, feelings or quickly assessing not-yet-seen information; an AI system can only be as effective as the information already stored in its database. While machine-learning techniques continue to accelerate the pace at which an existing AI database can integrate and perform analytics on new information, AI-infused computing can only make decisions or solve problems in relation to the information it already has stored. These databases are increasingly vast, seemingly limitless, yet they do need to be consistently fed with not-yet-stored information of great relevance to wartime decisions.
The Chatham House essay puts it this way:
…Every autonomous system that interacts in a dynamic environment must construct a world model and continually update that model. This means that the world must be perceived (or sensed through cameras, microphones and/or tactile sensors) and then reconstructed in such a way that the computer ‘brain’ has an effective and updated model of the world it is in before it can make decisions. The fidelity of the world model and the timeliness of its updates are the keys to an effective autonomous system… (“Artificial Intelligence and the Future of War,” M.L. Cummings)
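The loop Cummings describes (sense the world, update the model, then decide) can be reduced to a bare-bones sketch. The sensors and decision rule here are placeholders invented for illustration:

```python
# Bare-bones sketch of the perceive / update / decide cycle of an
# autonomous system. Observations and rules are hypothetical.

def perceive(sensors):
    # In a real system: cameras, microphones, tactile sensors
    return dict(sensors)

def decide(world_model):
    # Decisions are only as good as the model's fidelity and freshness
    if world_model.get("obstacle_ahead"):
        return "turn"
    return "advance"

world_model = {}
actions = []
for observation in [{"obstacle_ahead": False}, {"obstacle_ahead": True}]:
    world_model.update(perceive(observation))  # continual model update
    actions.append(decide(world_model))

print(actions)  # -> ['advance', 'turn']
```

The point of the sketch is the ordering: perception updates the model before any decision is made, so a stale or low-fidelity model directly degrades the quality of the action chosen.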
Some future-oriented AI research is now analyzing methods of successfully performing analytics on more subjective nuances associated with human perception and behavior, such as speech patterns or catalogued information regarding previous behaviors, tendencies or decisions. Nonetheless, not only is this work in its early stages, it does not promise to mitigate some of the known limitations of AI. This question is, interestingly, taken up in a Switzerland-based academic journal known as “Information.”
The 2018 essay, called “Artificial Intelligence and the Limitations of Information,” examines some of the challenges associated with AI, including complexities related to “meanings” and “inferences.” The article, written by Paul Walton, says that what “we are prone to ignore, is at the heart of many fundamental questions about information. Truth, meaning, and inference are expressed using information, so it is important to understand how the limitations (of AI) apply.” (Information is a peer-reviewed scientific journal published monthly by MDPI.)
Walton also cites a difficulty for AI to fully synthesize or compare some different “ecosystems” of information. For example, certain kinds of information collection systems might be unique to individual data sets; the essay, for instance, says financial data compilation may differ from processes used by mathematicians. Therefore, resolving potential differences between what the essay calls “multiple interactions” might prove difficult.
Future AI will, the article explains, need to “analyze the integration challenges of different AI approaches—the requirements for delivering reliable outcomes from a range of disparate components reflecting the conventions of different information ecosystems.”
All this being said, the current and anticipated impact of fast-progressing AI continues to be revolutionary in many ways; it is massively changing the combat landscape, bringing unprecedented and previously unknown advantages. AI is progressing quickly when it comes to consolidating and organizing data from otherwise separate sensors on larger platforms, such as an F-35 or future armored vehicle, yet integrating some of these same technical elements has not reached dismounted infantry to the same extent.
For example, Dillon further elaborated that these kinds of emerging algorithms can quickly distinguish the difference between someone extending a weapon or merely digging a hole -- or recognize enemy armored vehicles. The AI-empowered system could also quickly cue a combat analyst so, as he put it, “they don’t spend time poring over massive amounts of data.”
The concept here is not so much the specific systems as it is a need to engineer an adaptable technical infrastructure sufficient to evolve as technology changes. Lane described this as a co-evolution between indispensable human cognition and decision-making and AI-enabled autonomy. The ARL works closely with Army Futures Command’s Soldier Lethality Cross-Functional Team which, among other things, is focused on extending Soldier as a System architecture across an Army squad unit (Adaptive Squad Architecture).
“In some cases the better chance of victory will be due to faster adaptability. Creating intelligent systems that are able to self-adapt to Soldiers' needs and seamlessly adjust as Soldiers adapt to the changing situation promotes rapid co-evolution between Soldiers and autonomy,” Lane added.
The Army and its industry partners are now working on advancing the algorithms, writing the code, upgrading hardware and software and engineering the standards through which to create interfaces between nodes on a soldier or between groups of soldiers. For instance, Dillon explained that some of these nodes could include laser designators, input from radio waves or data coming in from satellite imagery overhead. “Computers are so much faster,” as Dillon put it, explaining that algorithms are now being advanced to “train at scale” to analyze a series of images and pinpoint vital moments of relevance.
“We build out algorithms we could run on some kind of soldier-worn system such as a small form factor computer, thermal imaging, daytime cameras or other data coming in quickly through satellites. When you network all of this together and bring in all the sensor data, machine learning can help give soldiers the accurate prompts,” Dillon added. “The more we do this, the smarter the algorithms get.”