By Kris Osborn, President, Center for Military Modernization
The Pentagon may have no intention of deploying fully autonomous “lethal,” terminator-type robots that kill without human intervention, but that does not mean the Army and its fellow services are not preparing to fight them. Enemy forces are quite likely, if not fully expected, to deploy thousands of autonomous armed robots programmed to attack without human intervention.
Army Futures Command is working along a cutting-edge dual trajectory: recognizing and applying the breakthrough merits of AI in areas such as surveillance, networking, automated defensive weapons, and data scaling and management, while at the same time anticipating a future warfare environment in which adversaries send fully autonomous attack robots to kill without regard for ethics.
“We are a value-based military and will remain so. Through experimentation we will determine what levels of autonomy are needed and are acceptable on the future battlefield. That will be incorporated into doctrine by our teammates over at TRADOC,” Mr. William Nelson, Deputy, Army Futures Command, told Warrior in an interview.
Can the US military, and the Army, prepare to destroy autonomous, terminator-type enemy robots? Anticipate and counter their movements? Stay ahead of their sensor-to-shooter time curve and targeting precision? And can it do all of this while maintaining its clear ethical and doctrinal stance that humans must be “in the loop” when making decisions about the use of lethal force?
“AI to Destroy Drones, Missiles & Artillery”
As technology continues to break through and AI becomes more reliable, there may be what the Pentagon calls “out-of-the-loop” AI, wherein automated weapons linked to sensors and AI systems can employ force for purely defensive, non-lethal purposes.
The Army has some history with these kinds of challenges and has for years been deliberate about ensuring its doctrine and rules of engagement account for ethical considerations. In 2009, for instance, the Army re-emphasized the importance of its “human-in-the-loop” doctrine by adding language requiring a “human in the loop” for lethal force. This became necessary because, at the time, the Army had successfully demonstrated its Multifunction Utility/Logistics and Equipment (MULE) vehicle, a robotic platform empowered by what the Army called the Autonomous Navigation System (ANS) and capable of autonomously tracking, targeting and destroying enemy targets with Javelin anti-tank missiles. The ability for robots to kill without human intervention was essentially at hand, and the Army made specific efforts to ensure ethical and doctrinal clarity about keeping a “human in the loop.”
The Army is now exploring this again with its family of Robotic Combat Vehicles, some of which will operate with an ability to fire anti-tank weapons. Wouldn’t AI massively speed up a robot’s ability to find and destroy enemy targets? The answer is yes; however, the Army is focused on leveraging this speed and combat efficiency in a way that fully aligns with pressing doctrinal and ethical concerns.
“Out of the Loop AI”
Certainly the need for human decision-making remains critical for Pentagon weapons developers, futurists and technology experts, but what about the use of high-speed, paradigm-changing AI for non-lethal or defensive purposes? The Pentagon is now exploring this through an effort it calls “out-of-the-loop” AI.
Could an AI-enabled system help identify and intercept attacking drones, unmanned systems and robotic vehicles, while also operating as an autonomous interceptor able to “see,” “track” and “destroy” or “intercept” fast-incoming enemy missiles, artillery, mortars or even hypersonic weapons? The speed of computer processing, data analysis and transmission is so paradigm-changing that the Army is already using AI to truncate sensor-to-shooter time from 20 minutes down to a matter of seconds, while still enabling human decision-makers to make the ultimate decision about the use of lethal force. At the Army’s Project Convergence, for instance, an AI-enabled system called Firestorm not only analyzed data from otherwise dissociated sensor nodes but also recommended the optimal “shooter” or “weapon” for a given threat circumstance, which a human could approve in a matter of seconds.
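In broad strokes, that pairing logic can be sketched in code. The example below is a minimal, hypothetical illustration, not the actual Firestorm software: the Threat and Shooter fields, the scoring rule and the function names are all assumptions made for clarity. The essential point is that the algorithm only recommends a pairing, while a human gate sits between recommendation and engagement.

```python
# Minimal, hypothetical sketch of an AI-assisted sensor-to-shooter pairing loop.
# Names, fields and scoring logic are illustrative only; this is not Firestorm.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Threat:
    threat_id: str
    threat_type: str      # e.g. "drone", "artillery", "armor"
    range_km: float

@dataclass
class Shooter:
    name: str
    effective_range_km: float
    suited_for: tuple     # threat types this effector can service
    time_to_engage_s: float

def recommend_shooter(threat: Threat, shooters: list) -> Optional[Shooter]:
    """Rank available shooters; the fastest suitable, in-range option is recommended."""
    candidates = [s for s in shooters
                  if threat.threat_type in s.suited_for
                  and s.effective_range_km >= threat.range_km]
    return min(candidates, key=lambda s: s.time_to_engage_s) if candidates else None

def engage(threat: Threat, shooters: list,
           human_approves: Callable[[Threat, Shooter], bool]) -> str:
    """The algorithm only recommends; a human decision-maker approves lethal force."""
    choice = recommend_shooter(threat, shooters)
    if choice is None:
        return "no suitable shooter available"
    if human_approves(threat, choice):            # human-in-the-loop gate
        return f"engage {threat.threat_id} with {choice.name}"
    return "engagement withheld by human operator"
```

The design choice worth noticing is that the speed gains come from the recommendation step, not from removing the human; the approval callback remains the final authority over lethal force.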
“I think we will fight differently, but to what degree will it be full autonomous? I think it varies. I am a big believer in data. We will use test and experimentation data to inform policy. We want to support any changes with data. We want to show that this thing can behave with a level of confidence using AI and advanced algorithms. That requires a fair amount of data collection and testing,” Nelson said.
This expedited sense, identify, verify and destroy cycle can just as easily be leveraged to use force in a matter of milliseconds for purely defensive, non-lethal purposes. This premise is the basis of the Pentagon’s current “out of the loop” initiative specifically focused on the use of force for “non-lethal” purposes.
However, what about fighting an army of autonomous robots programmed by an enemy to track, surveil, target and “attack” without any human verification? Certainly, inanimate attack robots could be destroyed without human intervention, as the force used to destroy them would be “non-lethal,” since the enemy robots are not living human beings.
Nonetheless, will the US be disadvantaged in a tactical sense by virtue of insisting on clear ethical parameters for AI and the use of force? What about those critical instances where the demands of war require the use of “lethal force”? Can the Army be fast enough to attack and kill while maintaining its doctrinal and ethical restrictions? It certainly seems likely that this is something the Pentagon is working on with great intensity, and this predicament could very well be a key reason why DoD is progressing with its “out-of-the-loop,” non-lethal, autonomous use of force. Also, as is the case with most new warfighting technology, the Army understands that new technologies generate the need for new requirements, maneuver formations and concepts of operation.
Certainly, non-human machines and armed attack robots could be countered with AI-enabled weapons, as they could be destroyed without the need to use “lethal” force against people. However “life-like” they may seem, armed robots can be destroyed without killing anyone, to put it simply.
Regardless, Army Futures Command is likely testing and preparing its weapons with a specific eye toward a future combat environment that will require an effective use of AI in order to prevail. The most important element of this, as Nelson described it, can be analyzed in the context of “data tagging” and “data labeling.”
Improving the reliability of AI is a high-priority Pentagon and Army effort referred to as “Zero Trust.” Innovations in this area pertain to the discovery of new kinds of high-speed machine learning and data interpretation and analysis.
Some of the challenges associated with Zero Trust, Nelson explained, relate to the labeling of data. It is often said that an AI system is only as effective as its database, so the more “verified” information it can draw upon across a wide sphere of necessary areas, the more reliable its output becomes.
“In a lot of cases the limiting factor comes down to the data labeling aspects. In many cases you might want Zero Trust today, but if the data isn’t labeled or usable in that format, converting it or getting it into a usable format may be insurmountable,” Nelson said.
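A simple way to picture that limiting factor is a gate that checks whether incoming records carry the labels a pipeline needs before they are ingested. The sketch below is purely illustrative; the label names and record layout are assumptions made for the example, not any Army data standard.

```python
# Hypothetical sketch of a data-labeling gate: records lacking the labels an AI
# pipeline needs are set aside for conversion or human tagging rather than ingested.
REQUIRED_LABELS = {"sensor_type", "timestamp", "location", "classification_tag"}

def triage_records(records: list) -> tuple:
    """Split incoming records into usable (fully labeled) and unusable (needs labeling)."""
    usable, needs_labeling = [], []
    for record in records:
        missing = REQUIRED_LABELS - set(record)
        if missing:
            record["missing_labels"] = sorted(missing)
            needs_labeling.append(record)
        else:
            usable.append(record)
    return usable, needs_labeling

# Example: only the first record is ready for the analytic pipeline.
usable, backlog = triage_records([
    {"sensor_type": "EO", "timestamp": 1700000000, "location": (34.1, 45.2),
     "classification_tag": "vehicle"},
    {"sensor_type": "RF", "timestamp": 1700000042},   # location and class unlabeled
])
```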
Once an AI-enabled system receives and organizes new incoming data, it must perform “analytics” to bounce it off of a seemingly limitless pool of verified information. The challenge then becomes: what happens when an AI-capable system receives input or information which is “not” part of its database? Can it determine meaning and perform the necessary analytics based on “context” or surrounding words? Additional variables or related pieces of information? This is the cutting edge of AI, an area of testing and exploration which continues to receive a lot of attention at the Pentagon and from its industry and academic partners.
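One common way to frame that problem is a confidence threshold: if nothing in the verified library matches well enough, the system falls back to contextual cues or defers to a human analyst rather than forcing an answer. The following is a toy sketch under that assumption; the similarity measure, threshold and context fields are invented for illustration.

```python
# Illustrative only: library matching with a confidence threshold and a contextual
# fallback when an observation is not part of the verified database.
def similarity(a: list, b: list) -> float:
    """Toy cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

def identify(observation: dict, library: dict, threshold: float = 0.85) -> str:
    """Return a library identification if confident; otherwise reason from context."""
    scores = {label: similarity(observation["signature"], reference)
              for label, reference in library.items()}
    best_label, best_score = max(scores.items(), key=lambda kv: kv[1])
    if best_score >= threshold:
        return best_label                          # known, verified match
    # Input is outside the database: fall back to surrounding context.
    context = observation.get("context", {})
    if context.get("tracked_vehicle") and context.get("near_known_armor_column"):
        return "probable armored vehicle (contextual inference, low confidence)"
    return "unknown - flag for human analyst review"
```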
Also, as Nelson noted, there can be “format” challenges involving time-sensitive or combat-sensitive information arriving through different transport layers. For instance, what if one set of critical data arrives from a GPS signal, another from an RF system or datalink, and yet a third arrives through optical or other wireless means? As the former director of the Army Futures Command Position, Navigation and Timing Cross-Functional Team, Nelson is quite familiar with the need to organize, analyze and transmit time-sensitive data, a task which increasingly requires “gateway” systems to essentially “translate” incoming data from different formats. Gateway systems, which Nelson said are being worked on extensively by Army Futures Command and its partners, are intended to aggregate, organize and analyze otherwise disparate pools of incoming data to create an integrated picture. Gateways can work by using common technical standards or IP protocols to enable interoperability between otherwise incompatible systems.
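Conceptually, a gateway of this kind is a set of translators feeding one common schema. The sketch below is a hypothetical illustration of that idea; the message fields and transport names are assumptions, not a description of any fielded Army gateway.

```python
# Hypothetical gateway sketch: messages arriving over different transport layers
# are translated into one common record format so downstream tools can fuse them.
def from_gps(msg: dict) -> dict:
    return {"source": "GPS", "time": msg["utc"], "position": (msg["lat"], msg["lon"])}

def from_rf_datalink(msg: dict) -> dict:
    return {"source": "RF", "time": msg["t"], "position": tuple(msg["coords"])}

def from_optical(msg: dict) -> dict:
    return {"source": "optical", "time": msg["frame_time"], "position": msg["geo_fix"]}

TRANSLATORS = {"gps": from_gps, "rf": from_rf_datalink, "optical": from_optical}

def gateway(messages: list) -> list:
    """Translate each (transport, payload) pair into the common schema; skip unknowns."""
    common = []
    for transport, payload in messages:
        translator = TRANSLATORS.get(transport)
        if translator:
            common.append(translator(payload))
    return sorted(common, key=lambda r: r["time"])  # one time-ordered, integrated picture
```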
Such a challenge raises the question of just to what extent AI-capable systems can obtain an “integrated,” “complete,” or “accurate” picture from seemingly disaggregated data streams. To what extent can AI-empowered systems analyze a host of different “variables” in relation to one another, or to a complete whole?
The Pentagon, Army Futures Command and the other services are working on this by seeking to craft AI systems which can arrive at a holistic, integrated picture from a range of nuanced, complex, yet interwoven indicators, variables or pools of data. For example, an adversary might know that AI-enabled surveillance can quickly bounce video images of a certain tank or armored vehicle against an established data library to make an instant determination or identification of a specific target. An AI-capable system, for instance, can easily discern the difference between a T-72, T-90 or T-14 Armata enemy tank. However, an adversary might know that AI-enabled ISR is capable of this and therefore seek to “spoof” the algorithm with something as simple as placing a “poster” or other obscurant on top of the vehicle to “confuse” the algorithm tasked with identifying it. This potential scenario was explained to Warrior by one of the Army’s key industry partners working on new generations of AI.

In response to these kinds of challenges, the Pentagon and its industry partners are exploring AI systems capable of analyzing a host of different variables and indicators in relation to one another. For instance, in the event that an enemy tank is covered by a poster, the AI-capable system might analyze a “heat signature,” surrounding vehicles, or other factors such as terrain or previously verified activity. Cutting-edge applications of AI are now doing this, essentially assessing a variety of factors in relation to one another to remain “accurate” and complete in the event that some indicators are spoofed or simply incorrect. This can help ensure accuracy and reliability, something which more closely approaches the goal of Zero Trust.
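One way to think about that cross-checking is as weighted evidence fusion, in which no single indicator can flip the identification on its own. The sketch below is a deliberately simplified, hypothetical illustration; the indicator names and weights are assumptions made for the example, not a description of any fielded system.

```python
# Illustrative sketch of weighing several independent indicators so a single spoofed
# cue (e.g. a poster over a tank) cannot flip the identification on its own.
INDICATOR_WEIGHTS = {
    "visual_match": 0.4,       # image bounced against the target library
    "thermal_signature": 0.3,  # engine and exhaust heat pattern
    "terrain_context": 0.15,   # is this plausible ground for armor?
    "prior_activity": 0.15,    # previously verified movement in the area
}

def fused_confidence(indicators: dict) -> float:
    """Combine per-indicator scores (0.0 to 1.0) into one weighted confidence value."""
    return sum(INDICATOR_WEIGHTS[name] * score
               for name, score in indicators.items() if name in INDICATOR_WEIGHTS)

# The visual channel is spoofed (poster over the hull), but the other cues still
# keep the fused score high enough that the track is not dismissed.
score = fused_confidence({
    "visual_match": 0.1,
    "thermal_signature": 0.9,
    "terrain_context": 0.8,
    "prior_activity": 0.7,
})
print(f"fused confidence: {score:.2f}")   # roughly 0.54 despite the spoofed visual cue
```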
AI is already extremely accurate and fast in a variety of ways, and improving its accuracy through these kinds of initiatives is how the Army and the other services seek to engineer AI systems capable of analyzing more subjective phenomena and scenarios less likely to lend themselves to clear mathematical analysis. Can a mathematically determined computer system discern emotional sensibilities, ethical parameters, philosophical variations or elements of human consciousness and decision-making? This, according to Army Futures Command, its industry partners and fellow-service partners such as the Air Force Research Laboratory, is the cutting edge of AI.
“AI today is very database intensive. But what can and should it be in the future? How can we graduate it from just being a database to something that can leverage concepts and relationships, or emotions and predictive analyses? And so there’s so much more that we as humans can do that AI cannot? How do we get to that?,” Maj. Gen. Heather Pringle, former Commanding General of the Air Force Research Lab, told Warrior in an interview earlier this year.
Kris Osborn is President of Warrior Maven – Center for Military Modernization. Osborn previously served at the Pentagon as a Highly Qualified Expert with the Office of the Assistant Secretary of the Army (Acquisition, Logistics & Technology). Osborn has also worked as an anchor and on-air military specialist at national TV networks. He has appeared as a guest military expert on Fox News, MSNBC, The Military Channel and The History Channel. He also has a master’s degree in Comparative Literature from Columbia University.