By Kris Osborn, President, Center for Military Modernization
In a recent interview on CNN, former President Barack Obama was asked about the future of AI and the philosophical, technological and ethical variables now dominating discussion as the technology explodes and many consider its implications. While emphasizing that AI continues to bring paradigm-changing innovations to the world, he used succinct language to sum up what is perhaps the most significant complication or challenge when it comes to the application of AI: “Machines can’t feel joy,” he told CNN.
Obama said this in the context of describing how the advent and rapid arrival of new applications of AI continue to change things, bringing seemingly limitless new promise while also introducing challenges and complexities. He was quick to praise the merits of AI in his discussion with CNN, but he also noted its limitations, given that uniquely human attributes such as emotion, devotion and other more subjective phenomena cannot be approximated by machines. True enough, and while defense industry innovators and critical Pentagon institutions such as the Air Force Research Laboratory are making progress exploring ways AI can estimate, calculate or analyze more subjective phenomena, there are clearly many variables unique to human cognition, intuition, psychological nuance, ethics, consciousness and emotion which mathematically generated algorithms simply cannot replicate or even begin to truly approximate. This is why leading weapons developers are quick to explain that any optimal path forward involves a blending of human and machine, captured by what could be called the Pentagon’s favorite term: “manned-unmanned teaming.”
This, however, does not mean the merits and possibilities of AI should be underestimated; senior researchers with the Army Research Laboratory have explained that “we are at the tip of the iceberg” in terms of what AI can truly accomplish. This is why the Pentagon is measuring the rapid success and promise of AI in the context of non-lethal defensive force. The combination of human decision-making faculties with the speed and analytical power of AI-enabled computing is already creating paradigm-changing innovations. Imagine how many lives a defensive AI-enabled weapons system could save. AI is also already massively shortening the sensor-to-shooter timeline in key modern warfare experiments such as the Army’s Project Convergence.
These complexities are the main reason there continue to be so many technological efforts to improve the “reliability” of AI-generated analysis, so that, through machine learning and real-time analytics, machines can determine context and accurately process new material that might not be part of their databases. This, as described to Warrior by former Air Force Research Laboratory Commander Maj. Gen. Heather Pringle, is the cutting-edge new frontier of AI.
“AI today is very database intensive. But what can and should it be in the future? How can we graduate it from just being a database to something that can leverage concepts and relationships, or emotions and predictive analyses? And so there’s so much more that we as humans can do that AI cannot. How do we get to that?” Maj. Gen. Heather Pringle, former Commanding General of the Air Force Research Lab, told Warrior in an interview earlier this year.
“Out of the Loop AI” Can Save Lives
Should something like a swarm of explosive mini-drones close in for an attack, or a salvo of incoming hypersonic missiles approach at five times the speed of sound, human decision-makers simply might not be able to respond quickly enough. In fact, military commanders may not get any chance to counterattack or determine the best course of defensive action.
Not only would there not be time for a human decision-maker to weigh the threat variables, but weapons operators themselves may simply be too overwhelmed to detect, track, engage or fire upon high-speed simultaneous attacks even if they received orders. There simply is not time.
Human in the Loop
The advent and rapid maturation of artificial intelligence for military technology, weapons and high-speed computing has many asking a pressing and pertinent question: just how soon until there is a “Terminator”-type armed robot able to autonomously find, track and destroy targets without any human intervention?
The answer is that, in certain respects, that technology is already here. However, there are a host of complex conceptual, technological, philosophical and policy variables to consider. Tele-operated armed robots have existed and even gone to war for many years now, meaning weapons systems remotely controlled by a human being, with machines making no decisions or determinations about lethal force. This is fully aligned with current and longstanding Pentagon doctrine, which says there must always be a human in the loop when it comes to decisions regarding the use of lethal force. What about non-lethal force? This cutting-edge question is now very much on the Pentagon’s radar, given the rapid maturation of AI-empowered decision-making, analytics and data organization.
Essentially, could an AI-enabled system that aggregates and analyzes otherwise disparate pools of incoming sensor data accurately discern the difference between lethal and non-lethal force? Could AI-enabled interceptors be used for drone defense, or as a method of instantly taking out incoming enemy rockets, drones, artillery or mortars?
“Right now we don’t have the authority to have a human out of the loop,” Col. Marc E. Pelini, the division chief for capabilities and requirements within the Joint Counter-Unmanned Aircraft Systems Office, said during a 2021 teleconference, according to a Pentagon report published last year. “Based on the existing Department of Defense policy, you have to have a human within the decision cycle at some point to authorize the engagement.”
However, is the combination of high-speed, AI-enabled computing and sensor-to-shooter connectivity, coupled with the speed and scope of emerging threats, beginning to impact this equation? There may indeed be some tactical circumstances wherein it is both ethical and extremely advantageous to deploy autonomous systems able to track and intercept approaching threats in seconds, if not milliseconds.
In the Pentagon report, Pelini explained that there is now an emerging area of discussion pertaining to the extent to which AI might enable “in-the-loop” or “out-of-the-loop” human decision-making, particularly in light of threats such as drone swarms.
The level of precision and analytical fidelity AI now makes possible is, at least to some extent, inspiring the Pentagon to consider the question.
Advanced algorithms, provided they are loaded with the requisite data and enabled by machine learning and the analytics necessary to make discernments, are now able to process, interpret and successfully analyze massive and varied volumes of information.
Complex algorithms can simultaneously analyze a host of otherwise disconnected variables, such as the shape, speed and contours of an enemy object, as well as its thermal and acoustic signatures. Algorithms can now also assess these interwoven variables in relation to the surrounding environment, geographical conditions, weather, terrain and data on historical instances where certain threats were engaged with specific shooters, interceptors or countermeasures. AI-enabled machines are increasingly able to analyze all of this collectively and determine which response might be optimal or best suited for a particular threat scenario.
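To make that idea concrete, below is a minimal, purely illustrative sketch of this kind of multi-variable fusion in Python. The track attributes, thresholds and labels are invented assumptions for the example and do not describe any fielded system.

```python
# Hypothetical illustration only: the fields, thresholds and labels are invented,
# not drawn from any real Pentagon system.
from dataclasses import dataclass

@dataclass
class SensorTrack:
    speed_mps: float      # measured speed of the object, meters per second
    length_m: float       # estimated physical length
    thermal_kw: float     # crude thermal-signature proxy
    acoustic_db: float    # crude acoustic-signature proxy

def classify_track(track: SensorTrack) -> str:
    """Fuse several otherwise disconnected variables into one coarse label."""
    # Each rule looks at a different signature; together they narrow the options.
    if track.speed_mps > 1700:                  # roughly Mach 5 at sea level
        return "possible hypersonic missile"
    if track.speed_mps < 60 and track.length_m < 2 and track.acoustic_db < 70:
        return "possible small drone"
    if track.thermal_kw > 500 and track.speed_mps < 30:
        return "possible ground vehicle"
    return "unknown - flag for human review"

if __name__ == "__main__":
    print(classify_track(SensorTrack(speed_mps=45, length_m=1.2,
                                     thermal_kw=3, acoustic_db=55)))
```

A real system would replace these hand-written rules with trained models, but the structure, many weak signals fused into one recommendation a human can review, is the point being described above.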
Can AI-enabled machines make these determinations in milliseconds in a way that could massively save lives in war? That is the possibility now being evaluated, in both a conceptual and a technological sense, by a group of thinkers, weapons developers and futurists exploring what’s called an “out of the loop” possibility for weapons and autonomy. This is quite interesting, as it poses the question of whether an AI-enabled or autonomous weapons system should be able to fire, shoot or employ force in a “non-lethal” circumstance.
Of course there is no current effort to “change” the Pentagon’s doctrine, but rather an exploration, as the time window to defend forces and deploy countermeasures can become exponentially shorter in a way that could save lives should US forces come under attack. Technologically speaking, the ability is, at least to some degree, already here, yet that does not resolve certain ethical, tactical and doctrinal questions which accompany this kind of contingency.
One of the country’s leading experts on AI and cybersecurity, a former senior Pentagon expert, says these are complex, nuanced and extremely difficult questions.
“I think this is as much a philosophical question as it is a technological one. From a technology perspective, we absolutely can get there. From a philosophical point of view, and how much do we trust the underlying machines, I think that can still be an open discussion. So a defensive system that is intercepting drones, if it has a surface-to-air missile component, that’s still lethal force if it misfires or misidentifies. I think we have to be very cautious in how we deem defensive systems non-lethal systems if there is the possibility that it misfires, as a defensive system could still lead to death. When it comes to defensive applications, there’s a strong case to be made there … but I think we really need to look at what action is being applied to those defensive systems before we go too [far] out of the loop,” Ross Rustici, former East Asia Cyber Security Lead, Department of Defense, told Warrior in an interview.
Rustici further elaborated that in the case of “jamming” or some other non-kinetic countermeasure which would not injure or harm people if it misfired or malfunctioned, it is indeed much more efficient to use AI and computer automation. However, in the case of lethal force, there are still many reliability questions when it comes to fully “trusting” AI-enabled machines to make determinations.
“The things that I want to see going forward is having some more built in error handling so that when a sensor is degraded, when there are questions of the reliability of information, you have that visibility as a human to make the decision. Right now there is a risk of having data which is corrupted, undermined or just incomplete being fed to a person who is used to overly relying on those systems. Errors can still be introduced into the system that way. I think that it’s very correct to try to keep that human-machine interface separated and have the human be a little skeptical of the technology to make sure mistakes don’t happen to the best of our ability,” Rustici explained.
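A rough sketch of the kind of built-in error handling Rustici describes might look like the following. The field names, thresholds and action names are hypothetical, chosen only to illustrate routing doubtful or lethal recommendations back to a human operator.

```python
# Hypothetical sketch: route machine recommendations so that degraded data or
# kinetic options always come back to a human. Names and thresholds are invented.
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str            # e.g. "jam" or "kinetic_intercept"
    confidence: float      # model confidence, 0..1
    sensor_health: float   # fraction of contributing sensors reporting healthy

def release_decision(rec: Recommendation) -> str:
    """Auto-execute only non-kinetic actions, and only on trustworthy data."""
    if rec.sensor_health < 0.8 or rec.confidence < 0.9:
        # Degraded or incomplete data: surface the doubt to the operator.
        return f"HOLD - human review required ({rec.action}, conf={rec.confidence:.2f})"
    if rec.action in {"jam", "dazzle"}:         # non-kinetic countermeasures
        return f"AUTO - execute {rec.action}"
    return f"HOLD - kinetic option, human must authorize {rec.action}"

print(release_decision(Recommendation("jam", confidence=0.95, sensor_health=0.7)))
# -> HOLD - human review required (jam, conf=0.95)
```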
This has been and continues to be a pressing, long-term doctrinal question, particularly given the pace of technological advancement in autonomy and AI. As far back as 2009, the Army was developing an armed robot called the MULE, or Multifunctional Utility/Logistics and Equipment vehicle. The platform was a 10-foot rectangular robot armed with Javelin anti-tank missiles, developed years ago for the Army’s previous Future Combat Systems program.
The program was canceled, but advanced weapons developers at the time made a special effort to clearly reiterate doctrine regarding the application of lethal force. Even then, the robot was being developed to use an Autonomous Navigation System, or ANS, to track, find, target and destroy enemy tanks without “needing” human intervention to perform the task. For this reason, the Army made a special effort to reaffirm its critical “human-in-the-loop” doctrinal requirement.
The technology has certainly evolved since then, yet despite the promise and fast-improving performance of AI-enabled systems, there simply seem to be too many ineffable or incalculable attributes of human cognition for mathematically oriented machines to accurately capture. There are subjective phenomena unique to human cognition which it does not seem machines will be able to replicate. What about emotion? Intuition? Imagination? Certain conceptual nuances, variations and ambiguities of meaning in language?
While advanced algorithms are now making great progress in assessing multiple variables in relation to one another and, for instance, discerning from context and surrounding language whether a “ball” refers to a football or a formal dance, that does not mean machines can truly replicate human “consciousness.” There are many elements of human cognitive phenomena, some not yet fully understood, that machines cannot simulate, replicate or accurately “capture” to any large extent.
“At this point, I don’t think anybody trusts the machines to get it 100% right, 100% of the time. I think it is the right call to always have a human in the loop; we’re always going to be very dependent upon the data that’s being fed,” Rustici said.
This predicament is part of why, despite the promise of AI, there is also some cause for pause or hesitation when it comes to the reliability of advanced algorithms. For instance, an AI-enabled machine is said to be only as effective as its database, so what happens if a machine encounters something entirely unfamiliar or simply “not” in its database? Military and industry experts describe this concern with the term “zero trust,” referring to the extent to which algorithmic determinations might not reach the requisite level of “reliability” to be fully trusted, especially when it comes to the use of lethal force.
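One toy way to picture the “not in its database” problem is an open-set check that refuses to classify anything too far from the signatures it already knows. The reference values and distance threshold below are invented purely for illustration.

```python
# Toy sketch of flagging inputs that fall outside the known database; the
# reference signatures and threshold are made up for this example.
import math

KNOWN_SIGNATURES = {           # hypothetical (speed m/s, length m) references
    "quadcopter": (20.0, 0.5),
    "cruise_missile": (250.0, 6.0),
    "helicopter": (70.0, 15.0),
}

def identify(speed: float, length: float, max_distance: float = 50.0) -> str:
    """Return the closest known class, or refuse when nothing is close enough."""
    best, best_d = None, float("inf")
    for name, (s, l) in KNOWN_SIGNATURES.items():
        d = math.hypot(speed - s, (length - l) * 10)   # crude scaled distance
        if d < best_d:
            best, best_d = name, d
    if best_d > max_distance:
        return "NOT IN DATABASE - do not trust, escalate to a human"
    return best

print(identify(speed=22, length=0.6))    # close to a known drone profile
print(identify(speed=900, length=2.0))   # nothing similar in the database
```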
These questions are very much on the minds of computer scientists at the Air Force Research Laboratory, who are exploring the extent to which AI-capable systems can begin to accurately analyze more subjective phenomena.
Can AI-capable systems, supported by machine learning, perform close to real-time analytics on data or incoming information they might not recognize? This is where the question of “collective” AI comes in, something of great relevance to drone defenses or any kind of defensive countermeasure. Machines can increasingly determine elements of context and, for instance, look in a more holistic fashion at a range of factors in relation to one another.
For example, industry experts have raised the interesting question of whether an AI-enabled sensor or reconnaissance system of some kind would still be able to positively identify an enemy “tank” if it were obscured or hidden by a large poster, trees or some other deception technique intended to elude sensor detection.
Advanced development of AI is heading in this direction, as described in the case of drone defenses or the examination of multiple variables in relation to one another. If, for instance, a tank were hidden or obscured, perhaps an AI-capable system could integrate heat-signature data, track marks or terrain features to make an identification based on a holistic or collective assessment of a complete picture comprised of many variables. Can algorithms and data sets be “tagged” and “trained” to make more subjective kinds of determinations using advanced machine learning techniques and high-speed analytics?
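A hedged sketch of that kind of collective assessment follows: no single cue confirms the hidden tank, but several weak cues, weighted and summed, can cross a decision threshold. The cue names and weights are assumptions made up for the example, not values from any real sensor-fusion system.

```python
# Invented weights for illustration; a real system would learn these from data.
EVIDENCE_WEIGHTS = {
    "optical_match": 0.50,      # direct visual recognition (defeated by a decoy or poster)
    "thermal_signature": 0.25,
    "track_marks": 0.15,
    "terrain_plausible": 0.10,
}

def fused_confidence(cues: dict[str, bool]) -> float:
    """Sum the weights of the cues that are present; returns a 0..1 pseudo-confidence."""
    return sum(w for name, w in EVIDENCE_WEIGHTS.items() if cues.get(name))

# Optical recognition is defeated by the poster, but the other cues still add up.
cues = {"optical_match": False, "thermal_signature": True,
        "track_marks": True, "terrain_plausible": True}
score = fused_confidence(cues)
print(f"tank likelihood {score:.2f} ->",
      "probable tank, refer to human" if score >= 0.4 else "insufficient evidence")
```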
Layering
One possible solution to these questions, or at least a promising area of inquiry, may well be what Rustici called “layering.”
“When you’re looking at most of these military applications, you essentially want to layer on as much information as [possible], but the data itself is often discrete and differentiated. So if you’re looking at an anti-drone system, you’re going to need some sort of sensor that likely can do optical recognition of the drone itself, you’re likely going to have to do some type of sensing around heat signatures, or radio signal detection to try to identify how it’s being run,” Rustici said.
When it comes to drone defenses, for example, an AI-capable system might draw upon threat data, atmospheric data, geographical data and other indicators such as the thermal or acoustic signature of an object. As part of this, an AI database could be loaded with historical detail on previous combat scenarios wherein certain defenses were used against certain threats. Drawing upon all of this data and analyzing a host of variables in relation to one another, an AI-empowered system could identify an “optimal” course of action for human decision-makers to consider. Perhaps foggy weather means a laser countermeasure might not work due to beam attenuation. Perhaps a threat is approaching over an urban area where a kinetic explosion might injure civilians with debris and fragmentation, so a non-kinetic method of defense is recommended.
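Very roughly, that environment-aware down-select could be sketched as a few rules that prune countermeasure options before a human chooses. The option names, visibility threshold and urban-area rule below are illustrative assumptions, not the logic of any actual decision aid.

```python
# Hypothetical rules for pruning countermeasure options; values are invented.
def recommend_countermeasure(visibility_km: float, over_urban_area: bool) -> str:
    options = ["laser", "kinetic_interceptor", "rf_jamming"]
    if visibility_km < 2.0:             # fog or haze attenuates the beam
        options.remove("laser")
    if over_urban_area:                 # debris and fragmentation risk to civilians
        options.remove("kinetic_interceptor")
    # Whatever survives is offered to the human decision-maker, best option first.
    return f"recommend: {options[0]}" if options else "no safe option - human decides"

print(recommend_countermeasure(visibility_km=1.0, over_urban_area=True))
# -> recommend: rf_jamming
```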
This kind of sensor-to-shooter pairing has already been demonstrated by the Army’s AI-capable computer system called “Firestorm,” which used aggregated pools of otherwise disparate incoming sensor data to “pair” sensors to shooters in a matter of seconds and make recommendations to human decision-makers. This has been demonstrated several times during the Army’s Project Convergence experiments, designed to prepare the Army to fight at the speed of relevance. Through these experiments, which began in 2020, the Army has shortened the sensor-to-shooter timeline from 20 minutes down to 20 seconds by combining high-speed, AI-capable algorithms with human decision-making.
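For illustration only, a toy pairing routine might filter available shooters by suitability and range and hand a ranked list to a human for approval. The threat types, ranges and shooter names below are invented and are not a description of Firestorm.

```python
# Invented threat and shooter data; the pairing logic is a teaching sketch only.
THREAT = {"type": "small_drone", "distance_km": 4.0}

SHOOTERS = [
    {"name": "SHORAD_battery", "max_range_km": 8.0,  "suitable": {"small_drone", "helicopter"}},
    {"name": "howitzer",       "max_range_km": 30.0, "suitable": {"armor", "artillery"}},
    {"name": "jammer_team",    "max_range_km": 5.0,  "suitable": {"small_drone"}},
]

def pair(threat: dict, shooters: list[dict]) -> list[str]:
    """Return shooters that can actually engage this threat, tightest range fit first."""
    viable = [s for s in shooters
              if threat["type"] in s["suitable"]
              and s["max_range_km"] >= threat["distance_km"]]
    # Prefer the tightest range match so longer-range assets stay free for other targets.
    viable.sort(key=lambda s: s["max_range_km"])
    return [s["name"] for s in viable]

print(pair(THREAT, SHOOTERS))   # a recommendation list a human still has to approve
```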
Manned-Unmanned Teaming
All of these variables are key reasons why senior Pentagon weapons developers and scientists continue to emphasize the importance of manned-unmanned teaming, or the “human-machine interface.” A blended strategy of this kind makes sense, and could even be described as optimal, as it combines the high-speed data organization, analysis and processing capability of AI-enabled machines with the variables, characteristics and attributes unique to human cognition.
This is why, for example, the Air Force is testing human pilots flying missions with a computerized, AI-capable “co-pilot.” Machines will do certain things much better and much faster, yet human decision-making is needed to make the more nuanced determinations amid dynamic, fast-changing warfare circumstances and to perform critical verifications in the event an algorithm is unable to reliably make vital identifications and distinctions or, perhaps of even greater consequence, is “spoofed,” jammed or simply fed false information by an adversary.
Several years ago, the Air Force made history by, for the first time ever, flying a piloted U-2 spy plane with an AI-enabled computer co-pilot. The AI algorithm, called ARTUu, flew along with a human pilot on a U-2 Dragon Lady spy plane, performing tasks that would “otherwise be done by a pilot,” an Air Force report explained. “ARTUu’s primary responsibility was finding enemy launchers while the pilot was on the lookout for threatening aircraft, both sharing the U-2’s radar,” the Air Force report said.
All of this eventually leads to a final, and perhaps most consequential, question: if a “Terminator”-type robot can be used in war, should it be? Certainly the Pentagon views these questions through an ethical and moral lens, but there is no guarantee an enemy will do the same. Therefore, soldiers may need to prepare to fight autonomous robots, one reason why range, networking and the use of unmanned systems are so heavily emphasized in modern Combined Arms Maneuver.
“I would say, as we continue to evolve from a systems perspective, we’re going to have several moral questions on what we should deploy and how we should deploy it. It is not purely a question of ‘can we’ at this point; it’s ‘how best to,’ and I think that’s going to be the hardest thing for us to wrestle with from both a technology and a policy perspective,” Rustici said. “What am I comfortable doing with a conscience as a person that a machine is just never going to get to in detail … and that’s really the place where you have ambiguity and you need to keep that human in that particular loop.”
Kris Osborn is President of Warrior Maven – Center for Military Modernization.
Osborn previously served at the Pentagon as a Highly Qualified Expert with the Office of the Assistant Secretary of the Army—Acquisition, Logistics & Technology. Osborn has also worked as an anchor and on-air military specialist at national TV networks. He has appeared as a guest military expert on Fox News, MSNBC, The Military Channel, and The History Channel. He also has a Master’s Degree in Comparative Literature from Columbia University.