It is a never-ending debate: AI-enabled fighter jets at times winning dogfights in simulations against manned fighter jets, balanced against the uniqueness of human cognition. Is one ultimately superior to the other? There is much still unknown or considered mysterious about the human brain, as it is a complex, varied organ capable of mathematical functions as well as many more subjective tasks such as inspiration, feeling, emotion or intention … things much less calculable by mathematically engineered computer algorithms and “1s and 0s.”
However, AI is progressing very quickly and even, in the minds of some expert developers, growing in its ability to measure, catalogue and analyze more subjective variables. The pace of change, AI-oriented scientists with the Army Research Lab have told me, is incredible. Nonetheless, will an inorganic computer ever be able to truly assess less calculable phenomena such as “intention, intuition, more abstract philosophical concepts and the impact of emotion upon human behavior?” … the answer may be no, at least for the near-term future.
Despite the ongoing discussion, and the growing performance of AI-generated systems in simulations, augmented reality and mock dogfights, there is a case to be made that the argument is, at least for the moment, solved. There is an emerging consensus which, one could say, “punts” on the question of which one is ultimately superior and simply says … both are needed together. Long-term, that may be the optimal solution, and it seems to be what Army, Air Force and Navy weapons developers are emphasizing. This being said, there is no question that computers, especially AI-empowered analytical systems, are exponentially faster, better and more efficient than humans, as they can perform millions of organizational, analytical and comparative functions in milliseconds, both saving time and massively increasing efficiency. This is why weapons developers from all services emphasize the use of man-machine teaming to ease the cognitive burden upon human decision makers, save time and enable humans to focus upon the key decisions they are uniquely positioned to make.
For example, Maj. Gen. Ross Coffman, Director of the Next-Generation Combat Vehicle Cross Functional Team, an Army developmental unit now developing robots for future war, told me that the modernization process involving future armed combat requires “a lot more than 1s and 0s.”
Similarly, Army Futures Command Commander Gen. John Murray is also clear that, despite the promise of autonomy and AI-generated information processing and sharing, manned-unmanned teaming and even some unmanned-unmanned teaming will require human operators overseeing, directing or participating in the process with crucial command and control decision-making authority in many respects. It is not an accident that an emerging combat vehicle such as the Army’s “Optionally Manned Fighting Vehicle” is, as Coffman explained, intended to be manned or unmanned as required by mission demands. The same may be true of the Air Force’s 6th-Generation stealth fighter and other fighter jets and Air Force platforms. The same certainly is true when it comes to the fast-expanding fleet of Navy Unmanned Surface Vessels of all sizes, many of which could accommodate sailors, or not, as needed.
This is one reason why, despite the alarming pace at which autonomy and AI-enabled real-time analytics and data organization are progressing, Army futurists are increasingly committed to soldier experimentation with emerging systems. This not only reveals what works and what doesn’t in combat circumstances, but also enables humans to offer perspectives and insights only possible through human cognition. The Army calls them “soldier touch points,” instances where new autonomous systems, weapons and AI-enabled technologies are placed in tactical scenarios alongside humans to optimize combat performance. This means, for example, a forward unmanned reconnaissance vehicle or ammo-carrying robot will quickly perform a wide range of crucial functions in forward combat, yet still take direction from humans as needed.
The Air Force seems to agree.
“We are so multifaceted as human beings, yet machines will be specialists in certain areas,” Air Force Chief Scientist Victoria Coleman told The Mitchell Institute for Aerospace Studies.
Coleman seems to align entirely with Murray and Coffman’s assessments, stressing that more experimentation is needed to further refine tactics, techniques and procedures.
“How do we have humans and machines operate together to get better outcomes? Experimentation is already taking place. The way to deploy more of these teams is through more and more experimentation. If we test a little, we can feel comfortable going to war,” Coleman said.
For instance, Coleman addressed the fast-developing “loyal wingman” phenomenon, a tactical and technological breakthrough which is already underway through coordinated flights between an Air Force Valkyrie drone and F-35s, for example. Continued progress in refining the operational and strategic specifics will further strengthen the advantages introduced by having pilots control multiple drone operations from the cockpit of an aircraft.
Also, the Air Force seems to have already figured out that the best warfighting approach is to simply combine AI and human pilots together for aircraft combat operations. The service recently experimented with efforts to fly a manned jet with an AI-empowered co-pilot. It would not be a surprise to see more of this in the future. Earlier this year, an AI-enabled computer algorithm operated on board a U-2 spy plane while in flight, coordinating navigational details, sensor information and reconnaissance missions alongside a human pilot.
The AI algorithm, called ARTUµ, flew along with a human pilot on a U-2 Dragon Lady spy plane, performing tasks that would “otherwise be done by a pilot,” an Air Force report explained.
“ARTUµ’s primary responsibility was finding enemy launchers while the pilot was on the lookout for threatening aircraft, both sharing the U-2’s radar,” the Air Force report said.
It is described as manned-unmanned teaming, or human-machine interface, a process intended to optimize the best of how computers and humans can perform. The human-computer team flew a reconnaissance mission during a simulated missile strike.
“It is all about experimentation. With software we know how to do that. How do we do that with airplanes? If our goal is to get ahead by fielding rapidly, we need to change our testing infrastructure,” Coleman said.
Perhaps, at least for now, the matter is resolved: AI and the human brain will fight wars together.
Kris Osborn is the defense editor for the National Interest. Osborn previously served at the Pentagon as a Highly Qualified Expert with the Office of the Assistant Secretary of the Army—Acquisition, Logistics & Technology. Osborn has also worked as an anchor and on-air military specialist at national TV networks. He has appeared as a guest military expert on Fox News, MSNBC, The Military Channel, and The History Channel. He also has a Master's Degree in Comparative Literature from Columbia University.