If a forward-operating drone suddenly recognized an entire mechanized column of enemy tanks emerging from wooded areas on the back side of a mountain, and the force was in position to close in on allied ground positions ill-equipped to respond or counterattack, survival and potential victory in war would rest upon several key variables … chief among them, speed.
- How fast could video images of the approaching tanks reach human decision-makers in ground control centers?
- How quickly could the one minute of video showing the emerging tanks be found within hours and hours of drone video feeds?
- How can the tanks be identified and located in terms of movement, terrain, weapons or angle of approach?
- Could the newly arriving sensor information be processed and transmitted quickly enough?
- Should the target detail reach other drones, nearby allied ground forces or even fighter jets in range to launch air attacks on the tanks?
Yes … to all of the above.
Artificial Intelligence & Data Processing
In order to keep forces alive and preserve an opportunity to prevail in war or even counterattack, processed information would need to reach the right places in position to respond … immediately.
Instead of needing to rely solely upon point-to-point connectivity between a drone and a single ground control station to receive and process actionable warfare intelligence, what if armored vehicles, ground artillery, fighter jets or even dismounted forces in position to respond could be informed instantly?
Perhaps they could receive targeting specifics, complete with threat details, navigational information, geographical specifics and data on a target's speed of approach and anticipated attack point?
Instead of slower, more segmented point-to-point information transmission, which is itself subject to significant latency concerns, what if all of the crucial detail needed to stop the attack could instantly be identified, analyzed and networked across an entire force?
An ability to network time-sensitive targeting and intelligence information across a massively dispersed force of integrated combat “nodes” in real time, such as that which would be needed in these kinds of scenarios, is exactly what the Pentagon and U.S. military services are working toward accomplishing with Artificial Intelligence and data processing.
A Sense of Urgency
“Sensor-to-shooter time,” “warfare at the speed of relevance,” improving “situational awareness” and utilizing a “kill web” are all terms used to describe the need for this kind of high-speed, data-driven warfare. The success and promise of AI is why the Pentagon wants to bring it to the edge of war … right now. Immediately.
"Teams will go out within the next 90 days to every single combatant command and start to tie in their data, and they'll also have technical expert teams on AI and they'll start looking at how to bring AI and data to the tactical edge in support of the warfighter," Kathleen Hicks, Deputy Secretary of Defense, said in a Pentagon report.
The Beauty of AI
The task of truly bringing AI to the very edge of war is not without challenges. The unprecedented breakthrough advantages offered by AI, which are already redefining warfare, collide with the sizable and difficult challenge of simply having too much information. How can critical points of relevance, such as time-sensitive warfare decisions, be identified quickly amidst vast, endless volumes of collected ISR data? Hours of video, thousands of images and millions of data points need to be organized, tailored, streamlined and transmitted to the right place, within seconds, for a force to prevail in war.
This is the beauty of AI, as it can use computer processing speed and advanced algorithms to “flag” the crucial five seconds within five hours of video surveillance. Data can be compiled, analyzed, processed and bounced off of and compared against a seemingly limitless database in milliseconds. Using AI and computing speed to perform crucial procedural, organizational and analytical functions can free up human decision-makers to leverage their best attributes.
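The "flagging" idea can be made concrete with a minimal sketch. This is not any fielded system; it assumes an upstream detector has already assigned each second of footage a relevance score (e.g., confidence that armored vehicles appear), and simply pulls out the contiguous high-score windows buried in hours of video.

```python
# Hypothetical sketch: surfacing the few "crucial seconds" inside hours of
# surveillance video, given per-second relevance scores from a detector.

def flag_crucial_segments(scores, threshold=0.8):
    """Return (start, end) second ranges whose scores meet the threshold."""
    segments = []
    start = None
    for t, score in enumerate(scores):
        if score >= threshold and start is None:
            start = t                      # a high-relevance run begins
        elif score < threshold and start is not None:
            segments.append((start, t))    # the run ends; record it
            start = None
    if start is not None:
        segments.append((start, len(scores)))
    return segments

# Toy score stream standing in for hours of footage: only seconds 3-5
# show the emerging tank column.
scores = [0.1, 0.2, 0.1, 0.95, 0.97, 0.92, 0.3, 0.1]
print(flag_crucial_segments(scores))   # [(3, 6)]
```

In practice the scores would come from a trained vision model and the windows would be cued to a human analyst, but the payoff is the same: seconds of review instead of hours.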
“Customers are overwhelmed by data today…we have to help them get out of the data loop. And on this theme, the thing that the human being does do well, and that's using intuition, that kind of advanced thinking, to figure out what to do next, you need to make sure they're solidly in that domain, and not having to do with what machines can do,” Jim Wright, Technical Director, for ISR Systems, Raytheon, told Warrior Maven in an interview.
Processing Exploitation and Dissemination (PED)
The process of gathering, organizing, analyzing and transmitting new sensor data, often referred to as Processing, Exploitation and Dissemination (PED), has taken on new significance as AI-enabled information sharing continues to expand across platforms and domains, massively shortening “sensor-to-shooter” time and helping attackers stay ahead of enemy decisions. Doing this effectively requires an ability to perform high-speed data organization, something which is itself being enabled by AI.
An emerging Raytheon technology called Cognitive Aids to Sensor Processing, Exploitation and Response, CASPER™, is intended to help with this as it employs advanced algorithms to process and integrate otherwise disparate pools of incoming sensor data, and properly categorize, prioritize and streamline information as needed.
“Much like talking to Alexa or Siri, an operator tells CASPER to scan for fast boats and prioritize by threat to the carrier,” Wright said. “CASPER then takes control of sensor functions, rapidly identifies which boats are threats based on things like their appearance and behavior over space and time, and provides the operator with the threat list and recommended courses of action.”
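CASPER itself is proprietary, so the following is only a generic sketch of the behavior Wright describes: scoring detected boat tracks by behavior and returning a prioritized threat list. All of the field names (speed_kts, closing, range_nm) and the scoring weights are assumptions invented for the example.

```python
# Illustrative only: rank small-boat tracks by a simple threat heuristic,
# the way an operator might ask for "fast boats, prioritized by threat."

def prioritize_threats(tracks, max_range_nm=20.0):
    """Rank boat tracks: fast, closing, and near scores highest."""
    def threat_score(t):
        closing_bonus = 2.0 if t["closing"] else 0.0
        proximity = max(0.0, (max_range_nm - t["range_nm"]) / max_range_nm)
        return t["speed_kts"] / 50.0 + closing_bonus + proximity
    return sorted(tracks, key=threat_score, reverse=True)

tracks = [
    {"id": "boat-1", "speed_kts": 8,  "closing": False, "range_nm": 15},
    {"id": "boat-2", "speed_kts": 42, "closing": True,  "range_nm": 4},
    {"id": "boat-3", "speed_kts": 35, "closing": True,  "range_nm": 12},
]
ranked = prioritize_threats(tracks)
print([t["id"] for t in ranked])   # ['boat-2', 'boat-3', 'boat-1']
```

A real system would fuse appearance and behavior over space and time, as Wright notes, rather than a handful of hand-picked weights; the point is only that the output is an ordered threat list an operator can act on.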
Raytheon developers said CASPER's automation-focused data processing software is, among other things, being applied to Multi-Spectral Targeting Systems now operating EO-IR sensing on drones and other combat platforms.
“The CASPER system is our deployment of intelligence within individual sensor systems, and then there's another layer above that, where we automate ISR at the edge on platforms with multiple sensors,” Wright said.
Having all the information in the world available is essentially useless if it cannot be accurately processed, identified, organized and transmitted. Most of all, the arriving information has to be accurate, which is why the Pentagon and several of its industry partners are taking technical steps to improve the reliability and resiliency of AI.
“AI systems are trained and they're limited within the scope of their training. They're only as effective as the database they're operating with. How do you get enough data and train these algorithms with the right data so they can do the job you want them to do?” Wright said.
This is where the concept of zero-trust or reliable AI comes in, as an AI database, no matter how vast, is only as effective as the data it is trained upon. This means that should an AI-empowered system come across something it has not seen before, with no defined basis from which to make a comparison or assessment, its advanced algorithms could be confused or prone to error.
“This has been the subject of DARPA research and other research that we are looking carefully at, where the AI says ‘I've never seen this before, I'm not going to try to classify it. That's where you make a real mistake.’ So having your systems be smart enough to recognize that the data presented to it is not within its training set,” Wright said.
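The "I've never seen this before" behavior Wright describes can be sketched as a classifier that abstains instead of guessing. This is a minimal toy, assuming invented 2-D feature vectors and an arbitrary distance threshold; real systems use far richer out-of-distribution tests.

```python
# Minimal sketch: refuse to classify inputs that sit too far from
# anything in the training data, rather than forcing a wrong label.
import math

def classify_or_abstain(sample, training_set, max_distance=1.0):
    """Return the nearest known label, or None if nothing is close enough."""
    best_label, best_dist = None, float("inf")
    for features, label in training_set:
        dist = math.dist(sample, features)   # Euclidean distance in feature space
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label if best_dist <= max_distance else None

training_set = [((1.0, 1.0), "tank"), ((8.0, 8.0), "truck")]
print(classify_or_abstain((1.2, 0.9), training_set))   # tank (close match)
print(classify_or_abstain((4.5, 4.5), training_set))   # None (outside training set)
```

Returning None instead of a low-confidence label is exactly the design tradeoff the article goes on to discuss: abstaining avoids a dangerous misclassification, but leaves the operator with an "unidentified" object.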
Much of this pertains to analytics and performing a contextualized integration of new data as it arrives. For example, an AI system can discern the meaning of “ball” as in a dance from the meaning of “ball” as in football due to an ability to analyze surrounding words and determine context. However, just how fast can essential elements of machine learning take place? Can some of it happen in “close-to-real” time during the analytics process as new sensor data arrives? That would certainly be the hope, yet perhaps algorithms can merely be trained to avoid incorrectly “classifying” or “identifying” something and simply not analyze what they do not know. That could present complications as well, as data or indicators that are merely “unidentified” could complicate if not derail accurate interpretation.
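The “ball” example boils down to scoring each candidate sense against the surrounding words. The toy below does this with hand-picked context vocabularies; real systems use learned embeddings rather than word lists, and both sense labels and vocabularies here are assumptions for illustration.

```python
# Toy word-sense disambiguation: pick the sense whose context
# vocabulary best overlaps the words surrounding the ambiguous term.

SENSES = {
    "ball/dance": {"gown", "orchestra", "waltz", "formal", "dance"},
    "ball/sport": {"kick", "goal", "football", "throw", "field"},
}

def disambiguate(word, context_words):
    """Return the sense with the largest context-word overlap."""
    overlaps = {
        sense: len(vocab & set(context_words))
        for sense, vocab in SENSES.items()
        if sense.startswith(word + "/")
    }
    return max(overlaps, key=overlaps.get)

print(disambiguate("ball", ["she", "wore", "a", "gown", "to", "the", "dance"]))
# ball/dance
print(disambiguate("ball", ["he", "tried", "to", "kick", "the", "winning", "goal"]))
# ball/sport
```

The analogous question for sensor data is whether this kind of contextual comparison can run fast enough to happen during the analytics process itself, as new data arrives.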
For example, since algorithms are trained to recognize specific things such as armored vehicles and discern them from a host of surrounding variables such as weather, terrain and nearby objects, a potential adversary might simply place a “poster” or board of some kind on top of a tank to “spoof” the AI-capable algorithm, preventing it from identifying the vehicle and causing it to misinterpret data or make an error. The Pentagon and industry are now working on ways to overcome or “counter” these countermeasures.
Developing “resilient AI” is what Pentagon and Raytheon engineers, scientists and weapons developers are working on: a method of building redundancies and error correction into sensor processing systems and improving the precision and accuracy of the analytics process. Yet another strategy involves engineering algorithms increasingly capable of analyzing a host of variables in relation to one another at one time, to generate a more holistic picture, rendering or interpretation not narrowly confined to one specific indicator.
For example, Wright explained that putting small pieces of tape on a stop sign can cause an AI algorithm to incorrectly classify it as something else. In this case, interpretive identification technologies need to be programmed to avoid false identifications and simply indicate an inability to classify.
“How do you get the right data to the right operator at the right time when you're at the battlefield edge? You don't have tons of storage and racks of equipment. You've got one little computer there, and you're on an armored vehicle or a helicopter. There needs to be cognitive data agents running in the network saying, ‘okay, commanders have told us this is our mission, so we need to preposition needed data to this set of vehicles so they can fight through potential network blackouts.’”
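The prepositioning idea in the quote above can be sketched as a constrained packing problem: before a possible network blackout, load the most mission-relevant data that fits a vehicle's small onboard store. The dataset names, sizes, relevance scores and the greedy selection rule are all assumptions invented for this illustration.

```python
# Hedged sketch of a "cognitive data agent": pick which datasets to
# preposition on a vehicle's limited storage ahead of a network blackout.

def preposition(datasets, mission_tags, capacity_mb):
    """Greedily pack the most mission-relevant datasets into limited storage."""
    relevant = [d for d in datasets if d["tag"] in mission_tags]
    # Highest relevance-per-megabyte first, so small useful items win ties.
    relevant.sort(key=lambda d: d["relevance"] / d["size_mb"], reverse=True)
    chosen, used = [], 0
    for d in relevant:
        if used + d["size_mb"] <= capacity_mb:
            chosen.append(d["name"])
            used += d["size_mb"]
    return chosen

datasets = [
    {"name": "terrain_grid",   "tag": "navigation", "size_mb": 40, "relevance": 9},
    {"name": "threat_library", "tag": "targeting",  "size_mb": 30, "relevance": 8},
    {"name": "archive_video",  "tag": "archive",    "size_mb": 90, "relevance": 2},
]
print(preposition(datasets, {"navigation", "targeting"}, capacity_mb=80))
# ['threat_library', 'terrain_grid']
```

The mission tags stand in for the commander's intent in Wright's description: only data matching the declared mission is even considered, and the archive footage never leaves the rear.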
At the same time, while massive increases in sensor ranges, data-sharing and long-range connectivity will continue to bring as-of-yet unprecedented advantages to warfare operations, there are also challenges that emerge as combat becomes more networked. Information needs to be properly scaled, layered and segmented as needed, Wright said. Referring to this phenomenon as creating clusters of embedded intelligence, surveillance and reconnaissance (ISR), a NATO Joint Air Power Competence Center paper warns of the security risks of such “hyper-connectivity.”
New much-longer range sensors and weapons, incorporating emerging iterations of AI, are expected to make warfare more disaggregated and much less of a linear force-on-force type of engagement.
Such a phenomenon, driven by new technology, underscores warfare's reliance upon sensors and information networks. All of this, naturally, requires the expansive “embedded ISR” discussed by the NATO paper. Network-reliant warfare is potentially much more effective in improving targeting and reducing sensor-to-shooter time over long distances, yet it brings a significant need to organize and optimize the vast flow of information such a system requires.
“Not everybody in the network needs to see and hear everything. There needs to be a hierarchy, and a backup architecture for degraded network operations,” the paper states.
Wright described this in terms of a need to properly “layer and scale” data particular to a given set of mission variables, platform technologies and likely items of relevance. The idea is to leverage data processing to exploit, organize and streamline data in the most optimal way for human decision makers. Wright explained the phenomenon in terms of not needing to transport massive servers of data around a combat zone but rather properly streamline necessary available data tailored within needed parameters.
“OMFV [the Optionally Manned Fighting Vehicle] has a mission within the context of other known entities. So I'm going to have a certain amount of AI running inside it that's appropriate to its mission and intent. And I'm only going to give it enough data for it to carry around within its constrained space to achieve that mission,” Wright said.
Kris Osborn is the defense editor for the National Interest. Osborn previously served at the Pentagon as a Highly Qualified Expert with the Office of the Assistant Secretary of the Army—Acquisition, Logistics & Technology. Osborn has also worked as an anchor and on-air military specialist at national TV networks. He has appeared as a guest military expert on Fox News, MSNBC, The Military Channel, and The History Channel. He also has a Master's Degree in Comparative Literature from Columbia University.