By Kris Osborn - Warrior Maven
The Defense Advanced Research Projects Agency is pursuing an unprecedented machine-learning “breakthrough” technology -- and pioneering a new cybersecurity method intended to thwart multiple attacks at one time and stop newer attacks less recognizable to existing defenses.
A DARPA-led “Lifelong Learning Machines” (L2M) program, intended to massively improve real-time AI and machine learning, rests upon the fundamental premise that certain machine-learning-capable systems might struggle to identify, integrate and organize some kinds of new or complicated yet-to-be-seen information.
“If something new is different enough, the system may fail. This is why I wanted to have some kind of machine learning that learns during experiences. Systems do not know what to do in some situations,” said Hava Siegelmann, DARPA program manager at the Information Innovation Office and Professor of Computer Science at the University of Massachusetts.
The goal of the emerging high-tech program could be explained in terms of immediate “real-time training.” If machines learn even the most difficult or ambiguous things while performing analysis in real time, then, as Siegelmann explains it, “we are not bound to the training set” (previously compiled or stored information). “We put old data and new data all together to retrain the network on all the training data.”
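The “put old data and new data all together” idea can be sketched with a toy example. The nearest-centroid classifier below is a hypothetical stand-in for illustration only, not DARPA's actual L2M method: when a never-before-seen class arrives at runtime, the model is simply retrained over the combined old and new data so it can recognize the new class.

```python
# Toy sketch of retraining on old + new data together (a naive
# continual-learning baseline, NOT DARPA's L2M approach).
# Hypothetical example: a nearest-centroid classifier over 2-D points.

def train_centroids(dataset):
    """Compute one centroid per label from ((x, y), label) pairs."""
    sums, counts = {}, {}
    for (x, y), label in dataset:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {label: (sx / counts[label], sy / counts[label])
            for label, (sx, sy) in sums.items()}

def predict(centroids, point):
    """Return the label of the nearest centroid (squared Euclidean distance)."""
    px, py = point
    return min(centroids,
               key=lambda lbl: (centroids[lbl][0] - px) ** 2
                             + (centroids[lbl][1] - py) ** 2)

# Old training set: two known classes (labels are illustrative).
old_data = [((0.0, 0.0), "tank"), ((1.0, 0.0), "tank"),
            ((10.0, 10.0), "plane"), ((11.0, 10.0), "plane")]

# New, previously unseen class encountered during operation.
new_data = [((0.0, 10.0), "drone"), ((1.0, 10.0), "drone")]

# Retrain on all the training data, old and new together.
model = train_centroids(old_data + new_data)
print(predict(model, (0.5, 9.5)))  # nearest centroid is the new "drone" class
```

A system trained only on `old_data` would have been forced to mislabel the new points as “tank” or “plane”; retraining on the combined set is the simplest way to avoid being “bound to the training set.”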
While cutting-edge AI applications are now showing a fast-increasing ability to understand context, complex nuances and even some highly subjective variables, many machines can struggle in some instances to properly integrate data they have not yet added to their databases. Machine learning can recognize anomalies and patterns not part of its historical volumes of collected information, yet there are some kinds of fast-emerging, unexpected developments which can present great difficulty for even state-of-the-art machine-learning systems.
In many instances, should new information be introduced to a machine-learning program, the system will at times “not know how to recognize a new image. If you learn all the time, you will not have as many surprises,” Siegelmann said.
Broadly speaking, AI works by comparing new input against a database of known information to discern margins of difference, make calculations and determine answers to seemingly unsolvable or extremely complex problems. Given advanced processing speeds, combined with an ability to perform real-time analytics, seemingly limitless volumes of data can be mined in an almost immediate fashion - providing answers and organizing information for human decision makers.
Advanced machines, for example, can discern context and recognize the difference between a “ball” that is a dance and a “ball” used in football. This is done by analyzing surrounding words, organizing them and effectively determining context or meaning. For instance, Siegelmann said that advanced pattern recognition enables AI to know the difference between many complicated or tough-to-discern words, images and programs -- such as the difference between planes and tanks, or more closely related images. The L2M program is designed to build upon this, bringing these technical advantages to an entirely new level.
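The surrounding-words approach can be illustrated with a deliberately simple sketch. The sense names and keyword lists below are invented for illustration, not drawn from any real NLP system: the program picks whichever sense of “ball” shares more words with the sentence.

```python
# Minimal sketch of context-based word-sense disambiguation: choose the
# sense whose keyword set overlaps most with the surrounding words.
# The senses and keyword lists are illustrative assumptions only.

SENSES = {
    "dance": {"gown", "waltz", "orchestra", "dancing", "formal"},
    "football": {"kick", "goal", "pitch", "team", "pass"},
}

def disambiguate(sentence):
    """Return the sense of 'ball' whose keywords best match the sentence."""
    words = set(sentence.lower().split())
    return max(SENSES, key=lambda sense: len(SENSES[sense] & words))

print(disambiguate("she wore a gown to the waltz at the ball"))  # dance
print(disambiguate("he passed the ball toward the goal"))        # football
```

Real systems use far richer context models, but the principle is the same: the words around an ambiguous term carry the signal that resolves its meaning.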
As Siegelmann explained, there are certain kinds of never-before-seen nuances or data permutations which represent a departure from what a machine-learning system can typically analyze. There also appear to be some limits to AI, meaning it may not yet have an ability to fully digest and assimilate some very subjective variables such as “feelings”... “instincts”... certain kinds of nuanced decision-making uniquely enabled by human cognition... or anything which is not compatible with computer algorithms, mathematical formulas or purely scientific methods of analysis. Conversely, it can also be said that by drawing upon databases including things like speech patterns, prior behavior and other kinds of catalogued evidence, AI is now on the cutting edge of being able to handle much more subjective phenomena, according to some industry computer scientists.
Interestingly, L2M has some conceptual parallels to human biological phenomena, Siegelmann explained. Advanced synergy between input and output, in real time, is analogous to how a baby apprehends its surroundings, she said.
“When a baby is born, it is learning all the time to adapt. People are afraid of surprises. This is precisely the point; the faster a machine is able to absorb and process new information by instantly adding it and synchronizing with its existing database, the faster it can train to recognize and compute new things,” Siegelmann added.
Exploring biology as it pertains to creating new computer algorithms is by no means unprecedented. Pentagon scientists have long been immersed in something called “biomimetics,” wherein flocking patterns of birds and bees are analyzed as a way to develop new algorithms for drones -- enabling them to coordinate integrated functions, swarm accurately or operate in tandem without colliding.
Guaranteeing AI Robustness Against Deception - GARD
Alongside the ongoing L2M effort, which is progressing quickly, Siegelmann also emphasized a related, yet distinct cybersecurity-oriented exploration geared toward thwarting cyberattacks far more advanced than those that typically take place.
The cybersecurity concept, called Guaranteeing AI Robustness Against Deception, is designed to understand a new kind of more sophisticated cyberattack and, as Siegelmann put it, “make machine learning more sensitive and make AI more robust and resilient.”
The GARD program is designed to address emerging methods of intrusion engineered to “spoof,” “confuse” or re-direct the machine-learning-oriented system under attack.
“This kind of attack could involve a particular algorithm designed to send something to the machine learning system to cause the AI to respond in a way that would not be expected... essentially confuse and trick the machine to force it to make a decision,” Siegelmann said.
Should such an attack be successful, for instance, an attacker could instruct an AI-enabled system to “allow access” to a protected network and “open a door” as Siegelmann put it.
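The kind of trickery described above can be illustrated on a toy model. The linear “access” classifier and its weights below are hypothetical, not a real GARD test case: a small, targeted nudge to each input feature, aimed along the model's own weights, flips its decision from “deny” to “allow” even though the input barely changes.

```python
# Illustrative sketch of an adversarial "spoofing" attack on a toy linear
# classifier (hypothetical example, not a real GARD scenario).

WEIGHTS = [1.0, -2.0, 0.5]   # an assumed, already-trained linear model
BIAS = 0.1

def score(x):
    """Linear decision score: dot(WEIGHTS, x) + BIAS."""
    return sum(w * xi for w, xi in zip(WEIGHTS, x)) + BIAS

def classify(x):
    """Grant access when the score is positive."""
    return "allow" if score(x) > 0 else "deny"

def adversarial(x, eps):
    """Nudge each feature by eps in the direction that raises the score
    (the sign of the corresponding weight) -- an FGSM-style perturbation."""
    return [xi + eps * (1 if w > 0 else -1) for w, xi in zip(WEIGHTS, x)]

benign = [0.2, 0.3, 0.4]           # scores just below zero: "deny"
spoofed = adversarial(benign, 0.1)  # each feature shifted by only 0.1
print(classify(benign), classify(spoofed))  # the tiny nudge opens the door
```

This is the core worry: the perturbation is small enough to look like ordinary input, yet it is precisely aligned with the model's decision boundary -- which is why GARD aims at defenses that generalize beyond specific, pre-defined attacks.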
Siegelmann explained it in terms of a certain simultaneous synergy between input and output. The approach enables cybersecurity to identify, track and thwart a broader range of attacks than is currently possible, a DARPA official said.
“Current defense efforts were designed to protect against specific, pre-defined adversarial attacks and remained vulnerable to attacks outside their design parameters when tested. GARD seeks to approach machine learning defense differently,” the DARPA official explained in a written statement.
While grounded in science, the GARD effort is at a very early stage. Having just sent out a Broad Agency Announcement to industry to solicit input, DARPA plans to formally launch the program by December of this year.
“We will be making AI better to create defenses so existing machine learning will be defendable, by either defending the current one or making new machine learning,” Siegelmann added.
Osborn previously served at the Pentagon as a Highly Qualified Expert with the Office of the Assistant Secretary of the Army - Acquisition, Logistics & Technology. Osborn has also worked as an anchor and on-air military specialist at national TV networks. He also has a master's degree in Comparative Literature from Columbia University.