Katherine Owens, Fellow, Center for Military Modernization
The Paradoxes of Nuclear Weapons and Artificial Intelligence
Over the past year, the latest developments in artificial intelligence (AI) have led to a surge of ominous comparisons between AI and nuclear weapons. However, comparing the rise of AI technology to that of nuclear weapons misses the point. AI’s true impact on military capability and global security will be determined by its convergence with nuclear technology.
To be clear, AI bots are not going anywhere near the nuclear launch buttons. This was articulated in the recommendations of the National Security Commission on AI formed in 2019 and the 2022 Department of Defense Nuclear Posture Review, and reaffirmed in the National Defense Authorization Act for fiscal year 2024. Several senators recently introduced the Block Nuclear Launch by Autonomous Artificial Intelligence Act to ensure this policy is codified in law.
This does not mean that military technology and AI technology are being kept separate. To the contrary, military officials, researchers, and defense industry leaders are planning for AI’s increasing role in the military and in weapons systems, even nuclear ones.
So, what does this role look like now and how will it grow?
AI fits into military systems in two main areas: 1) command, control, and communications (C3) systems, and 2) strategic security and wargame simulations. Both categories have already seen AI enhancements and are the primary focus of further integration.
AI’s Role in NC3 Systems and Strategic Simulations
What is AI?
A single definition of AI technology is difficult to find. One group writing for the International Journal of Scientific Engineering and Research states that AI machines “respond to stimulation consistent with traditional responses from humans, given the human capacity for contemplation, judgment and intention.” Darrell M. West and John R. Allen of the Brookings Institution argue that AI is defined by three characteristics: “intentionality, intelligence, and adaptability.” These definitions, though accurate, tend to conjure images of thinking and feeling computers rising to compete with humans. For understanding the role AI can play in current weapon systems, it is more useful to think of AI as a massive-scale data processor. The sheer amount of data that AI can process gives it a level of nuance, or “intelligence,” that comes closer to mirroring human logic than any previous computing technology.
According to currently available information, there are two levels of AI technology being used in the military context. The first is rules-based computing: the human warfighter creates “if-then” rules for the AI program so that it can generate responses upon receiving certain inputs. The second involves AI programs that use machine learning to process millions of data points to identify and then “learn” patterns. The more data the program has access to, the better it can compare and evaluate scenarios. At its core, machine learning by AI is probability-based: it can assess the likelihood that an already-identified pattern or event is repeating, and then “store” certain patterns when required. This is how facial and voice recognition software works, for example.
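The two levels can be sketched in a few lines of code. This is a minimal, purely illustrative contrast, not any real system's logic: the function names, inputs, and thresholds are all invented for the example.

```python
from collections import Counter

# Level 1: rules-based computing. A human author writes explicit "if-then"
# rules that map inputs to responses. (Inputs and threshold are invented.)
def rules_based_alert(radar_contacts: int, speed_mach: float) -> str:
    if radar_contacts > 0 and speed_mach > 5.0:
        return "flag-for-review"
    return "no-action"

# Level 2: a probability-based learner. It counts how often labeled patterns
# occurred in past data and reports the likelihood that a new observation
# repeats an already-identified pattern.
class PatternLearner:
    def __init__(self):
        self.counts = Counter()
        self.total = 0

    def observe(self, pattern: str) -> None:
        self.counts[pattern] += 1
        self.total += 1

    def likelihood(self, pattern: str) -> float:
        if self.total == 0:
            return 0.0
        return self.counts[pattern] / self.total

learner = PatternLearner()
for p in ["routine-flight", "routine-flight", "routine-flight", "test-launch"]:
    learner.observe(p)

print(rules_based_alert(radar_contacts=2, speed_mach=6.1))  # flag-for-review
print(learner.likelihood("routine-flight"))                 # 0.75
```

The key difference is where the knowledge lives: in the first sketch a human wrote the rule; in the second, the program's answer improves automatically as more observations accumulate.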
How Does AI Apply to Nuclear Weapons?
AI plays an enhancing and supporting role in the command, control, and communications for nuclear weapons systems (NC3). It does this primarily through contributing to the situation assessment, response development and evaluation, and force direction steps of the command–control infrastructure. Most nuclear-capable weapons systems rely on sensory data to gauge conditions, map terrain, assist with guidance and navigational awareness, and identify targets, among other functions. AI systems improve the accuracy and amount of this information. The AI technology can receive what the Massachusetts Institute of Technology calls “structured data” from satellites, reconnaissance UAVs, buoys, and other sources, as well as “unstructured data” from internet sources and perform “data conditioning.” Data conditioning readies the information for the machine learning process, which can involve supervised, unsupervised, or reinforcement learning.
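A hedged sketch of what a “data conditioning” step might look like: normalizing heterogeneous inputs (structured sensor feeds, unstructured text) into one uniform record shape before they reach a learning algorithm. The field names and sources here are invented for illustration.

```python
# Illustrative data-conditioning step: map raw inputs from different kinds
# of sources into a single uniform record layout. (All fields are invented.)
def condition(record: dict) -> dict:
    if "sensor_id" in record:  # structured: e.g. a satellite, UAV, or buoy feed
        return {
            "source": record["sensor_id"],
            "value": float(record["reading"]),  # normalize to a number
            "kind": "structured",
        }
    # unstructured: e.g. text scraped from an internet source
    text = record.get("text", "")
    return {
        "source": record.get("url", "unknown"),
        "value": len(text.split()),  # a crude stand-in for a text feature
        "kind": "unstructured",
    }

raw = [
    {"sensor_id": "buoy-17", "reading": "3.4"},
    {"url": "example.org/report", "text": "unusual convoy activity observed"},
]
conditioned = [condition(r) for r in raw]
print([c["kind"] for c in conditioned])  # ['structured', 'unstructured']
```

Whatever the real pipeline looks like, the point is the same: the learning stage downstream only ever sees records of one consistent shape.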
As previously mentioned, machine learning is probability-based, and its power lies in its ability to use vast amounts of data to identify patterns and causal connections that human logic cannot reach with our limited informational capacity. These patterns and probability assessments are then reviewed by human personnel. This highlights one fundamental advantage of AI in the NC3 space: it increases the speed and accuracy of data collection and processing so that the human warfighters have more time and resources with which to make their decisions.
“We can take large pieces of terrain and rapidly identify hundreds of targets, prioritize them based on a high priority target list that determines which ones we should strike with the resources that we have,” said Lt. Gen. Michael E. Kurilla, commander of the XVIII Airborne Corps, testifying before the Senate Armed Services Committee in February 2022. “That happens in seconds versus what would take hours normally, or sometimes even days to be able to develop these targets.”
A specific example of this type of AI integration is the Air Force’s PreVAIL prediction and detection system. The system uses existing data on traffic patterns, road maps, and driving speeds to track vehicles by predicting their locations. In other words, once a vehicle is identified, the AI system uses “learned” information to predict all possible locations the vehicle could be and directs sensory resources there. According to the Air Force Research Laboratory, one benefit of this system is that it is “sensor agnostic” because it relies on existing data rather than real-time input from sensors that may be sparse in many areas.
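The prediction step described above can be sketched as a search over a road network: given a vehicle's last known position and the time elapsed since contact, enumerate every location it could have reached, so sensors can be cued there. This is an assumed, simplified model — the graph, travel times, and time budget below are invented, and PreVAIL's actual method is not public in this detail.

```python
from collections import deque

def reachable_locations(road_graph: dict, start: str, time_budget: float) -> set:
    """Breadth-first search over road segments, pruning any path whose
    cumulative travel time exceeds the time since last contact."""
    reachable = {start}
    best = {start: 0.0}            # fastest known arrival time per node
    queue = deque([(start, 0.0)])
    while queue:
        node, elapsed = queue.popleft()
        for neighbor, travel_time in road_graph.get(node, []):
            t = elapsed + travel_time
            if t <= time_budget and t < best.get(neighbor, float("inf")):
                best[neighbor] = t
                reachable.add(neighbor)
                queue.append((neighbor, t))
    return reachable

# Toy road network: intersection -> [(neighbor, minutes of travel)]
roads = {
    "A": [("B", 2.0), ("C", 5.0)],
    "B": [("D", 2.0)],
    "C": [("D", 1.0)],
    "D": [],
}
print(sorted(reachable_locations(roads, "A", time_budget=4.0)))  # ['A', 'B', 'D']
```

With a 4-minute budget the vehicle cannot have reached C directly (5 minutes away), but could have reached D via B — exactly the kind of pruning that lets sparse sensors be pointed only where the target could plausibly be.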
In terms of applicability to nuclear weapons, it is possible that a similar technology could be integrated into nuclear payload-compatible bombers and use flight pattern and weather condition data for its input, for example.
Another example of AI integration comes from C3 AI, a civilian-focused technology firm. C3 AI developed AI technology that uses data from thousands of inventory checks, flight histories, and sensors for an Air Force bomber aircraft, potentially including those equipped for nuclear payloads, to predict where and when maintenance is needed.
“We can look at those data and we can identify device failure before it happens, fix it before it fails and avoid unscheduled downtime,” C3 AI CEO Tom Siebel told the BBC last year.
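The predictive-maintenance idea can be illustrated with a minimal sketch: learn what “normal” sensor readings look like from historical data, then flag readings that deviate enough to suggest impending failure. This is an assumption-laden toy model, not C3 AI's actual method — the sensor values, the z-score test, and the threshold are all invented for the example.

```python
import statistics

def fit_baseline(history: list) -> tuple:
    """Learn mean and standard deviation from past healthy sensor readings."""
    return statistics.mean(history), statistics.stdev(history)

def failure_warning(reading: float, mean: float, stdev: float,
                    z_threshold: float = 3.0) -> bool:
    """Flag a reading whose deviation from the learned baseline exceeds
    z_threshold standard deviations."""
    return abs(reading - mean) / stdev > z_threshold

# Invented vibration readings from a healthy component
vibration_history = [1.0, 1.1, 0.9, 1.05, 0.95, 1.0, 1.1, 0.9]
mean, stdev = fit_baseline(vibration_history)

print(failure_warning(1.02, mean, stdev))  # False: within the normal band
print(failure_warning(2.5, mean, stdev))   # True: anomalous, schedule maintenance
```

Real systems ingest thousands of inventory checks, flight histories, and sensor streams rather than one channel, but the underlying move is the same: act on the deviation before the part actually fails.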
AI’s ability to identify logic patterns that are obscure to the human mind is also being applied to strategic simulations and war games. AI technology can draw on “learned” patterns to generate new scenarios within parameters set by the orchestrators. This can inform large-scale strategic exercises as well as allow operators to rapidly simulate and evaluate the effects of an imminent missile launch. Both the ability to simulate real-time effects and the ability to engage with a virtually unlimited number of practice scenarios could have transformative effects on United States military capability.
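Scenario generation “within parameters set by the orchestrators” can be sketched as parameterized sampling: orchestrators define the ranges, and the generator produces as many distinct scenarios as desired inside them. The parameter names and ranges below are invented for illustration.

```python
import random

def generate_scenarios(parameter_ranges: dict, n: int, seed: int = 42) -> list:
    """Sample n scenarios, each a dict of values drawn uniformly from the
    orchestrator-specified (low, high) ranges."""
    rng = random.Random(seed)  # seeded so exercises are reproducible
    return [
        {name: rng.uniform(lo, hi) for name, (lo, hi) in parameter_ranges.items()}
        for _ in range(n)
    ]

# Invented exercise parameters and bounds
ranges = {
    "warning_time_min": (5.0, 30.0),
    "sensor_reliability": (0.7, 1.0),
    "adversary_readiness": (0.0, 1.0),
}
scenarios = generate_scenarios(ranges, n=1000)
print(len(scenarios))  # 1000
```

A learned model would bias this sampling toward plausible combinations rather than drawing uniformly, but the contract is the same: the orchestrators set the bounds, and the machine fills the space between them with practice cases.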
Obstacles to Nuclear–AI Integration
The continued integration of AI technology and nuclear weapons technology faces three main obstacles. The first two derive from the nature of AI itself and the final obstacle centers on global security.
The Human Involvement Paradox
AI technology still has several deficiencies when it comes to use in defense systems, particularly nuclear ones. First, there is the problem of bias. In their paper for the Nuclear Threat Initiative, Jill Hruby and M. Nina Miller explain that, in the AI context, biases tend to emerge from either over-trust or under-trust of AI-derived results. They refer to the former as “automation bias” and the latter as “trust gap” bias and note that those working with the AI technology may unconsciously allow these biases to impact their instructional or rule-making inputs into the AI algorithms. Herein lies the paradox: human involvement is itself a source of bias in AI systems, yet politicians and military leaders remain firmly committed to keeping humans in the loop.
The AI Data Paradox
Another problem, one more unique to nuclear–AI integration, is the scarcity of data. As described previously, AI’s accuracy and extra-analytical pattern recognition abilities are possible largely because of the amount of data that AI technology can process. That accuracy and predictive capability decrease as the available data decreases. It is fortunate that nuclear launches are relatively rare and that no nuclear attack has occurred since 1945, but this means that data for the AI algorithms to work with is extremely limited. When there is no data for AI systems to draw on regarding nuclear strikes, it is difficult for the technology to “learn” all the various forms and patterns to predict or identify an impending nuclear strike. In other words, the lack of past nuclear strikes inhibits the military’s use of AI to protect against future nuclear strikes.
The Air Force has begun to address this problem by developing capabilities such as Performance Estimation for Multi-Sensor ATR (PEMS) and Multi-Aperture Reduced-scale Verification and Evaluation Lab (MARVEL). Simply put, PEMS helps generate synthetic data and MARVEL uses sensors and small-scale models to generate scalable data. These manufactured or simulated data points are then used by systems such as Multi-INT ATR for Geospatial Intelligence Capabilities (MAGIC), which is capable of synthesizing data across multiple modalities to generate effective target recognition capability.
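The core move behind synthetic-data tools can be illustrated simply: when real examples of an event are too rare to train on, generate plausible variants of a template signature by adding controlled noise. This is a loose sketch of the general technique, not of how PEMS or MARVEL actually work — the “signature” values and noise model below are purely illustrative.

```python
import random

def synthesize(template: list, n_samples: int,
               noise: float = 0.05, seed: int = 0) -> list:
    """Produce n_samples noisy copies of a template sensor signature,
    each value perturbed with Gaussian noise."""
    rng = random.Random(seed)  # seeded for reproducibility
    return [
        [value + rng.gauss(0.0, noise) for value in template]
        for _ in range(n_samples)
    ]

# Invented 4-channel sensor signature of a rare event
event_signature = [0.2, 0.9, 0.7, 0.3]
training_set = synthesize(event_signature, n_samples=100)
print(len(training_set), len(training_set[0]))  # 100 4
```

One real observation becomes a hundred training examples — at the cost, as the next paragraph notes, of not yet knowing how a model trained on manufactured variation behaves when the real event finally differs from the template.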
Using synthetic data and data from strategic simulations will have to suffice for AI technology in the nuclear weapons system arena. However, it is not yet fully understood what effect reliance on simulated or synthetic data will ultimately have on AI’s machine learning capabilities.
The AI–Nuclear Global Security Paradox
The geopolitical obstacles that AI–nuclear weapons integration faces are born from the same qualities that make AI a valuable defense tool. AI and its machine learning capabilities improve accuracy and speed, giving operators more time to make informed decisions. Improved human risk assessment and decision-making capabilities should stabilize dynamics between the nuclear powers. However, experts warn it may have the opposite effect.
For example, AI-supported detection is thought to be a particularly effective tool against hypersonic missiles. As a result, there is concern that Russia, which has touted its hypersonic weapons development in recent years, will see the AI advantage in this area as a direct threat to its defensive capabilities.
AI may also have a de-stabilizing effect on the smaller nuclear powers because they fear AI-enhanced nuclear readiness and responsiveness will nullify their ability to use second-strike capability as deterrence. Some experts are concerned that when these nuclear powers feel they cannot rely on second-strike deterrence, they will adopt first-strike postures in an effort to counteract the AI advantage. The more nuclear powers with a strategic directive to strike preemptively, the higher the likelihood of nuclear war.
What Does the Future Hold for Nuclear Weapons Systems and AI?
Ultimately, the relationship between AI technology and nuclear weapons is defined by paradoxes. AI systems are most effective when they are freed of human biases, but the need to keep human decision-making in nuclear weapons operations is firmly entrenched. The more data that AI systems can draw on, the more accurate their warnings and predictions, but there is almost no data on nuclear attacks and the goal is to keep it that way. Finally, by providing military personnel and state officials with more information, faster, AI should reduce the risk of mistakes and increase nuclear power stability. Instead, it is often seen as a force-enhancing threat, leading other nuclear powers to adopt more volatile nuclear postures.
Despite these contradictory themes, AI in military technology is here to stay. China, Russia, and the United States are all forging ahead on AI development and integration. In the United States, the FY24 NDAA includes a call for the Department of Defense to update its “plans and strategies for artificial intelligence” and names: “(A) Automation, (B) Machine learning, (C) Autonomy, (D) Robotics, (E) Deep learning and neural network, and (F) Natural language processing” as priority areas.
In his testimony before the Senate Armed Services Committee Subcommittee on Cybersecurity in April of this year, Dr. Josh Lospinoso said, “We must act now to prepare our major weapon systems for the era of AI.” We cannot fully know what the “era of AI” will hold for the defense and nuclear community, but there is no doubt that it will impact every nuclear warhead and every warfighter.
Katherine Owens is a Resident Fellow at the Center for Military Modernization. She holds a BA from George Washington University and a master’s degree in international affairs from Columbia University. An expert on nuclear weapons, she is currently pursuing a law degree at Penn State.