By Johnny Franks, Warrior Editorial Fellow
The U.S. military’s integration of AI and autonomous weapons systems is a fast-moving and complex process, marked by significant technological advances and shifting strategic priorities. That progress demands careful attention to evolving policy, ethical implications, and strategic objectives, and it underscores the balance the Pentagon must strike between embracing cutting-edge technology and confronting the profound ethical and policy challenges that technology presents.
Two areas stand out among the specific applications of these technologies. The first is networked warfare, where the Joint All-Domain Command and Control (JADC2) effort is the major focus. JADC2 aims to link every military domain (land, air, sea, space, and cyber) by connecting sensors to shooters. The system collects huge volumes of data from sources such as satellites, drones, ground sensors, and ships, then uses AI to analyze that data in real time and distribute it across the force, improving situational awareness and decision-making speed. The result is a network that can respond to threats faster and more effectively, with AI handling the rapid analysis and routing of critical information (a simplified sketch of this sensor-to-shooter pattern appears below).

The second is autonomous weaponry, a crucial element of the U.S. military’s shift toward AI. These systems operate with varying levels of human control. Some, such as surveillance drones and unmanned vehicles, conduct reconnaissance and intelligence gathering on their own. Others pair with manned systems: under the “loyal wingman” concept, autonomous drones team with manned aircraft to scout ahead or engage targets under specified conditions. Such systems are programmed for specific missions and can make decisions such as navigation and target identification independently.
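To make the JADC2 data flow concrete, the following is a minimal, purely illustrative sketch in Python of the publish-subscribe pattern described above: many sensors feed reports into a fusion node, which prioritizes them and pushes the most urgent tracks out to subscribed decision-makers or shooters. Every name here (SensorReport, FusionNode, and so on) is hypothetical and does not correspond to any actual JADC2 software.

```python
# Illustrative sketch only: a toy sensor-to-shooter fusion loop in the
# spirit of JADC2-style networking. All class and field names are
# hypothetical, not real DoD software interfaces.
from dataclasses import dataclass, field
import heapq
import time

@dataclass(order=True)
class SensorReport:
    priority: int                        # lower value = more urgent
    timestamp: float = field(compare=False)
    domain: str = field(compare=False)   # "air", "sea", "land", "space", "cyber"
    track: str = field(compare=False)    # ID of a detected object

class FusionNode:
    """Collects reports from many sensors, orders them by urgency,
    and fans the resulting tracks out to subscribed consumers."""

    def __init__(self):
        self.queue: list[SensorReport] = []
        self.subscribers = []            # callables that receive fused tracks

    def publish(self, report: SensorReport) -> None:
        heapq.heappush(self.queue, report)

    def subscribe(self, callback) -> None:
        self.subscribers.append(callback)

    def step(self) -> None:
        # Drain the queue in priority order and notify every subscriber.
        while self.queue:
            report = heapq.heappop(self.queue)
            for notify in self.subscribers:
                notify(report)

if __name__ == "__main__":
    node = FusionNode()
    node.subscribe(lambda r: print(f"[{r.domain}] track {r.track} -> shooter alerted"))
    node.publish(SensorReport(priority=2, timestamp=time.time(), domain="sea", track="S-104"))
    node.publish(SensorReport(priority=1, timestamp=time.time(), domain="air", track="A-017"))
    node.step()  # the more urgent air track is delivered first
```

The design choice worth noticing is the priority queue between sensing and shooting: the network does not simply relay raw feeds, it triages them, which is where AI-driven analysis would sit in a real system.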
In terms of tangible policy addressing future developments, an evolving framework of guidelines and principles already exists, reflecting a commitment to responsible, ethically grounded use of AI. The DoD’s AI Adoption Strategy, released in November 2023, provides a roadmap for integrating AI to maintain strategic and tactical superiority. The document is a detailed blueprint for AI deployment, emphasizing agility, rapid adoption, and attention to operational dimensions ranging from battlespace awareness to enterprise operations. Its objectives and principles are underpinned by a commitment to responsible AI development and safety protocols. The DoD also recently updated Directive 3000.09, which governs the development and use of autonomous and semi-autonomous weapons systems, though one official “stressed that the update to the directive is ‘a clarification, not a major change.’”
The Biden administration has advocated a nuanced approach that balances technological advancement against humanitarian and ethical imperatives. That duality is mirrored in the international arena, where the U.S. stance has drawn a range of reactions, particularly in UN discussions on autonomous weapons and the concept of “meaningful human control.” The debate is further complicated by an evolving military landscape, as the use of AI-enabled autonomous systems in conflicts such as Ukraine demonstrates. This international dialogue is not only about technological advancement but also about the ethical, legal, and strategic implications of AI in warfare.
Johnny Franks holds an MA in U.S. Foreign Policy & National Security from American University and a BA in Diplomacy & World Affairs from Occidental College. With a specific interest in geopolitical security and military technology, Johnny has primarily focused his research and analysis on the Russia-Ukraine conflict from 2014 onwards. As part of his MA coursework, Johnny contributed to developing an Arctic defense strategy in partnership with the U.S. Department of Defense.