By Olawale Abaire, Warrior Editorial Fellow
The Defense Advanced Research Projects Agency (DARPA) is working to refine and advance artificial intelligence (AI) applications that the Defense Department can increasingly rely upon, in part by flying AI-enabled F-16s. The effort aligns with a broader push to increase the reliability of AI so that it performs as needed across a wide range of contingencies. A key part of DARPA’s focus on trustworthiness is creating systems that not only perform well but can also be relied upon to make decisions that align with human intent and ethical considerations. This is particularly crucial when AI systems are tasked with making recommendations that could lead to life-or-death outcomes.
One reason AI development is such a priority is to prevent an unexpected technological breakthrough, or “strategic surprise,” by adversaries who may also be developing advanced capabilities. DARPA aims not only to prevent such surprises but also to create its own.
DARPA has long been at the forefront of technological innovation, particularly in the defense sector. One of its most critical objectives today is developing AI and autonomous systems that warfighters can trust implicitly, especially when lives are at stake.
Matt Turek, the deputy director of DARPA’s Information Innovation Office, emphasized the importance of this goal during a recent event at the Center for Strategic and International Studies.
To accomplish these goals, DARPA is actively seeking transformative capabilities and ideas from industry and academia. One of the ways the agency acquires these capabilities and ideas is through various types of challenges where teams from the private sector can win prizes worth millions of dollars.
An example of this is DARPA’s Artificial Intelligence Cyber Challenge, which uses generative AI technologies, like large language models, to automatically find and fix vulnerabilities in open-source software, particularly software that underlies critical infrastructure.
Large language models process and generate human language to perform tasks such as secure computer coding, decision-making, speech recognition, and prediction. A unique feature of this challenge is DARPA’s partnership with state-of-the-art large language model providers participating in the competition, including Google, Microsoft, OpenAI, and Anthropic.
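To make the challenge’s core mechanic more concrete, the sketch below shows one way a large language model can be prompted to flag and repair a flaw in a piece of source code. It is a minimal illustration, not the actual AIxCC tooling; the model name, the prompt wording, and the use of the openai Python package (which requires an API key) are all assumptions made for the example.

```python
# Minimal sketch: asking an LLM to find and fix a flaw in a code snippet.
# Hypothetical example only -- not DARPA's AIxCC tooling. Assumes the
# "openai" Python package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# A deliberately vulnerable C snippet: strcpy() into a fixed-size buffer
# invites a classic stack buffer overflow.
vulnerable_code = """
#include <string.h>
void greet(const char *name) {
    char buf[16];
    strcpy(buf, name);   /* no bounds check */
}
"""

prompt = (
    "Identify any security vulnerability in the following C code, "
    "name the weakness (e.g., a CWE class), and return a corrected "
    "version of the function:\n" + vulnerable_code
)

response = client.chat.completions.create(
    model="gpt-4o",  # the model choice is an assumption for this sketch
    messages=[{"role": "user", "content": prompt}],
)

# In a real competition pipeline, a proposed patch would be rebuilt and
# re-tested automatically before being accepted, not trusted on faith.
print(response.choices[0].message.content)
```

The point of the sketch is the workflow of find, explain, and patch, rather than any particular model or prompt.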
Improvements in large language models are likely to benefit the commercial sector as well as the Department of Defense (DOD). One example of DARPA’s work on autonomy and AI with the Air Force involves its F-16 fighter jets: the agency has been exploring the use of AI to control these aircraft, pushing the boundaries of what autonomous systems can achieve in complex, dynamic environments.
DARPA’s focus on developing trustworthy AI and autonomy applications for warfighters is a critical endeavor with significant implications for both the defense and commercial sectors. Through strategic partnerships and innovative challenges, the agency is advancing the state of AI and machine learning, paving the way for a future in which AI can be trusted with life-or-death decisions.
DARPA’s approach to developing trustworthy AI is both necessary and commendable. The emphasis on collaboration and open challenges is a strategic move that accelerates innovation and brings diverse perspectives to the table. However, the path to creating AI systems that can be trusted in life-or-death scenarios is fraught with challenges.
One of the most significant hurdles is the “black box” nature of AI, where the reasoning behind decisions made by machine learning models is often opaque. This lack of transparency can undermine trust, making it difficult for operators to rely on AI recommendations without reservation. DARPA’s efforts to create more interpretable and explainable AI models are crucial in this regard. Moreover, the potential for strategic surprise underscores the need for continuous vigilance and investment in AI research and development.
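As a small illustration of what “interpretable” can mean in practice, the sketch below applies permutation importance, one widely used model-inspection technique, to a toy classifier. It is a generic scikit-learn example, not a method drawn from DARPA’s programs: shuffling one input feature at a time and measuring how far accuracy falls reveals which signals a model actually depends on.

```python
# Generic illustration of one explainability technique (permutation
# importance), not DARPA's approach. Assumes scikit-learn is installed.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Toy data: 6 features, only 3 of which actually carry signal.
X, y = make_classification(
    n_samples=1000, n_features=6, n_informative=3, random_state=0
)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# large drops indicate features the model genuinely relies on.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```

Checks like this do not fully open the black box, but they give an operator a way to verify that a model is leaning on sensible signals rather than artifacts of its training data.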
OLAWALE ABAIRE is a researcher, writer, and analyst who has written numerous nonfiction books. He holds a master’s degree from Adekunle Ajasin University, Nigeria, and works as a web content writer with the International Lean Six Sigma Institute, UK.