By Johnny Franks, Warrior Editorial Fellow
Are we ready for a future where machines decide who lives and who dies? Lethal Autonomous Weapon Systems (LAWS) have moved to the forefront of United Nations (UN) discussions and initiatives as concern over them grows. These systems, characterized by autonomous operation under varying levels of human oversight, are becoming more sophisticated as advances in artificial intelligence (AI) enable more capable and efficient operation. UN Secretary-General António Guterres has called LAWS “politically unacceptable and morally repugnant,” advocating their prohibition under international law and recommending that states conclude a legally binding instrument by 2026 to prohibit LAWS that operate without human control or oversight and to regulate all other types of autonomous weapons systems.
The US military is at the forefront of integrating autonomous technologies into its operational capabilities, part of a broader trend among global powers to field such systems. DoD policy directs that autonomous and semi-autonomous weapon systems allow for appropriate levels of human judgment over the use of force. Even so, there is an evident push toward systems with ever-diminishing degrees of human control. The U.S. Army’s long-term strategy, for example, calls for the phased integration of robotic and autonomous systems (RAS) into the combat force, beginning with unarmed, unmanned utility vehicles and trucks and moving toward armored robotic vehicles with increasing autonomy. The strategy seeks to enhance operational effectiveness while reducing risk to human soldiers.
The move toward autonomy in weapon systems spans all the service branches. The Navy is developing and testing prototypes such as the Sea Hunter unmanned surface vessel and unmanned underwater vehicles (UUVs) capable of extended autonomous operations. The incentives include cost-effectiveness, reduced risk to sailors, and presenting a harder target to an adversary. The Air Force is likewise developing unmanned combat drones that can operate autonomously, particularly in situations where communication with human operators may be impossible.
This transition toward greater autonomy both fuels and complicates the debate. The global race for autonomy in military technologies, driven by strategic competition among major powers, raises serious ethical, legal, and security questions. Among the most pressing are the risks of accidental engagement and escalation, and the moral ramifications of progressively removing humans from life-and-death decisions. Through international forums such as the UN, the world is coming to grips with the challenge of establishing norms and rules that balance technological advancement for defense against humanitarian principles and compliance with international law.
Johnny Franks holds an MA in U.S. Foreign Policy & National Security from American University and a BA in Diplomacy & World Affairs from Occidental College. With a specific interest in geopolitical security and military technology, Johnny has primarily focused his research and analysis on the Russia-Ukraine conflict from 2014 onwards. As part of his MA coursework, Johnny contributed to developing an Arctic defense strategy in partnership with the U.S. Department of Defense.