By Volreka Senatus, Warrior CyberWar Editorial Fellow
Will Russia deploy Terminator-type robots that kill autonomously?
A recent report from Russia's state-run TASS news agency sheds light on Russian President Vladimir Putin's perspective, or lack thereof, on AI and weapons. According to the report, Putin said that AI leaders are hesitant to impose restrictions until concrete threats materialize. This stance prompts crucial questions about the trajectory of AI autonomy and robotics on the global stage, especially in military applications as developed and applied by US rivals.
The autonomy of AI systems ranges from simple automation to sophisticated self-learning mechanisms. While the Pentagon has in recent years worked intensely on the ethics, doctrine, and combat applications of AI, there is widespread concern about China's and Russia's reluctance to adopt stringent ethical regulations for the technology. For years, the Pentagon has maintained its "man-in-the-loop" doctrinal requirement for autonomous weapons, meaning a human being must make any decision about the use of lethal force. With the rapid progress of AI, however, what about defensive, "non-lethal" force? Should that be AI-enabled? The Pentagon is surely exploring these questions, and it has published ethical standards regarding the use of AI. But what about Russia and China? Is the US military preparing to fight armies of AI-enabled armed robots able to attack and fire weapons without human intervention?
A comparison of perspectives among major AI leaders underscores the lack of a unified stance on restrictions. Understanding Putin's viewpoint thus becomes essential to comprehending Russia's role in the evolving global AI landscape.
Warrior talks with former DoD AI & CyberWar Expert
As AI matures, a critical question arises: to what extent will Russia integrate it into weaponry? Initiatives in mechanical engineering, such as increased production of tanks, robots, lasers, and armored vehicles, reflect Russia's ambitious plans in military technology. Yet while Russia still relies heavily on the physical manpower of soldiers along the front lines, the role of AI in its forces remains loosely defined. During an expanded Defense Ministry board meeting, Putin acknowledged the growing interest in AI development and application but offered no consolidated view on how such systems should be ethically evaluated.
Evaluating potential risks and addressing issues like indiscriminate targeting or unforeseen escalation is crucial for responsible deployment, particularly for autonomous warfare machines. In the U.S. military, AI is already sophisticated enough to generate weapon recommendations during an attack. U.S. forces employ human-in-the-loop restrictions, clearing AI-recommended lethal force only with explicit human approval, as sketched below. This approach contrasts with Russian military discussions, where ethical considerations for AI applications remain vague.
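To make the doctrinal distinction concrete, here is a minimal, purely illustrative sketch in Python of a "human-in-the-loop" control gate: the AI may nominate targets, but no lethal action is authorized without explicit human approval. Every function, field, and threshold here is hypothetical and not drawn from any actual DoD system.

```python
# Illustrative sketch only: a human-in-the-loop gate for AI targeting.
# All names and values are hypothetical, not from any real system.

from dataclasses import dataclass
from enum import Enum, auto


class Decision(Enum):
    APPROVED = auto()
    DENIED = auto()


@dataclass
class TargetRecommendation:
    target_id: str
    confidence: float  # model confidence in the target identification
    rationale: str     # human-readable explanation for the operator


def ai_recommend_targets(sensor_tracks: list[dict]) -> list[TargetRecommendation]:
    """Stand-in for an AI targeting model: filters sensor tracks by a
    confidence threshold and returns recommendations only, never actions."""
    return [
        TargetRecommendation(t["id"], t["score"], t["summary"])
        for t in sensor_tracks
        if t["score"] >= 0.9  # hypothetical confidence threshold
    ]


def human_review(rec: TargetRecommendation) -> Decision:
    """The mandatory human gate: a qualified operator reviews each
    recommendation. Stubbed here as console input for illustration."""
    answer = input(f"Approve engagement of {rec.target_id} "
                   f"(confidence {rec.confidence:.0%})? [y/N] ")
    return Decision.APPROVED if answer.strip().lower() == "y" else Decision.DENIED


def engagement_loop(sensor_tracks: list[dict]) -> None:
    for rec in ai_recommend_targets(sensor_tracks):
        # The AI never fires on its own: the only path to engagement
        # runs through an explicit human APPROVED decision.
        if human_review(rec) is Decision.APPROVED:
            print(f"Engagement of {rec.target_id} authorized by operator.")
        else:
            print(f"Engagement of {rec.target_id} withheld.")
```

The design point is simply that the model's output is a recommendation object, not an action; removing the human gate, as a fully autonomous system would, is a one-line change, which is why doctrine rather than technology is what keeps the human in the loop.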
Much as in other engineering domains, modern AI integrations can recommend the best-suited weapons for completing a given task or operation. The risks of delaying regulatory measures until tangible threats emerge highlight the need for foresight amid rapid AI advancements.
Despite claims of "weaponry superiority," both the Russian and U.S. militaries possess the capability to simulate war. Industrial pace, however, remains a pertinent question. It could hamper future Russian advancements by forcing immature next-generation technology into combat, while U.S. military forces remain backed by the industrial capacity to rebuild armored forces throughout prolonged conflicts.
The key point remains that ethical evaluations are crucial to mitigating unintended consequences of autonomous warfare. These evaluations contribute to transparency in the development and deployment of autonomous weapons, fostering public trust. Open and ethical practices enhance the legitimacy of autonomous warfare technologies.
Moreover, ethical evaluations help ensure compliance with international humanitarian law, including the Geneva Conventions, which outline the rights and protections of civilians and combatants during armed conflict. This commitment to ethical principles contributes to long-term strategic stability, helping prevent destabilizing arms races in the evolving landscape of autonomous weaponry.
Volreka Senatus, Warrior CyberWar Editorial Fellow, is a cyber assistant educator at Florida International University.
Warrior Maven President Kris Osborn contributed to this story.