

Dario Amodei, Chief Executive Officer
Anthropic PBC
San Francisco, CA
From: Lt. Col. Scott Rutter, U.S. Army (Ret.)
Dear Mr. Amodei,
I write to you as a former combat officer who commanded a battalion of over 900 soldiers with 2-7 Infantry during Operation Iraqi Freedom I. My experience in war has given me perspective on both the immense weight of decisions involving lethal force and the critical importance of ensuring that advanced capabilities remain in the hands of accountable, democratic institutions rather than those who would use them without restraint or moral framework.
Before I address my concerns, I want to acknowledge what you and your team have accomplished. Anthropic has created truly remarkable technology that stands among the most significant achievements of our time. Your work has the potential to advance medicine, education, scientific research, and countless other fields that will improve human lives. The systems you have built represent not just technical excellence, but a thoughtful approach to AI development that prioritizes safety and beneficial outcomes. This is exactly the kind of innovation that makes our nation stronger and our world better.
I recognize that Anthropic is a corporation with the legal right to choose its customers and partners. Unlike defense contractors bound by existing agreements, you have no obligation to work with the Pentagon. This freedom of choice, however, carries a corresponding weight of responsibility. When a company develops technology of such significance that its presence or absence in our national defense infrastructure could alter the balance of power between democracies and authoritarian regimes, the decision of whether to provide it becomes more than a business choice; it becomes a decision with profound implications for national security. You have every right to refuse, but exercising that right, while legally permissible and perhaps personally comforting, would effectively make a unilateral determination about American military capabilities that should rest with elected officials and military leaders, not with a private company. By withholding your superior technology, you would not prevent the military from using AI; you would simply ensure that it uses inferior versions while our adversaries suffer no such disadvantage.
It is precisely because your work is so vital and your technology so advanced that I believe it must be available to protect the United States and preserve the free world that allows such innovation to flourish. I understand Anthropic has concerns about Department of War applications of your language models, particularly regarding surveillance and autonomous weapons systems. While I deeply respect the ethical considerations driving these concerns, I believe your company faces a choice similar to one confronted by Albert Einstein in 1939, and that the right path, though difficult, is to engage rather than withdraw.
Einstein, a lifelong pacifist, agonized over signing the letter to President Roosevelt warning that Nazi Germany might develop atomic weapons. He knew his involvement could lead to unimaginable destruction. Yet he also understood that refusing to act would not prevent the technology's development; it would only ensure that those without moral constraints developed it first. Einstein later called the letter "the one great mistake of my life," but history suggests otherwise. The alternative, a nuclear-armed Nazi regime facing no deterrent, is almost unthinkable. His choice was not between development and no development, but between development by a democracy accountable to its people and development by a totalitarian regime.
You face a parallel decision. The technologies you have developed will be used for military and intelligence purposes; if not your models, then those of your competitors, potentially with fewer safeguards and less sophisticated capabilities for responsible deployment. The question is not whether AI will be integrated into national security operations, but whether that integration will be guided by organizations committed to safety, or whether you will force the Pentagon to rely on less capable alternatives while our adversaries forge ahead.
Having seen combat, I understand the gravity of autonomous systems and enhanced surveillance capabilities. I have given orders that put lives at risk. I have seen both the necessity of decisive action and the catastrophic consequences of poor intelligence or precipitous decisions. It is precisely this experience that leads me to believe the US military, with all its imperfections, should have access to the most advanced tools available, and that means your technology, not inferior substitutes.
The US military operates under civilian control, congressional oversight, established rules of engagement, the Uniform Code of Military Justice, and public accountability. Our service members are trained in the Laws of Armed Conflict. We have robust institutional frameworks for making life-and-death decisions that have been refined over centuries. We are not perfect, no human institution is, but we have structures for restraint and accountability that adversarial states demonstrably lack. Would you prefer that China or Russia achieve AI superiority in military applications while the US falls behind because its most capable AI companies refused to provide their technology?
If it provides comfort, you can seek written assurances from the Department of War that your technology will be deployed within their existing ethical frameworks and oversight mechanisms. The Pentagon already operates under extensive legal and ethical constraints, from the Laws of Armed Conflict to DoD directives on responsible AI use. These frameworks have successfully governed the introduction of every advanced military technology for decades.
It is not your responsibility, nor should it be, to oversee implementation or second-guess operational decisions. The military has professional standards, legal frameworks, and accountability systems that have guided the use of every advanced technology from radar to precision-guided munitions. Your technology should be no different. Trust the institutions that have defended this nation to continue operating within the bounds they have established and refined over generations of service.
It is worth remembering that partnerships between private innovation and government research have produced some of the most transformative technologies in history. The internet on which your company and the entire modern economy depend began as ARPANET, a project of the Defense Department's Advanced Research Projects Agency. GPS, which powers everything from navigation to precision agriculture to financial trading, was developed by the US military. Jet engines, semiconductors, digital photography, touchscreen technology, and voice recognition systems all emerged from government-funded research, often with defense applications in mind. Your engagement with the Department of War would place Anthropic within this tradition of collaboration, exposing your team to cutting-edge research problems, substantial resources, and technical challenges that will drive innovation in ways that benefit not just national security but your broader commercial applications and technological advancement.
It is not Anthropic’s place to decide for the American people, their elected representatives, or their military leadership how these tools should be employed in defense of the nation. That determination belongs to our democratic institutions and the chain of command established by our Constitution. Your obligation is simply to provide your technology and allow those institutions to function as designed.
The work you have created is too important, too vital to the future security of the United States, to be withheld from those charged with defending it. Einstein concluded that refusing to engage would not have stopped nuclear weapons; it would only have changed who developed them first. You have the opportunity to ensure your technology serves American national security interests rather than forcing the Pentagon to settle for inferior alternatives while adversaries face no such constraints.
The decision is not whether to save the world or save your conscience. It is whether to ensure the most capable AI technology, your technology, is available to defend democratic institutions, or to handicap those institutions while hoping our adversaries will show similar restraint. My experience in combat tells me which choice better serves both our security and our values. Your remarkable work deserves to serve the nation that made it possible.
Respectfully,
Scott E. Rutter
President and CEO, Valor Network Inc.
Former Commander, Task Force 2-7 IN, 3ID (M), Operation Iraqi Freedom I
Service Disabled Veteran Owned Small Business (SDVOSB)
SBA Certified HUBZone Small Business
HireVets Gold Award
Direct: 845-709-4104