On July 20, 1969, a Purdue graduate from Wapakoneta, Ohio, stepped onto the surface of the moon. I watched it on a black-and-white Zenith television sitting on the floor in the den of our New England farmhouse with my two brothers.
That den was not a big room, and the television was wedged between the fireplace and the family bookshelf. In the next room, under windows that looked out on a hedge of lilacs, was a stereo—a solid wooden cabinet the size of a dining-room sideboard. On its turntable we set a fragile arm with its embedded needle into grooves on black vinyl disks and listened to Broadway show tunes.
On the wall in the kitchen in the next room was an avocado green telephone with a rotary dial and an extra long cord so that my mother could talk on the phone while doing dishes.
In the cellar, my stepfather had a dark room. There we turned the plastic cartridges of light-sensitive film from our Kodak Instamatic cameras into black-and-white images, moving the paper carefully with tongs from one acrid pan of chemicals to another in the dim red light.
It was 1969. And Neil Armstrong took one small step for man; one giant leap for mankind. Just half a lifetime ago. And today, nearly everyone reading this has a computer in their pocket that is a television, and a bookshelf, and a music player, and a telephone, and a camera; a computer about a million times more powerful than all the computing power NASA had available to put a man on the moon.
Computers have changed nearly every aspect of our lives, and we should be asking ourselves how computers have changed our behaviors and our understanding, as well. Our kitchens, our professions, our cars, our health care, our entertainment, the way we communicate with people we love, the way we get our news or decide where to go out to dinner—all of it is different because of the power of computing.
What has mankind done with computing power?
Much of the answer is good almost beyond imagining. We have pulled billions of people from subsistence to surplus through the economic growth enabled by technology. We have advanced health care, and energy use, and education, and food production, and navigation.
At the same time, we are facing the disadvantages the computer revolution has brought with it: its intrusions on our privacy and its nearly addictive power over our brains.
In terms of computer-driven change, the military is little different from the rest of modern culture. If anything, the thinking we do about how computers have changed our practices and behaviors becomes even more pressing when we consider questions of war and defense. How do we, and how should we, think about computing and the use of force?
For most purposes, the consideration of just war is divided into two parts: the legitimacy of the resort to force (jus ad bellum), and the rules governing the conduct of hostilities (jus in bello). The growth of the power of computing challenges us in both of these areas in different ways.
With few exceptions, human societies have abhorred war even when they have found it to be sometimes justified. Modern international society accepts the use of force by some agents in world politics in particular circumstances. And a key moment in the development of this just-war analysis comes in the thirteenth century when St. Thomas Aquinas places the legitimate power to go to war (jus ad bellum) in the hands of a sovereign authority, writing, “In order that a war may be just three things are necessary. In the first place, the authority of the prince, by whose order the war is undertaken.” His second and third requirements for a just war, like those of his predecessor St. Augustine, were a just cause and right intent.
The medieval thinkers of Thomas’s time were living in a period of tremendous social and political change. By the middle of the thirteenth century the decentralized system of feudalism was losing ground to a new world of consolidated city-states and growing kingdoms.
A few centuries later, when Hugo Grotius was writing about international law in the early seventeenth century, sovereigns had gradually asserted a monopoly on the use of violence both within their borders and against other sovereigns. By the close of the nineteenth century, sovereignty and the exclusive right to wage war were characteristics of a state so strongly established that to suggest otherwise would have seemed preposterous.
Over the same stretch of time, an international consensus gradually developed that the methods and means of warfare (jus in bello) are not unlimited. In the late nineteenth and early twentieth centuries these customary principles began to be codified in conventions and treaties between states intended to mitigate the horrors of war.
Warfare contains an inherent tension. On the one hand, there is the desire to win and the consequent tendency to use all necessary means to secure victory. On the other, there is the creditable human awareness that life has value, and that war is an abnormal state of affairs fought not to destroy a civilization, but to achieve a better peace. G.I.A.D. Draper has called this dichotomy between the purposes of the law of war and the nature of war “probably the most acute point of tension between law and life.”
These rules of war include, for example, that pillage is prohibited. The wounded and sick shall be collected and cared for. Hospitals and hospital ships shall not be attacked. Medics and chaplains shall be respected and protected. Combatants may surrender. In some cases, entire categories of weapons have been proscribed, as with the 1925 Geneva Protocol prohibiting the use of poison gas in warfare.
Among those who have never served in the military, I often find skepticism that these rules of warfare actually influence the behavior of states and their combatants.
In August 2017, the U.S. Air Force chief of staff and I went to visit airmen in Iraq and Afghanistan. The day we arrived in Iraq, the Iraqi government, with the support of coalition allies, had launched the effort to take back the town of Tal Afar from ISIS.
We were standing in the operations center—a room about the size of a large classroom with screens on every wall—watching forty or fifty Airmen do their work. The Iraqi military on the ground had reported they were taking fire from a particular building. A remotely piloted aircraft, a drone, was guided overhead and trained its camera on the building, feeding full-motion video images to the screen on the wall of the operations center.
An Airman in charge of coordinating air support was on the radio with the pilot of an F-15 fighter jet, who reported the weapons he had available and said he could stay overhead five more minutes before he was out of fuel and would have to return to base. Another Airman at a console in the back was scanning maps of prohibited targets, while an Airman next to him calculated the effects of munitions and the probability of collateral damage if particular weapons were used.
And before the Airman in charge gave the instructions to the F-15 pilot to engage, he looked toward a young captain standing calmly in the middle of the buzzing room. She was the JAG officer, the lawyer. Cognizant of the law of war, monitoring the work going on around her, she returned the glance from the air boss, nodded and gave a thumbs up. The air boss gave the order to engage the target and it was destroyed.
There were two reporters with us that day and I was glad they saw, unscripted and in real time, that the United States brings its values to the fight. We train our military to comply with the law of war, even under pressure and under fire. It doesn’t mean we are perfect; there is nothing perfect on this side of the New Jerusalem. They also saw the precision that computing has brought to the application of air power.
Just as computing has changed how we pay our electric bill, computing has changed how we fight. In his book, Masters of the Air, Donald Miller tells the story of the bomber boys of the Eighth Air Force in the Second World War. Thousands of bombers based in the United Kingdom attacked aircraft factories and oil production and rail lines in places like Schweinfurt and Ploesti.
The accuracy wasn’t very good and the casualties were staggeringly high. More men died in the Eighth Air Force—twenty-six thousand dead—than in all of the United States Marine Corps during World War II. And the casualties of innocents on the ground were also high. Even by the Vietnam era, when Neil Armstrong walked on the moon, our bombing and navigation and situational awareness were not very good.
Computing changed that. Today the Air Force flies planes like the F-35 fighter. The F-35 is, in essence, a high-performance computer wrapped in a stealthy aircraft. It connects and shares with a network of space, ground, and cyber systems and with manned and unmanned aircraft to observe, orient, decide and act faster than adversaries. It sees first. It can shoot first. And it hits what it shoots because of computing power.
At its best, computing in warfare allows us to achieve just objectives to protect the nation and our vital national interests, while minimizing unnecessary destruction and risk to our military and innocent civilians. I would argue that, to this point in history, computing in warfare has allowed us to make better decisions as combatants. War is a horrible thing, and it remains imprecise, but the jus in bello effect of computers has been generally a movement toward greater precision and more narrow applications of force.
We must still face, however, the jus ad bellum effect of computers. Thus far, the application of computing in warfare has not really changed much about the authority to use force, which we still place within the province of the sovereign state. And, to the extent that computing has made militaries more accurate, it has advanced compliance with the principles of the humanitarian law of war.
But, like the canonists in the thirteenth century when the feudal system was dying, we live in a time of tremendous social and technological change. Consider Go, a complicated game of Chinese origin. In 2016, a London laboratory called DeepMind developed the first computer program to defeat a world champion at Go. The program was trained on thirty million moves played by human experts, and it had some capacity to learn.
Last fall, a new version of DeepMind’s AlphaGo program was released: a computer program that did not use any moves from human experts to train. It learned by playing millions of games against itself. After that training it took on its predecessor program, already the strongest Go player in the world, and defeated it one hundred games to zero. And then a successor program, built on the same self-play approach, went on to defeat the strongest chess computers at chess.
Pause to think about that for a moment. A computer program that used no human data beat other machines at a game it was not programmed to play.
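For readers who want a concrete sense of what “learning purely from self-play” looks like, here is a deliberately tiny sketch: a tabular value-learning program for tic-tac-toe that improves only by playing games against itself, with no human game data. It is not DeepMind’s method or code; it is a toy written for illustration, and every name and parameter in it is invented here.

```python
# Toy self-play learner for tic-tac-toe (illustration only; not DeepMind's code).
import random
from collections import defaultdict

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == " "]

Q = defaultdict(float)      # learned value of each (state, move) pair; starts at zero
ALPHA, EPSILON = 0.5, 0.1   # learning rate and exploration rate (arbitrary toy values)

def choose_move(board, player):
    """Mostly pick the move with the highest learned value; sometimes explore."""
    moves = legal_moves(board)
    if random.random() < EPSILON:
        return random.choice(moves)
    state = "".join(board) + player
    return max(moves, key=lambda m: Q[(state, m)])

def play_one_self_play_game():
    """Both sides are the same learner; nudge visited values toward the final outcome."""
    board, player, history = [" "] * 9, "X", []
    while True:
        move = choose_move(board, player)
        history.append(("".join(board) + player, move, player))
        board[move] = player
        champ = winner(board)
        if champ or not legal_moves(board):
            for state, m, p in history:
                reward = 0.0 if champ is None else (1.0 if p == champ else -1.0)
                Q[(state, m)] += ALPHA * (reward - Q[(state, m)])
            return
        player = "O" if player == "X" else "X"

if __name__ == "__main__":
    for _ in range(20000):   # the real systems play millions of games; thousands suffice here
        play_one_self_play_game()
    print("state-action values learned from self-play alone:", len(Q))
```

Even a crude learner like this gets noticeably better the more games it plays against itself, which is the essential point: the improvement comes from the machine’s own experience, not from records of human play.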
It’s not hard to imagine how machine learning like this will change our lives dramatically—far more and far faster than we have experienced in the years since Neil Armstrong walked on the moon. And, to be sure, this machine learning will enable new modes of warfare.
That should give us pause. As people of conscience, we are afraid that machines will teach themselves how to win the game irrespective of any moral code, undermining the limitations on the use of force that our societies have built over centuries. Will machines decide what to do based on utility or based on a moral worldview? How will strategists incorporate the use of Artificial Intelligence to influence the decisions of adversaries? How will computers navigate the inherent tension in war between the necessity to win and the need to understand that war is an abnormal condition fought for the sake of a better peace? In particular, how will Artificial Intelligences calculate the jus ad bellum considerations for going to war, when the jus in bello precision brought about by computers makes the costs of going to war appear less destructive than they might have been through the past hundred years? What will restrain an Artificial Intelligence if it determines that acting first has inherent advantages?
When writing about World War I, Barry Hunt described the opposed systems of alliances as “propelled by a grim self-induced logic.” We should be wary of self-induced logic in warfare. There is a moral imperative here. When it comes to warfare, humans must continue to bound and decide the why and the when, even as computers are increasingly engaged in the how.
But it isn’t just the means of warfare, the way we conduct warfare, that is being challenged at the moment. In the wake of World War II, governments were the primary sponsors of basic research. Today, there are seven very large companies that are the leaders in Artificial Intelligence. All of them are headquartered in America or China. More high-risk, long-term research in Artificial Intelligence and Machine Learning is being funded by companies and their wealthy owners than by governments.
And there is certainly tension when the positions of a private company and a government are not aligned. Over the past year, technology companies have been heavily criticized for the use and sale of personal information and their obligation to police their platforms. Companies are responsible to their shareholders, primarily, but also to some extent to their customers and employees. And in June 2018, after employee protests, Google CEO Sundar Pichai published a set of Artificial Intelligence principles. Google, he announced, would not develop Artificial Intelligence for use in weapons.
Google is just one company, though a very large one. And, in our society, companies are collections of people who can choose how they will use their money and their talents. But when a handful of large companies control the power of Artificial Intelligence, it raises questions about what entities will make decisions about its application and its impact on our lives in the United States and around the world. We may be living in a time when power is shifting again, not toward popes or feudal lords, but to companies who control tools that learn and act in ways that we are only beginning to understand.
Hedley Bull once described the system of sovereign states as an “anarchical society.” Anarchical, that is to say, but still a society and consequently guided by some rules and norms of behavior. We should be cautious about undermining that society and the legitimacy of nation-states.
Almost fifty years ago, on July 20, 1969, Neil Armstrong stepped onto the surface of the moon. In the generation since, we have all witnessed a profound revolution largely enabled by the power of computing. And yet, even greater change may be coming. There are children alive today who, fifty years from now, may think that things changed quite slowly for the generation after 1969, when compared to the pace of change they will have navigated in their lives.
I hope, from time to time, they pause, and think carefully about the moral choices they will make.
— This first appeared in The National Interest —
Heather Wilson is the twenty-fourth Secretary of the Air Force.
Image: The sun sets behind an Australian F-35A Lightning II aircraft at Luke Air Force Base, Ariz., June 27, 2018. The first Australian F-35 arrived at Luke in December 2014. Currently six Australian F-35s are assigned to the 61st Fighter Squadron, where their pilots train alongside U.S. Air Force pilots. (U.S. Air Force photo by Staff Sgt. Jensen Stidham). Flickr / U.S. Department of Defense