By Crispin Rovere, The National Interest
Graham Allison alerts us to artificial intelligence being the epicenter of today’s superpower arms race.
Drawing heavily on Kai-Fu Lee’s basic thesis, Allison sets out the battlelines: the United States vs. China, across the domains of human talent, big data, and government commitment.
Allison further points to the absence of controls, or even dialogue, on what AI means for strategic stability. With implied resignation, his article acknowledges that Pandora’s box has already been opened, noting that many AI advancements occur in the private sector beyond government scrutiny or control.
However, unlike the chilling and destructive promise of nuclear weapons, the threat posed by AI in popular imagination is amorphous, restricted to economic dislocation or sci-fi depictions of robotic apocalypse.
Absent from Allison’s call to action is an explanation of the “so what?”—why does the future hinge on AI dominance? After all, the few examples Allison does provide (mass surveillance, pilot HUDs, autonomous weapons) reference continued enhancements to the status quo—incremental change, not paradigm shift.
As Allison notes, President Xi Jinping awoke to the power of AI after AlphaGo defeated one of the world’s top Go players, Lee Sedol. But why? What did Xi see in this computation that persuaded him to make AI the centerpiece of Chinese national endeavor?
The answer: AI’s superhuman capacity to think.
To explain, let’s begin with what I am not talking about. I do not mean so-called “general AI”—the broad-spectrum intelligence with self-directed goals, acting independently of, or in spite of, the preferences of its human creators.
Eminent figures such as Elon Musk and Sam Harris warn of the coming of general AI, and in particular of the so-called “singularity,” in which an AI gains the ability to rewrite its own code. According to Musk and Harris, this will precipitate an exponential explosion in that AI’s capability, reaching an IQ of 10,000 and beyond in a matter of mere hours. At that point, they argue, AI will become to us what we are to ants, with similar levels of regard.
I concur with Sam and Elon that the advent of artificial general superintelligence is highly probable, but it still requires transformative technological breakthroughs whose circumstances are hard to predict. Accordingly, whether general AI is realized 30 or 200 years from now remains unknown, as does the nature of the intelligence created: whether it is conscious or instinctual, innocent or a weapon.
When I discuss the AI arms race, I mean the continued refinement of existing technology: artificial intelligence that, while a true intelligence in the sense of being able to self-learn, has a single programmed goal constrained within a narrow set of rules and parameters (such as a game).
To demonstrate what President Xi saw in AI winning a strategy game, and why the global balance of power hinges on it, we need to talk briefly about games.
Artificial Intelligence and Games
There are two types of strategy games: games of “complete information” and games of “incomplete information.” A game of complete information is one in which every player can see all of the parameters and options of every other player.
Tic-Tac-Toe is a game of complete information. An average adult can “solve” this game with less than thirty minutes of practice. That is, adopt a strategy such that, no matter what your opponent does, you can correctly counter it and obtain at least a draw. If your opponent deviates from that same strategy, you can exploit them and win.
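For readers curious what “solving” a game means computationally, here is a minimal Python sketch (an illustration added for this point, not something from the article) that exhaustively searches Tic-Tac-Toe with a simple negamax recursion and confirms that perfect play by both sides ends in a draw:

```python
from functools import lru_cache

# A board is a tuple of 9 cells: 'X', 'O', or ' '. X moves first.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Best outcome the side to move can force: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w is not None:
        return 1 if w == player else -1
    if ' ' not in board:
        return 0  # board full, no winner: draw
    opponent = 'O' if player == 'X' else 'X'
    best = -1
    for i, cell in enumerate(board):
        if cell == ' ':
            child = board[:i] + (player,) + board[i + 1:]
            best = max(best, -value(child, opponent))
    return best

print(value((' ',) * 9, 'X'))  # 0: with best play on both sides, the game is a draw
```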
Conversely, a basic game of uncertainty (incomplete information) is Rock, Scissors, Paper. Upon learning the rules, every player immediately knows the best response to each throw: if your opponent throws Rock, you want to throw Paper; if they throw Paper, you want to throw Scissors, and so on.
Unfortunately, you do not know ahead of time what your opponent is going to do. Being aware of this, what is the correct strategy?
The “unexploitable” strategy is to throw Rock 33 percent of the time, Scissors 33 percent of the time, and Paper 33 percent of the time, each option being chosen randomly to avoid observable patterns or bias.
This unexploitable strategy means that, no matter what approach your opponent adopts, they won’t be able to gain an edge against you.
But let’s imagine your opponent throws Rock 100 percent of the time. How does your randomized strategy stack up? 33 percent of the time you’ll tie (Rock), 33 percent of the time you’ll win (Paper), and 33 percent of the time you’ll lose (Scissors)—the total expected value of your strategy against theirs is 0.
Is this your “optimal” strategy? No. If your opponent is throwing Rock 100 percent of the time, you should be exploiting your opponent by throwing Paper.
Naturally, if your opponent is paying attention they, in turn, will adjust to start throwing Scissors. You and your opponent then go through a series of exploits and counter-exploits until you both gradually drift toward an unexploitable equilibrium.
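To make the arithmetic concrete, here is a small Python sketch (an illustration added here, not something from the original article) that computes the expected value of one mixed strategy against another. It shows the uniform one-third mix breaking exactly even against an always-Rock opponent, while the exploitative always-Paper response wins every throw:

```python
import itertools

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(mine, theirs):
    """Payoff to me for a single throw: +1 win, 0 tie, -1 loss."""
    if mine == theirs:
        return 0
    return 1 if BEATS[mine] == theirs else -1

def expected_value(my_mix, their_mix):
    """Average payoff per throw of my mixed strategy against theirs."""
    return sum(my_mix[m] * their_mix[t] * payoff(m, t)
               for m, t in itertools.product(MOVES, repeat=2))

uniform = {m: 1 / 3 for m in MOVES}                         # the unexploitable mix
always_rock = {"rock": 1.0, "paper": 0.0, "scissors": 0.0}
always_paper = {"rock": 0.0, "paper": 1.0, "scissors": 0.0}

print(expected_value(uniform, always_rock))       # 0.0: breaks even, but leaves profit on the table
print(expected_value(always_paper, always_rock))  # 1.0: the exploitative counter wins every throw
```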
With me so far? Good. Let’s talk about computing and games.
As stated, nearly any human can solve Tic-Tac-Toe, and computers solved checkers many years ago. However, more complex games such as Chess, Go, and No-limit Texas Hold’em poker have not been solved.
Despite all three being mind-bogglingly complex, chess is the simplest of them. In 1997, reigning world champion Garry Kasparov was soundly beaten by the supercomputer Deep Blue. Today, anyone reading this has access to a chess computer on their phone that could trounce any human player.
Meanwhile, the eastern game of Go eluded programmers. Go has many orders of magnitude more combinations than chess. Until recently, humans beat computers by being far more efficient in selecting moves—we don’t spend our time trying to calculate every possible option twenty-five moves deep. Instead, we intuitively narrow our decisionmaking to a few good choices and assess those.
Moreover, unlike traditional computers, people are able to think in non-linear abstraction. Humans can, for example, imagine a future state in the late stages of the game beyond what a computer could possibly calculate. We are not constrained by a forward-looking linear progression: humans can imagine a future endpoint and work backwards from there to formulate a plan.
Many previously believed that this combination of factors—a near-infinite number of possible positions and the human ability to think abstractly—meant that Go would forever remain beyond the reach of the computer.
Then, in 2016, something unprecedented happened. The AI system AlphaGo defeated world champion Go player Lee Sedol 4-1.
But that was nothing: the following year, a successor system, AlphaGo Zero, was pitted against AlphaGo.
Unlike its predecessor, which had been trained on large databases of human Go games and theory, all AlphaGo Zero knew was the rules, from which it played itself continuously over forty days.
After this period of self-learning, AlphaGo Zero annihilated AlphaGo, not 4-1 but 100-0.
In forty days AlphaGo Zero had surpassed 2,500 years of accumulated human knowledge and even invented a range of strategies never before seen in the game’s history.
Meanwhile, chess computers are now a whole new frontier of competition, with programmers pitting their systems against one another to win digital titles. At the time of writing the world’s best conventional chess engine is a program known as Stockfish, able to smash any human Grandmaster easily. In December 2017 Stockfish was pitted against AlphaZero.
Like AlphaGo Zero before it, AlphaZero knew only the rules, and it taught itself to play chess over a period of nine hours. The result over 100 games? Twenty-eight wins, zero losses, and seventy-two draws in AlphaZero’s favor.
Not only can artificial intelligence crush human players, it also obliterates the best computer programs that humans can design.
Artificial Intelligence and Abstraction
Most chess computers play a purely mathematical strategy in a game yet to be solved. They are raw calculators and look like it too. AlphaZero, at least in style, appears to play every bit like a human. It makes long-term positional plays as if it can visualize the board, spectacular piece sacrifices that no conventional engine would ever attempt, and exploitative exchanges whose complexity would make a traditional program, if it were able, cringe. In short, AlphaZero is a genuine intelligence. Not self-aware, and constrained by a sandboxed reality, but real.
Despite differences in complexity, there is one limitation that chess and Go both share: they’re games of complete information.
Enter No-limit Texas Hold’em (hereafter, “Poker”). This is the ultimate game of uncertainty and incomplete information. In poker, you know what your hole cards are, the stack sizes for each player, and the community cards that have so far come out on the board. However, you don’t know your opponents’ cards, whether they will bet or raise and by how much, or what cards will come out on later streets of betting.
Poker is arguably the most complex game in the world, combining mathematics, strategy, timing, psychology, and luck. Unlike Chess or Go, Poker is played under hidden information and against multiple players simultaneously, making its strategic possibilities effectively unbounded. The idea that a computer could beat top Poker professionals seems risible.
Except that it has already happened. In 2017, the AI system “Libratus” comprehensively beat the best heads-up (two-player) poker players in the world.
And now, just months ago, another AI system, “Pluribus,” achieved the unthinkable—it crushed super high stakes poker games against multiple top professionals simultaneously, doing so at a win-rate of roughly five big blinds per hundred hands. For perspective, the skill gap between the best and worst teams in the English Premier League is not that large.
Having declared victory, the developers of Pluribus withheld the system from release on the grounds that they “don’t want to hurt the online poker ecosystem.”
If you believe that to be the only reason then, as Allison would say, I have a bridge to sell you. This is because poker mimics something else that Chess and Go do not—life.
Artificial Intelligence and Global Primacy
So how does an AI system that plays poker at superhuman levels translate to ultimate victory in great power competition?
To answer, reflect on this claim: the only difference between a game of uncertainty and real life is the degree of resolution.
A positive example could be designing a city. Your objective is to maximize human wellbeing and economic production. Many things are within your control: the placement of highways, schools, public transportation, utilities, parks, and shopping complexes. Other things are uncertain: emerging technologies, demographic change, disruptions to industry and entertainment.
By specifying these known and unknown criteria, an advanced AI system could teach itself to formulate the best possible city, running trillions of simulations in pursuit of that goal and reaching superhuman levels of urban design.
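The mechanics being gestured at here can be sketched as search under uncertainty. The toy Python below is purely illustrative and entirely my own construction (the budget shares, the sampled “futures,” and the wellbeing score are invented assumptions, not anything from the article or from a real planning system): it proposes candidate plans, scores each against many sampled futures, and keeps the one with the best average outcome.

```python
import math
import random

def sample_plan():
    """A candidate plan: shares of a fixed budget for transit, parks, utilities."""
    weights = [random.random() for _ in range(3)]
    total = sum(weights)
    return [w / total for w in weights]

def sample_future():
    """An uncertain future: how strongly each investment ends up paying off."""
    return [random.uniform(0.5, 1.5) for _ in range(3)]

def wellbeing(plan, future):
    """Invented score: diminishing returns on each investment, scaled by the future."""
    return sum(f * math.sqrt(p) for p, f in zip(plan, future))

def best_plan(n_plans=5_000, n_futures=100):
    """Score each candidate against many sampled futures; keep the best average."""
    best, best_score = None, float("-inf")
    for _ in range(n_plans):
        plan = sample_plan()
        avg = sum(wellbeing(plan, sample_future()) for _ in range(n_futures)) / n_futures
        if avg > best_score:
            best, best_score = plan, avg
    return best, best_score

if __name__ == "__main__":
    plan, score = best_plan()
    print("best shares (transit, parks, utilities):",
          [round(p, 2) for p in plan], "avg wellbeing:", round(score, 3))
```

Real systems learn far more cleverly than this brute-force loop, but the propose-simulate-evaluate cycle is the core idea; change the assumed score or the assumed uncertainties and the “optimal” city changes with them.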
But this humanist function is not the primary driver of AI advancement. Rather, it is the great power rivalry between China and the United States.
To illustrate how this plays out, imagine that you are China’s leader. Your strategic objective over the next three decades is to replace the United States as the dominant power in the Indo-Pacific and then the wider world.
You know many details about your adversary, including how many planes, ships, submarines, and soldiers the United States presently fields, along with their specifications and capabilities to varying degrees of confidence. You know the contents of published budget forecasts and force development plans.
There is also uncertainty: classified American capabilities and projects, shifting alliances, unforeseen international crises and so on.
With this mountain of information, there will be one optimal way for China to invest so as to maximally exploit America’s weaknesses while mitigating its own shortcomings and risks. Human military planners will not hold a candle to an AI system that has gamed this out hundreds of trillions of times.
In this domain, AI even has an advantage relative to poker. Poker professionals live and die in the gladiatorial arena, constantly innovating and improving, held accountable by personal financial ruin. Defense planners, meanwhile, beaver away in middling government jobs doing cyclical tasks, their outcomes compromised by politics, bureaucracy, vested interests and internal rivalry.
As incomprehensible as it might seem today, in the years ahead AI systems will be making nearly all military decisions during conflict. This is inevitable. Imagine facing an enemy led by all of history’s greatest generals combined into one. Imagine further that they had played out each forthcoming battle as if they had a lifetime to prepare. Add to this that they never tire or give in to fear or distraction, always performing at their absolute peak. Under these conditions, no human decision-maker stands a chance.
Bottom line: in a world in which two equivalent superpowers are in conflict, the one with the slightly worse AI loses every battle.
Many readers may find this fantastical right now, but consider AI to be like the internet in its scale of disruptive change, only this time in the exclusive hands of a single dominant power.
A solution for how the AI arms race can be safely managed is not yet apparent. Right now, as Allison emphatically suggests, the United States must place AI at the absolute core of national achievement, or surrender global primacy to China. The choice truly is that binary.
Crispin Rovere is an Australian public servant and professional poker player. Formerly he was a Ph.D. candidate at the ANU’s Strategic and Defence Studies Centre (SDSC) and has published on nuclear policy. Crispin is the author of The Trump Phenomenon: How One Man Conquered America.
Image: Reuters.
This piece was originally published by The National Interest.