Military Implications of AlphaGo

The Google DeepMind Challenge Match (March 9-15, 2016) is likely a seminal moment not only in Go and human-versus-machine competition, but also in military operations. Over the week-long event, Google’s AlphaGo program defeated the top-ranked world champion Go player Lee Sedol 4:1, establishing computer dominance over humans in the last game-based bastion of human intellectual superiority. Unlike chess, or even Jeopardy!, the complexity of Go rises to the extent that traditional brute-force methods (by which the computer plays out all possibilities and selects the winning combination) are useless. For the sake of comparison, chess has a maximum of 10^46.7 possible games, with 400 possible combinations for the first two moves on the board. Go has 10^768 possible game permutations, with 129,960 possible combinations for the same two moves. In effect, Go is an incalculable game (due to its complexity), one based on balance, judgment, and harmony of play. Thus, it had remained beyond the reach of computers – until now.
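The opening-move comparison can be checked directly (a quick sketch, assuming 20 legal first moves per side in chess and a standard 19×19 Go board):

```python
# Chess: each side has 20 legal first moves (16 pawn moves + 4 knight moves),
# so 20 * 20 = 400 possible positions after one move by each player.
chess_first_two = 20 * 20

# Go: the first stone can be placed on any of 19*19 = 361 intersections,
# the second on any of the remaining 360, giving 361 * 360 = 129,960 openings.
go_first_two = 361 * 360

print(chess_first_two)  # 400
print(go_first_two)     # 129960
```

The gap only widens with each further move, which is why exhaustive search fails for Go.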

AlphaGo’s decisive victory is based on programming that forgoes brute force in favor of tactical analysis and strategic precision, built on the Monte Carlo method originally designed for simulating nuclear explosions.[1] While the psychology literature remains scant in this field, the method used by AlphaGo relatively closely mimics human thought, with the advantage of additional clarity and speed in computing the more calculable portions of the game. In essence, it mimics the strategic elements of human thought, except better. This follows closely from the ideology of the AlphaGo development team – DeepMind – which is to create a machine whose AI functions follow the development of the human mind. As a result, unlike Deep Blue or Watson, DeepMind systems are not designed for a single task; they are instead adaptive neural networks that can learn any process through analysis and simulation. More importantly, as AlphaGo has demonstrated, the neural network systems pioneered by DeepMind are capable of innovation, taking their analysis and skill far beyond the programming and data input of their programmers.[2] DeepMind’s co-founder and CEO, Demis Hassabis, designed the concept following his postdoctoral work in neuroscience.

The neural network system created by DeepMind is a dual-layered process consisting of a policy network and a value network. The policy network takes all possible moves and trims them down to the most likely candidates, drastically lowering the amount of computation in the subsequent searches. The value network is much more complex: it generates a value indicating how likely a particular move is to lead to victory. It does this not by running a tree search on each position but from its learned data, having been “taught” on 30 million different stone positions and on games played using the policy network alone. AlphaGo then combines these two networks with Monte Carlo tree search, running trees only on the positions indicated by the policy network. The resulting positions are evaluated in two ways: by the value network and by a fast rollout policy, which makes a move every 2 µs. Combining all of this data, AlphaGo makes its decision.[3]
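The interplay of the two networks and the tree search can be illustrated in miniature. The following is a toy Python sketch, not DeepMind’s code: the “game” (add 1 or 2 to a running total, aiming for 10), the uniform policy prior, and the distance-based value estimate are all hypothetical stand-ins, and the fast rollout policy is omitted for brevity.

```python
import math

def legal_moves(state):
    return [1, 2] if state < 10 else []

def policy_network(state, moves):
    """Stand-in policy: a prior probability for each legal move.
    (AlphaGo learns these priors from ~30 million positions; here,
    for simplicity, the prior is uniform.)"""
    return {m: 1.0 / len(moves) for m in moves}

def value_network(state):
    """Stand-in value: estimated 'win probability' of a position.
    In the toy game, the closer the total is to 10, the better."""
    return min(state, 10) / 10.0

class Node:
    def __init__(self, state, prior):
        self.state, self.prior = state, prior
        self.visits, self.value_sum = 0, 0.0
        self.children = {}  # move -> Node

def select_move(root_state, simulations=200, c_puct=1.0):
    root = Node(root_state, 1.0)
    for _ in range(simulations):
        node, path = root, [root]
        # Selection: descend by a PUCT-style rule balancing the average
        # value (exploitation) against the policy prior (exploration).
        while node.children:
            parent = node
            node = max(parent.children.values(), key=lambda ch:
                (ch.value_sum / ch.visits if ch.visits else 0.0)
                + c_puct * ch.prior * math.sqrt(parent.visits) / (1 + ch.visits))
            path.append(node)
        # Expansion: the policy network weights the candidate moves,
        # so unlikely branches receive little search effort.
        moves = legal_moves(node.state)
        if moves:
            priors = policy_network(node.state, moves)
            for m in moves:
                node.children[m] = Node(node.state + m, priors[m])
        # Evaluation: the value network scores the leaf directly,
        # in place of playing the game out to the end.
        leaf_value = value_network(node.state)
        # Backup: propagate the evaluation along the search path.
        for n in path:
            n.visits += 1
            n.value_sum += leaf_value
    # As in AlphaGo, play the most-visited move at the root.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]
```

Here `select_move(0)` favors the larger step because the value estimate rises toward 10; the point is only to show how priors trim the search and a learned evaluation replaces exhaustive playouts.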

However, DeepMind and AlphaGo have also changed the landscape of AI in general, in terms of possible military development and application. The past few decades have seen a rapid ramping up of military mechanization – aerial drones, robotic dogs, etc. – as well as the increasing use of technical data and metadata to create a deeper and more nuanced picture of the battlefield, and of the theater of war as a whole. The NSA’s construction of massive data storage centers in Salt Lake City and Fort Meade, with 5-zettabyte and 1-yottabyte capacities (respectively), indicates the sheer scale of present and anticipated data gathering.[4]

The complexities of integrating so much information and acting on it in a timely and efficient manner are surpassing the human capacity to use the data effectively. Again, Demis Hassabis notes that the problems facing us today, particularly as they apply here, are data overload and systems complexity. The former is the problem of parsing immense quantities of data to find the relevant parts. The latter, building on the former, is that the systems where this data is supposed to be used are themselves overwhelmingly complex – requiring a lifetime of learning just to catch up on all the developments, never mind positing a new idea.[5] With the advent of programs like AlphaGo, whose strategic analysis can meaningfully integrate incredible quantities of data, and which can keep learning 24/7, the time AI spends learning is significantly cut down, and the appropriation and integration of data is far more efficient.[6] Additionally, with the ability to develop and test strategies by simulation at a rate far beyond human capacity, neural network AI systems can find new and innovative correlations and strategies, and do so quickly.

With various databases and robotic tools at its disposal, one can imagine that AlphaWar (a hypothetical militarized version of AlphaGo) would be able to take in real-time data from a battlefield and beyond, run a deep analysis of all available data and metadata, prioritize targets, key regions, etc., and carry out appropriate strikes faster and more efficiently – all without human input. The process itself is not new to our military; AlphaWar would only streamline the existing human-based analysis, make faster decisions, and likely do it all better.

This leaves us at a crossroads. The first possible scenario is reminiscent of Skynet (of Terminator fame), and carries the danger inherent in the lack of human ethical oversight of military actions. Just as with its innovative understanding of Go, AlphaWar may find innovative ethical solutions entirely in line with our claimed morality – with horrifying consequences. The second possible scenario, on the other hand, notes that such mechanized assaults are already underway, and that AlphaWar would simply make better decisions – in terms of analysis, scope, amount of model-integrated information, etc.

While various philosophers, scientists, and other specialists called for a ban on the use of AI in warfare as recently as July 2015,[7] what they had in mind was primarily autonomous weapons – i.e., self-guided drones and the like. Additionally, up until the Google DeepMind Challenge Match, a realistic possibility of truly autonomous military systems was seen as a fanciful idea, with Patrick Lin of California Polytechnic saying that “a lot of people are rightly skeptical that [technology] would ever advance to the point where it has anything called full autonomy.”[8]

In 2012, the DoD issued a directive widely described as banning the use of autonomous and semi-autonomous weapons for ten years. However, the directive is actually geared toward establishing “guidelines designed to minimize the probability and consequences of failures in autonomous and semi-autonomous weapons systems that could lead to unintended engagements.”[9] The kinds of “unintended engagements” the DoD had in mind are either Blue-on-Blue (friendly fire) incidents or engagement without authorization – the kinds currently faced by the human elements of the military, as well as by automated defense measures such as Israel’s Iron Dome.

In both cases, autonomous weapons were understood as local mechanical entities, acting in a limited way on a particular field of battle. Further, the possible timeline for such developments was projected as decades in the future. However, AlphaGo’s structure, scope, and analytic capabilities are not so much an issue of particular drone deployment as of theater-of-war deployment. The type of problem anticipated by the DoD is equivalent to a gun jamming or accidentally firing; the reality is more akin to creating a fully mechanized AI military. Further, the development of this technology is not a matter of some far-off future – AlphaGo is already here. Thus, with the rather unexpected arrival of a technology that surpasses the best anticipations of some of the best minds in the field, it is not a matter of whether this technology gets integrated, but when.

The beauty of the second scenario is its potential to avoid or minimize bloodshed. The best case is a future where an apparently inevitable war is concluded by two state computers facing off in a series of simulated battles, ending in a bloodless surrender – or in the swift and relatively bloodless defeat of one side by the superior strategies of a purely robotic military (akin to the fight sequences of Yimou Zhang’s 2002 film Hero). More realistically, we can imagine a strategy developed and adjusted in real time that minimizes the loss of life, destruction, and further spread of violence – all because the human elements that cloud the way to the best solution have been eradicated. It is a much happier version of current practices.

The horror of the first scenario, Skynet notwithstanding, is the ruthless efficiency with which AlphaWar would most likely operate. The enemies faced by the US are not on the same playing field even against conventional military methods. Worse, the lack of coherent social structure in the regions where the US tends to be militarily involved means that the line between civilian and militant is blurry, and the spatial distinction even more so (e.g., three militants and two civilian families renting rooms in the same building). Rather than a bloodless war, the increased efficiency of AlphaWar promises a ruthless extermination campaign, with moves read and answered well in advance – whether seizing a village or sending a Hellfire missile against “high probability” future insurgents. While this may seem rather fantastical, two arguments settle it, unfortunately, solidly within the realm of high probability.

First, the military’s primary concern is mission success, followed by limiting the loss of life on our side, and finally (as a tertiary concern) limiting collateral damage.[10] The greatest potential of AlphaWar is the ability to read the large-scale theater well in advance of physical developments, and to act in a way that ensures large-scale and long-term success with the least loss of life on our side. This is literally the gameplay model of Go. Concern with limiting collateral damage is a handicap placed on the strategic functionality of the system – the equivalent of ensuring that the only captures in a Go game were of extraneous stones, even within the opposing territory, and never a destruction of the territory itself. Such a limitation would severely compromise the primary and secondary objectives noted above – which is why such concerns do not play a major part in our current military or Go strategy. Hence, adding a handicap to AlphaWar – a layer of restraint that humans do not functionally operate with – is not likely to be a priority in the implementation of the system.[11]
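The priority ordering above can be phrased as a weighted objective. A minimal sketch (the function, weights, and variable names are all hypothetical) of why a collateral-damage term acts as a pure handicap on such a system’s scoring of plans:

```python
def strategic_value(mission_success, own_losses, collateral_damage,
                    w_success=1.0, w_losses=0.5, w_collateral=0.0):
    """Toy scoring function. The priority ordering in the text corresponds
    to w_success > w_losses > w_collateral; the default w_collateral = 0
    reflects a system optimized without that restraint. Raising it is the
    'handicap': it can only lower the score of otherwise-optimal plans."""
    return (w_success * mission_success
            - w_losses * own_losses
            - w_collateral * collateral_damage)

# The same plan scores strictly lower once the penalty is switched on:
unrestrained = strategic_value(1.0, 0.1, 0.9)
restrained = strategic_value(1.0, 0.1, 0.9, w_collateral=0.5)
```

Since the penalty term never raises a plan’s score, an optimizer tuned purely for the first two objectives has no internal reason to adopt it.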

Second, the current series of definitions used by the military allows for exactly this kind of targeting of high-probability future insurgents. Case in point: Abdulrahman al-Awlaki, the 16-year-old son of US dissident and immaterial terrorism supporter Anwar al-Awlaki, was killed by a drone strike shortly after his father. This despite the fact that he was a US citizen, had no criminal record, and had no ties to any extremist groups (besides being his father’s son); his killing can thus be justified only in terms of highly likely future guilt by association. Likewise, all males aged 15-65 killed in US drone strikes are currently classified as enemy combatants – regardless of whether there is any proof for the claim.[12] Thus, the only difference with AlphaWar is that it would be able to carry out such strikes in greater number, with greater precision, and against a greater scope of potential future threats.

Given that the current human-based standards include precisely the kind of nightmare scenario mentioned, there is no reason to believe that replacing humans with a strategically better machine would produce different operational parameters or outcomes – save in degree.

As things stand, it is not clear just how quickly the military integration of software like AlphaGo will take place. Hunter-killer drones (e.g., Predator drones) only entered use after 9/11 (first strike: February 4, 2002) and are now a common go-to for military operations;[13] one can thus expect the new strategic technology to be meaningfully integrated within the next decade or so. The major hurdle is not the militarization of AlphaGo technology, but the notoriously bad data sharing between different agencies that would provide additional information to the system – such as the metadata currently used to target supposed militants in “signature strikes”.

Whether this potential new development is morally good or bad is as yet unknown. However, given the current military trends, the safe bet is on increased bloodshed.


[Originally published:]

[Research Assistant: Connor F. Applegate]

[1] Levinovitz, Alan. “The Mystery of Go, the Ancient Game That Computers Still Can’t Win.” Wired, May 5, 2014.
Rémi Coulom pioneered the Monte Carlo approach to computer Go in 2006.

[2] Hassabis, Demis. “The Theory of Everything.” ZeitgeistMinds production. YouTube, May 12, 2015. Accessed June 8, 2016.

[3] Silver, David, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. “Mastering the Game of Go with Deep Neural Networks and Tree Search.” Nature 529, no. 7587 (January 28, 2016): 484-89. doi:10.1038/nature16961.

[4] Kramer, Melody. “The NSA Data: Where Does It Go?” National Geographic. June 12, 2013. Accessed June 08, 2016.

[5] Hassabis, Demis. “The Theory of Everything.”

[6] Hassabis, Demis. “The Theory of Everything.”

[7] International Joint Conference on Artificial Intelligence, Buenos Aires, July 25-31, 2015.

[8] Knight, Will. “Military Robots: Armed, but How Dangerous?” MIT Technology Review. August 03, 2015. Accessed June 08, 2016.

[9] US Department of Defense. Directive 3000.09. November 12, 2012. (emphasis mine)

[10] United States of America. Army. Army Strategic Planning Guidance 2014. By Raymond T. Odierno and John M. McHugh. 1-30. 2014. Accessed June 06, 2016.

[11] While the current military model does use some degree of moral calculus in order to limit collateral destruction of military operations, the Doctrine of Double Effect guidelines are notoriously unstable, and have done little to prevent the kind of destruction that has led to failed and failing states in Afghanistan, Iraq, Libya, and Syria, amongst others.

[12] “U.S. Labels ALL Young Men In Battle Zones As “Militants” … And American Soil Is Now Considered a Battle Zone.” Washington’s Blog. June 1, 2012. Accessed June 08, 2016.

[13] Sifton, John. “A Brief History of Drones.” The Nation. February 07, 2012. Accessed June 08, 2016.

