Actions have consequences. Military actions have very big, wide-ranging, and long-lasting consequences. However, military actions are ultimately within our control. Who we vote for, what we vote for, what we support and what we condemn, are all elements that directly shape our ideas of the military and its use. Understanding these issues is a necessary part of functional democratic participation, as well as an ethical engagement with the world.
In his May 18 article for the RAND blog ("ISIS: Weakened but Still Potent"), Colin P. Clarke delivers an assessment…
As DeepMind prepares to retire AlphaGo from professional play, a mere 14 months after its debut, the implications for militarized AI are staggering. Following up on "The Military Implications of AlphaGo," the present analysis explores how AlphaGo's progress over the past 14 months, along with the development of commercial drones and shifts in international relations, is likely to reshape the role of militarized AI.
The Google DeepMind Challenge Match (March 9–15, 2016) is likely a seminal moment not only in Go and human-versus-machine competition, but also in military operations. With various databases and robotic tools at its disposal, one can imagine that AlphaWar (a hypothetical militarized version of AlphaGo) could take in real-time data from the battlefield and beyond, run a deep analysis of all available data and metadata, prioritize targets, key regions, and so on, and carry out appropriate strikes faster and more efficiently – all without human input. The process itself is not new to our military; AlphaWar would merely streamline the existing, human-based analysis, make faster decisions, and likely do it all better.
Context lets us read charitably, and reading charitably keeps us from becoming trolls through decontextualization. We consider three examples of decontextualization – drawn from history, science, and cybersecurity – and show how simple knowledge of context discredits the claims entirely.