What does it mean to engage in rebellion against a state? What are the ethical and practical implications of rebellion? Under what conditions is rebellion an option? Given our frequent training and arming of foreign rebels, it behooves us to have a functional understanding of these questions.
Actions have consequences. Military actions have especially large, wide-ranging, and long-lasting consequences. However, military actions are ultimately within our control. Who we vote for, what we vote for, what we support, and what we condemn all directly shape our ideas of the military and its use. Understanding these issues is a necessary part of functional democratic participation, as well as of ethical engagement with the world.
In his May 18 article for The RAND Blog, "ISIS: Weakened but Still Potent," Colin P. Clarke delivers an assessment…
As DeepMind prepares to retire AlphaGo from professional play, a mere 14 months after its release, the implications for militarized AI are staggering. Following up on "The Military Implications of AlphaGo," the present analysis explores how AlphaGo's progress over the past 14 months, along with the development of commercial drones and shifts in international relations, is likely to reshape the role of militarized AI.
The Google DeepMind Challenge Match (March 9-15, 2016) is likely a seminal moment not only in Go and human-versus-machine competition, but also in military operations. With various databases and robotic tools at its disposal, one can imagine that AlphaWar (a hypothetical militarized version of AlphaGo) could take in real-time data from the battlefield and beyond, run a deep analysis of all available data and metadata, prioritize targets, key regions, and so on, and carry out appropriate strikes faster and more efficiently, all without human input. The process itself is not new to our military. AlphaWar would simply streamline the existing, human-based analysis, make faster decisions, and likely do it all better.