19 January 2019

The Weaponization Of Artificial Intelligence


Technological development has become a rat race. In the competition to lead the emerging technology race and the futuristic warfare battleground, artificial intelligence (AI) is rapidly becoming the center of global power play. As seen across many nations, the development of autonomous weapons systems (AWS) is progressing rapidly, and this increase in the weaponization of artificial intelligence seems to have become a highly destabilizing development. It brings complex security challenges not only for each nation's decision makers but also for the future of humanity.

The reality today is that artificial intelligence is leading us toward a new algorithmic warfare battlefield that has no boundaries or borders, may or may not have humans involved, and will be impossible to understand and perhaps to control across the human ecosystem in cyberspace, geospace and space (CGS). As a result, the very idea of the weaponization of artificial intelligence, in which a weapon system, once activated across CGS, can select and engage human and non-human targets without further intervention by a human designer or operator, is causing great fear.


The thought of any intelligent machine or machine intelligence having the ability to perform any projected warfare task without human involvement or intervention -- using only the interaction of its embedded sensors, computer programming and algorithms in the human environment and ecosystem -- is becoming a reality that can no longer be ignored.

Weaponization of Artificial Intelligence

As AI, machine learning and deep learning evolve further and move from concept to commercialization, the rapid acceleration in computing power, memory, big data and high-speed communication is not only creating an innovation, investment and application frenzy but also intensifying the quest for AI chips. This ongoing rapid progress signifies that artificial intelligence is on its way to revolutionizing warfare and that nations will undoubtedly continue to develop the automated weapons systems that AI makes possible.

As nations individually and collectively accelerate their efforts to gain a competitive advantage in science and technology, the further weaponization of AI is inevitable. Accordingly, there is a need to visualize what an algorithmic war of tomorrow would look like, because building autonomous weapons systems is one thing, but using them in algorithmic warfare with other nations and against other humans is another.

As reports emerge of complex algorithmic systems supporting more and more aspects of war-fighting across CGS, the truth is that the commoditization of AI is now a reality. As seen in cyberspace, automated warfare (cyberwarfare) has already begun -- where anyone and everyone is a target. So, what comes next: geo-warfare and space-warfare? And who and what will be the targets?

The rapid development of AI weaponization is evident across the board: navigating and utilizing unmanned naval, aerial and terrain vehicles; producing collateral-damage estimations; deploying "fire-and-forget" missile systems; and using stationary systems to automate everything from personnel systems and equipment maintenance to the deployment of surveillance drones, robots and more. So, as algorithms support more and more aspects of war, an important question arises: which uses of AI in today's and tomorrow's wars should be allowed, which restricted, and which outright banned?

While autonomous weapons systems are believed to offer opportunities for reducing the operating costs of weapons systems -- specifically through more efficient use of manpower -- and will likely enable weapons systems to achieve greater speed, accuracy, persistence, precision, reach and coordination on the CGS battlefield, the need to understand and evaluate the technological, legal, economic, societal and security issues remains.

Role of Programmers and Programming

Amid these complex security challenges and the sea of unknowns coming our way, what remains fundamental for the safety and security of the human race is the role of programmers and programming, along with the integrity of semiconductor chips. The reason is that programmers can define and determine the nature of AWS (at least in the beginning) until AI begins to program itself.

However, if and when a programmer intentionally or accidentally programs an autonomous weapon to operate in violation of current or future international humanitarian law (IHL), how will humans control the weaponization of AI? Moreover, because AWS are centered on software, where should responsibility for errors and for the manipulation of AWS design and use lie? That brings us to the heart of the question: when and if an autonomous system kills, who is responsible for the killing, irrespective of whether it is justified?

Cyber-Security Challenges

In short, algorithms are by no means secure -- nor are they immune to bugs, malware, bias and manipulation. And since machine learning uses machines to train other machines, what happens if there is malware in, or manipulation of, the training data? While security risks are everywhere, connected devices increase the possibility of cybersecurity breaches from remote locations, and because the code is opaque, security is very complex. So, when AI goes to war with other AI (whether for cyber-security, geo-security or space-security), the ongoing cybersecurity challenges will add monumental risks to the future of humanity and the human ecosystem across CGS.
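To make the training-data concern concrete, the toy sketch below (purely illustrative; the classifier and data are invented for this example and are not drawn from any real weapons system) shows how an attacker who can flip labels in a training set can silently invert what a simple model learns, without touching a single line of its code:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Toy data: two classes of 2-D points, class 0 near (-2, -2), class 1 near (+2, +2).
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

def train(X, y):
    """'Train' a nearest-centroid classifier: store each class's mean point."""
    return {c: X[y == c].mean(axis=0) for c in (0, 1)}

def predict(model, X):
    """Assign each point to the class with the nearest centroid."""
    d0 = np.linalg.norm(X - model[0], axis=1)
    d1 = np.linalg.norm(X - model[1], axis=1)
    return (d1 < d0).astype(int)

# Attack: flip 80% of the labels in each class before training.
y_poisoned = y.copy()
flipped = np.concatenate([np.arange(0, 80), np.arange(100, 180)])
y_poisoned[flipped] = 1 - y_poisoned[flipped]

clean_acc = (predict(train(X, y), X) == y).mean()
poisoned_acc = (predict(train(X, y_poisoned), X) == y).mean()

# The poisoned model's centroids are effectively swapped, so it is
# confidently wrong on most points even though its code is unchanged.
print(f"clean accuracy: {clean_acc:.2f}, poisoned accuracy: {poisoned_acc:.2f}")
```

The training and prediction code is identical in both runs; only the data changed. That is precisely what makes poisoning attacks hard to detect by auditing code alone.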

While it seems autonomous weapons systems are here to stay, the question we all individually and collectively need to answer is: will artificial intelligence drive and determine our strategy for human survival and security, or will we?

Acknowledging this emerging reality, Risk Group initiated the much-needed discussion on autonomous weapons systems on Risk Roundup with Markus Wagner, a published author and Associate Professor of Law at the University of Wollongong in Australia.

Disclosure: Risk Group LLC is my company.


What's Next?

As nations individually and collectively accelerate their efforts to gain a competitive advantage in science and technology, the further weaponization of AI is inevitable. As a result, the positioning of AWS would alter the very meaning of being human and will, in no uncertain terms, alter the very fundamentals of security and the future of humanity and peace.

It is important to understand and evaluate what could go wrong if the autonomous arms race cannot be prevented. It is time to acknowledge that simply because technology may allow for the successful development of AWS does not mean that we should pursue it. It is perhaps not in the interest of humanity to weaponize artificial intelligence! It is time for a pause.
