7 May 2019

Legal regulation of AI weapons under international humanitarian law: A Chinese perspective

Qiang Li & Dan Xie

Arguably, international humanitarian law (IHL) evolves with the development of emerging technologies. The history of IHL has demonstrated that the adoption of any new technology presents challenges to this body of law. With the advent of artificial intelligence (AI), this tendency has become even more apparent as humans seek to put such technology to military use. When it comes to weapons, the combination of weapons and AI technology has increasingly drawn attention from the international community. Following high-tech weapon systems such as cyber-attack software and armed drones, combat robots of various types have been developed and employed.[1] Potentially, artificial intelligence will not only significantly increase the efficiency and lethal effect of modern kinetic weapons, but also partially restrict or even completely eliminate human intervention in all aspects of strategy design, battle organization and tactics implementation.

AI weapons—also known as autonomous weapon systems (AWS), defined by the ICRC as weapons that can independently select and attack targets, i.e., with autonomy in the ‘critical functions’ of acquiring, tracking, selecting and attacking targets—have raised a series of issues, both legal and ethical. It is debatable whether such weapons or weapon systems, with their capacity to learn, reason, make decisions and act independently of human intervention, should be employed on future battlefields. In all circumstances, they must be employed in accordance with the principles and rules of IHL.


Legal review before employment

The First Additional Protocol to the Geneva Conventions (AP I)—to which China is a party—provides that, in the study, development, acquisition or adoption of a new weapon, means or method of warfare, States are under an obligation to determine whether its employment would, in some or all circumstances, be prohibited by IHL or any other relevant rule of international law (Art 36 AP I). More specifically, the legality of new weapons must be assessed against the following criteria:

First, are the new weapons prohibited by specific international conventions, such as the Chemical Weapons Convention, the Biological Weapons Convention or the Convention on Certain Conventional Weapons?

Second, would such weapons cause superfluous injury or unnecessary suffering, or widespread, long-term and severe damage to the natural environment (Art 35 AP I)?

Third, would such weapons be likely to have indiscriminate effects (Art 51 AP I)?

Lastly, would such weapons accord with the principles of humanity and the dictates of public conscience—the Martens Clause (Art 1(2) AP I)?[2]

This means that AI weapons must be incorporated into the legal framework of IHL, with no exceptions. The principles and rules of IHL should and shall apply to AI weapons.

Precautions during employment

Humans will make mistakes. The same is true for machines, however ‘intelligent’ they are. Since AI weapons are designed, manufactured, programmed and employed by humans, the consequences and legal responsibilities arising from their unlawful acts must be attributed to humans. Humans should not use the ‘errors’ of AI systems as an excuse to evade their own responsibilities; that would not be consistent with the spirit and value of the law. Accordingly, AI weapons or weapon systems should not be characterized as ‘combatants’ under IHL and thereby made to bear legal responsibility themselves. In any circumstance, wrongful targeting by AI weapon systems is not a problem of the weapon itself. Therefore, when employing AI weapon systems, programmers and end users are under a legal obligation to take all feasible precautionary measures to ensure that such employment accords with the fundamental rules of IHL (Art 57 AP I).

Accountability after employment

If humans are responsible for the employment of AI weapons, which of these humans holds responsibility? Is it the designers, the manufacturers, the programmers or the operators (end users)? In the view of many Chinese researchers, the end users must take primary responsibility for wrongful targeting by AI weapons. Such an argument derives from Article 35(1) of AP I, which provides that ‘in any armed conflict, the right of the Parties to the conflict to choose methods or means of warfare is not unlimited’. In the case of fully autonomous AI weapon systems operating without any human control, those who decide to employ them—normally senior military commanders and civilian officials—bear individual criminal responsibility for any serious violations of IHL that may result. Additionally, the States to which they belong incur State responsibility for such serious violations where these are attributable to them.

Moreover, the targeting performance of AI weapon systems is closely tied to their design and programming. The more autonomy they have, the higher the design and programming standards must be in order to meet the requirements of IHL. To this end, the international community is encouraged to adopt a new convention specific to AI weapons, along the lines of the Convention on Certain Conventional Weapons and its Protocols, the Anti-Personnel Mine Ban Convention or the Convention on Cluster Munitions. At the very least, under the framework of such a new convention, the design standards of AI weapons shall be formulated; States shall be responsible for the design and programming of those weapons with high levels of autonomy; and States that manufacture and transfer AI weapons in a manner inconsistent with relevant international law, including IHL and the Arms Trade Treaty, shall incur responsibility. Furthermore, States should also provide legal advisors to designers and programmers. In this regard, the existing IHL framework does not fully respond to these new challenges. For this reason, in addition to developing IHL rules, States should also be responsible for developing their national laws and procedures, in particular transparency mechanisms. On this matter, States advanced in AI technology should play an exemplary role.

Ethical aspect

AI weapons—especially lethal autonomous weapon systems—pose a significant challenge to human ethics. AI weapons do not have human feelings, and there is a higher chance that their use will result in violations of IHL rules on methods and means of warfare. For example, they can hardly identify a person’s willingness to fight, or understand the historical, cultural, religious and humanistic value of a specific object. Consequently, they cannot be expected to respect the principles of military necessity and proportionality. They may even significantly undermine the universal human values of equality, liberty and justice. In other words, no matter how much they look like humans, they are still machines. It is almost impossible for them to truly understand the meaning of the right to life: machines can be repaired and reprogrammed repeatedly, but life is given to humans only once. From this perspective, even though the employment of non-lethal AI weapons may still be possible, highly lethal AI weapons must be totally prohibited at both the international and national levels in view of their high degree of autonomy. It should be acknowledged, however, that this reasoning may not be persuasive to all, because it is essentially not a legal argument but an ethical one.

Concluding remarks

We cannot predict whether AI will completely replace human combatants and whether so-called robotic wars will emerge. However, it must be observed that there exists a huge gap between nations in terms of AI technological capabilities. For most countries, procuring such capabilities and putting them to military use remains an unreachable goal. In other words, some States may have the potential to employ AI weapons on the battlefield, while others may not. In such cases, the legality of AI weapons and their employment will inevitably have to be assessed, and IHL will be resorted to. As a result, the imbalance in military technologies will probably cause divergence in the interpretation and application of existing IHL rules. Nevertheless, it is important to note that the applicability of IHL to AI weapon systems is beyond all doubt.

Footnotes


Editor’s note

This post is part of the AI blog series, stemming from the December 2018 workshop on Artificial Intelligence at the Frontiers of International Law concerning Armed Conflict held at Harvard Law School, co-sponsored by the Harvard Law School Program on International Law and Armed Conflict, the International Committee of the Red Cross Regional Delegation for the United States and Canada and the Stockton Center for International Law, U.S. Naval War College.

Other blog posts in the series include:
Ashley Deeks, Detaining by algorithm

See also
Eric Talbot Jensen, The human nature of international humanitarian law, August 23, 2018

DISCLAIMER: Posts and discussion on the Humanitarian Law & Policy blog may not be interpreted as positioning the ICRC in any way, nor does the blog’s content amount to formal policy or doctrine, unless specifically indicated.
