Killer Robots – A Case for SAIs


Author: Jan Roar Beckstrom, Chief Data Scientist, The Office of the Auditor General of Norway1

Killer robots, also known as lethal autonomous weapon systems (LAWS), are not science fiction. They exist. Soon we could have AI-powered drone swarms in which the drones themselves decide whom to kill and what to attack. SAIs should play a role in keeping the development and use of such weapons under human control, in line with international law.

Introduction

Imagine this: You take a swarm of very small drones, load them with an AI algorithm trained to recognise a certain type of military uniform, add 5 grams of high explosives, and send them out to hunt for enemies to kill.2 After deployment there is no human involvement; the drones themselves decide whom to target and attack. However, one of the drones decides to target and subsequently kills a soldier who is surrendering. This would be a clear violation of International Humanitarian Law (IHL), as expressed in the Geneva Conventions.3

Or what about a situation where an AI-powered LAWS acts on a false positive and erroneously engages a similar enemy system, in a “clash of the machines”? The enemy system responds and calls in reinforcements, and we might have an unintended war on our hands in seconds. Such weapons are not science fiction. The necessary technology is to a large degree already available, and the remaining challenges are engineering problems of miniaturisation and systems integration.

The potential of LAWS to cause death and destruction, and to set off a new arms race, cannot be overstated. While a total ban on certain types of LAWS may be a possibility, a far-reaching ban is difficult to foresee. The possible military advantage offered by such systems will probably be far too great and too tempting for the governments of the world. If anything, you will not want to be the only one without them. So they need to be regulated, and governments and armed forces need to be held accountable for the research, development, procurement, deployment and use of such weapons. “Accountability” is the cue for the SAIs of the world to enter the scene.

Killer Robots – the Technology

Weapons with some kind of autonomy have been around for a long time. Simple examples are tripwires, anti-personnel mines and cruise missiles. These are typically “set-up-and-forget” or “fire-and-forget” systems. For example, once a cruise missile’s target is programmed and the missile is launched, it steers itself towards the target chosen by humans. In addition, the time between a cruise missile’s launch and impact is normally quite limited, which is important in order to avoid civilian casualties.

The new thing about LAWS is that artificial intelligence (AI) has entered the field. Cruise missiles are preprogrammed; they do not themselves make any decisions about which target to hit. AI-powered robots, for example drones, can make such decisions. Then we are in a situation where machines decide, without human intervention, who might live and who might die.

To be able to do this, machines need to be equipped with an AI algorithm based on machine learning. Take the example of killer drones: using machine learning, you can train an algorithm to distinguish between civilians and military personnel by feeding labelled images to the algorithm. Simply put, image1 = “civilian”, image2 = “enemy soldier”; repeat a few thousand times with similar images and you have taught the machine to discriminate between civilians and soldiers. Such an algorithm could become very good (perhaps 99% correct) at separating the two groups. It is not very different from an AI-powered spam filter deciding what is spam and what is not.
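To make this concrete, below is a minimal sketch of the supervised-learning idea described above, written in Python with scikit-learn. Synthetic numbers stand in for real labelled images, and every name, shape and figure is an illustrative assumption, not a description of any actual system.

    # Minimal sketch of supervised classification: feed labelled examples to an
    # algorithm until it can separate the two classes. Synthetic data stands in
    # for labelled images; 0 = "civilian", 1 = "soldier". Illustration only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(seed=42)

    # Pretend each "image" is a flattened 8x8 greyscale picture (64 pixels).
    # The two classes are drawn from slightly different distributions, so
    # there is a pattern for the model to learn.
    n_per_class = 2000
    class_civilian = rng.normal(loc=0.0, scale=1.0, size=(n_per_class, 64))
    class_soldier = rng.normal(loc=0.5, scale=1.0, size=(n_per_class, 64))

    X = np.vstack([class_civilian, class_soldier])
    y = np.array([0] * n_per_class + [1] * n_per_class)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    # "Repeat a few thousand times": fitting the model is the repetition.
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)

    # The output is an accuracy figure, not a moral judgement.
    print("accuracy:", accuracy_score(y_test, model.predict(X_test)))

Note that even a model scoring 99% on held-out test images will, by construction, mislabel roughly one person in a hundred, which is exactly the accountability problem discussed below.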

Another “feature” of AI-powered LAWS is that, since the machine itself can potentially decide when to attack, it does not have to attack immediately. A drone might “loiter” until the probability of maximum success reaches a certain threshold, for example “estimated number of casualties > 5”, as predicted from the proximity of probable enemy soldiers within view of the drone’s camera. Such “functionality” could obviously be of interest to military commanders.
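Reduced to code, such loitering logic might look like the hedged sketch below; the function names, the confidence cut-off and the casualty threshold are hypothetical illustrations of the decision rule just described, not taken from any real system.

    # Hypothetical "loiter until threshold" logic. All names and numbers are
    # illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class Detection:
        soldier_prob: float  # classifier confidence that a detected person is a soldier

    def estimated_casualties(detections: list[Detection],
                             min_prob: float = 0.9) -> float:
        # Crude expected casualty count: sum the probabilities of all
        # detections above a confidence cut-off.
        return sum(d.soldier_prob for d in detections
                   if d.soldier_prob >= min_prob)

    def should_attack(detections: list[Detection],
                      threshold: float = 5.0) -> bool:
        # The decision to kill, reduced to a single comparison: precisely the
        # probability calculation this article warns about.
        return estimated_casualties(detections) > threshold

    # The drone would re-evaluate this on every camera frame while loitering:
    frame = [Detection(0.97), Detection(0.95), Detection(0.92),
             Detection(0.99), Detection(0.96), Detection(0.91)]
    print(should_attack(frame))  # True: estimated casualties exceed 5

The point of writing it out is to show how little “judgement” is left once the rule is coded: there is no representation of surrender, proportionality or context, only a sum and a comparison.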

Meaningful Human Control 

It is an important dictum in international law that if you as a soldier kill an enemy, you should be fully aware that you are doing so. Human lives should not be taken lightly, not even in war. The importance of human agency applies to the entire military command chain, and it means that all use of lethal weapons should be under what has been labelled “meaningful human control”.4

The development and use of autonomous weapons has the potential to change this in fundamental ways. An AI-powered machine that itself makes the decision to kill civilian A instead of soldier B cannot be sent to the International Criminal Court in The Hague. To kill or not is not a moral question for a machine; it is merely a probability calculation by an algorithm. So who should be held accountable? The commander deploying the flawed killer robot? The department of defence which procured the system or paid for its development? The civilian contractor who developed the flawed algorithm? These are important but unresolved questions.

Further, according to IHL, armed forces should not use more force than necessary to achieve a military goal.5 This makes the choice of weapons important and dependent on operational understanding, and the amount of time between the launch of a weapon and its impact becomes significant. If a commander uses an autonomous weapon without really knowing when it will strike, it becomes difficult to know whether that weapon was the right choice given what was known about the situation. In addition, the commander does not really have control over whether a soldier or a civilian was targeted.

Killer Robots and the Role of SAIs

In December 2014, at its 69th session, the General Assembly of the UN adopted resolution 69/228 on “Promoting and fostering the efficiency, accountability, effectiveness and transparency of public administration by strengthening supreme audit institutions”.6

The UN General Assembly here recognized “the important role of supreme audit institutions in promoting the efficiency, accountability, effectiveness and transparency of public administration”.

One of the defining traits of a state and its government is that it holds the monopoly on the use of military force in defending the territorial borders of the country. National defence is therefore a central part of public administration, which SAIs need to audit on behalf of parliament. The scope for auditors cannot be limited to the more administrative and bureaucratic parts of the defence sector. It must also include the operational and “combat-near” parts, as it is here that the efficiency, accountability and effectiveness of a country’s defence are first revealed. In addition, the defence sector is (naturally) to a large degree shrouded in secrecy, which is itself a reason why SAIs should address defence in order to secure accountability on behalf of parliament. Basically, transparency fosters accountability.

Still, SAIs often shy away from the raison d’être of national defence: the possible use of military force, including which weapons are developed, procured and deployed. War is brutal. In a very basic sense it is about defeating the enemy by killing the opponent’s soldiers. How war is to be fought is regulated in IHL, as codified in the four Geneva Conventions with their Additional Protocols. These conventions, and especially Additional Protocol I (API)7, define “the rules of war”.

Is it not a bit far-fetched that a SAI can audit which weapons are developed and eventually used? I think not. When a country has ratified the relevant conventions of international law, those conventions can be used as audit criteria by SAIs.

For example, article 36 – “New Weapons” of the API states that:

“In the study, development, acquisition or adoption of a new weapon, means or method of warfare, a High Contracting Party is under an obligation to determine whether its employment would, in some or all circumstances, be prohibited by this Protocol or by any other rule of international law applicable to the High Contracting Party.”

Thus, a new weapon system, including LAWS, should undergo a review when developed, procured or adopted, to determine whether the weapon is legal to use in the course of war. SAIs can check whether this requirement has been fulfilled.

Further, article 57 – “Precautions in attack” of the API states that an attacker should: 

“take all feasible precautions in the choice of means and methods of attack with a view to avoiding, and in any event to minimizing, incidental loss of civilian life, injury to civilians and damage to civilian objects” 

and

“an attack shall be cancelled or suspended if it becomes apparent that the objective is not a military one”

If we start using lethal, autonomous machines, where the question of life and death is reduced to a probability calculation: can we be certain that a machine will “take all feasible precautions” to spare civilians? How do we secure accountability for the suspension of an attack on a non-military target, if the decision to attack or not is made by the machine itself? Are we starting to lose meaningful human control over lethal weapons?

These are big questions, far too important to leave to the defence sector itself to sort out. We cannot have accountability without external control. This means that a SAI is one of very few national institutions that can hold the government and the defence sector accountable for the development, procurement, adoption and eventual use of autonomous killer robots. LAWS have the potential to make the world a much more dangerous place. Still, SAIs can play an important role in reducing the dangers and risks associated with LAWS. We need to rise to the occasion.
