October 25, 2022

Stop the Killer Robots

C.F. Da Silva Costa

Meanwhile, killer robots are entering battlefields all around the globe…


During the UN Human Rights Council session on Thursday, September 29, the “Stop Killer Robots”1 campaign hosted an event to continue raising awareness of the problem of Lethal Autonomous Weapon Systems (LAWs), the so-called “killer robots”.

We are talking about weapons controlled by Artificial Intelligence (AI) with no human in the loop: weapons that can decide autonomously to attack and kill. Errors are already well documented in civilian AI applications. Given the risk to life, and the civilian lives that could be lost to such errors, it seems obvious that these weapons should be strictly regulated or simply banned. Alas, the reality is disconcerting!

For nine years, the discussions at the Convention on Certain Conventional Weapons (CCW) – Group of Governmental Experts on Lethal Autonomous Weapons Systems (GGE on LAWs) – have been futile. The latest CCW session, held in Geneva in July, reflects this: its report is hollow, with no meaningful conclusions or commitments2. Frictions are constant, and this diplomatic machine has seized up.

Lethal Autonomous Weapons

“Autonomous” refers to these machines’ ability to operate for a given time interval without an operator; during that time, they can search for a target. There are three levels2 of autonomy. At the first, an operator is still required to order the engagement of a target. At the second, the machine can engage a target without an operator. The third level is “defining the target,” i.e., the capability of choosing what to attack: a building, a tank, or a soldier. Thankfully, no current system can define a target; targets are based on programmed constraints and descriptions defined by operators and actors higher up the chain of command3.
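To make the three levels concrete, here is a minimal illustrative sketch in Python (all names are invented; no real weapon system exposes such an interface) of where the human sits in the decision loop at each level:

```python
from enum import Enum

class AutonomyLevel(Enum):
    OPERATOR_ORDERS_ENGAGEMENT = 1  # level 1: a human must order the engagement
    AUTONOMOUS_ENGAGEMENT = 2       # level 2: engages a pre-defined target on its own
    AUTONOMOUS_TARGETING = 3        # level 3: chooses *what* to attack (no fielded system)

def engagement_permitted(level: AutonomyLevel, operator_confirmed: bool) -> bool:
    """Show where the human sits in the decision loop at each level."""
    if level is AutonomyLevel.OPERATOR_ORDERS_ENGAGEMENT:
        return operator_confirmed  # human in the loop: no confirmation, no engagement
    # Levels 2 and 3: the machine decides alone; level 3 also picks the target itself.
    return True

# At level 1 the machine waits for a human; at levels 2 and 3 it does not.
print(engagement_permitted(AutonomyLevel.OPERATOR_ORDERS_ENGAGEMENT, False))  # False
print(engagement_permitted(AutonomyLevel.AUTONOMOUS_ENGAGEMENT, False))       # True
```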

Countries are acquiring these technological advantages

Since the 1980s, the US Navy, for instance, has used the Phalanx Close-In Weapon System4, which can autonomously identify and attack incoming missiles within a range of 6 km. With emerging technologies, including AI, weapons have improved in autonomy and response time. Nowadays, Israel’s Iron Dome5 can intercept rockets in all weather conditions at ranges of up to 250 km. No human could detect the rockets and trigger the system in as short a time span as it does.

Another advantage is friend-or-foe identification. Russia’s POM-3 anti-personnel mines6, used in Ukraine, can wreak havoc within a radius of 16 meters. At present they are activated by seismic sensors, but if equipped with AI, they could identify who is approaching and detonate only for foes; Russian soldiers could then walk among them without any danger.
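As a purely hypothetical sketch (invented names; nothing here reflects how any real mine works), such gating would amount to putting a classifier in front of the trigger:

```python
def should_detonate(signature: str, foe_signatures: set[str]) -> bool:
    """Toy sketch of an AI-gated mine trigger: detonate only for foes.
    'signature' stands in for whatever the onboard classifier reports."""
    return signature in foe_signatures

foes = {"foe_infantry"}
print(should_detonate("foe_infantry", foes))       # True: a foe approaches
print(should_detonate("friendly_infantry", foes))  # False: friendly troops pass safely
```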

Unfortunately, many autonomous weapons are already on the battlefield: US MIM-104 Patriot missiles on European bases, Russian Uran-9 robot tanks with auto-cannons in Syria, and autonomous SGR-A1 sentry guns deployed in the Korean Demilitarized Zone. All are capable of engaging targets autonomously. Technologies continue to develop rapidly, and over the past nine years new autonomous weapons with new capabilities have appeared. In Libya in 2020, a Turkish Kargu-2 drone hunted down and attacked a human target7. In mid-May 2021, the Israel Defense Forces used a swarm of small drones to locate, identify, and attack Hamas militants8. A swarm is a single networked entity that flies itself using AI.

These weapons are already in use, so what are the problems?

The first main problem is that these weapons could violate International Humanitarian Law (IHL)9, in particular the principles of “distinction”10 and “proportionality”11.

Today’s Artificial Intelligence is based on Machine Learning, which allows systems to learn to solve complex problems. But such systems are limited to one specific task, like AlphaGo12, which defeated the world champion Go player: its abilities are limited to the game alone, and we cannot ask AlphaGo to switch on a light. This is what we call “Narrow AI”.

Thus, war machines can identify targets with specific characteristics, such as military uniform colors or a person holding a gun. But what happens with a similar-looking scenario in a complex, real-life environment, say, a civilian wearing green sports clothes and holding an umbrella? Another limitation: under IHL, a soldier at war can surrender. But AI does not understand context; it cannot differentiate behaviors and may still attack surrendering soldiers! In both cases, the principle of “distinction” is violated.
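As a crude illustration (all keys and rules are invented; this is not any real targeting pipeline), a narrow system that matches on surface features cannot tell the two cases apart:

```python
def looks_like_combatant(person: dict) -> bool:
    """Toy surface-feature rule of the kind a narrow system reduces
    'combatant' to; all keys and values are invented for illustration."""
    green_clothing = person.get("clothing_color") == "green"
    long_object_in_hand = person.get("holding") in {"rifle", "umbrella"}  # similar silhouettes
    return green_clothing and long_object_in_hand

soldier = {"clothing_color": "green", "holding": "rifle"}
civilian = {"clothing_color": "green", "holding": "umbrella"}
print(looks_like_combatant(soldier))   # True
print(looks_like_combatant(civilian))  # True: the "distinction" failure
```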

The idea of “proportionality” is that the consequences of any attack, the civilian suffering, must be “justified” in proportion to the military advantage gained. For example, destroying a school to prevent soldiers from hiding inside, when they could just as well hide in other buildings, is too much civilian suffering for no substantial advantage! AI cannot perform this kind of abstract reasoning.

We have seen that many errors can occur, and machines cannot be held accountable. States can evade responsibility by presenting any failure as a technical error. We could still try to apply “strict liability”: if a state operates the machine, it is accountable. But this will be difficult as long as no clear legislation is in place.

Finally, we need to look at the broader consequences, the moral risks. A new arms race between northern-hemisphere countries could start: an aerial attack piloted by humans would have little chance of success against autonomous defense systems, so the best answer is autonomous drones, and so on, LAWs against LAWs. Meanwhile, the southern hemisphere, lacking these technologies, would bear the brunt of these new weapons. Amid all of this, there is a severe risk of dehumanization: humans will be reduced to mere data, to be coldly erased by LAWs.


“Stop Killer Robots” campaign logo

Already nine years of discussion

Since 2013, the CCW has tried to regulate LAWs through its Group of Governmental Experts. In 2019, a set of 11 guiding principles was adopted13: in summary, IHL continues to apply fully to LAWs, and accountability must be ensured. But these two points are precisely what LAWs cannot guarantee. Furthermore, the principles are non-binding.

For the most ambitious states, it is essential to agree on a treaty that would prohibit weapons from operating and engaging targets autonomously. This is the position of many Latin American and European states, and of some states in Africa, the Middle East, and Asia. Remarkably, China supports a ban but would allow research to continue. Other states would settle for slightly different legislation requiring an operator to always be present, even if the machine has the autonomous capability to engage14.

During the sessions, institutions (the EU, the ICRC, UNIDIR), NGOs, and civil society organizations, admitted as observers, supported a ban. For example, the ICRC recommends that “Unpredictable autonomous weapon systems should be expressly ruled out”, that the “…use of autonomous weapon systems to target human beings should be ruled out”, and that “the design and use of autonomous weapon systems that would not be prohibited should be regulated”.

The position is less clear-cut for states that already possess LAWs. The US, for instance, consents to prohibiting specific weapon systems and to certain regulations, but refuses a binding legal framework. Russia, finally, is slowing down the negotiations and watering down their content15.

Russia and the game of consensus

A majority of states are now convinced of the need to act decisively, even asking for more debate days in 2023. But the main problem is the rule of consensus, which has prevented any breakthrough in the discussions.

There were many small disagreements; for instance, delegations wasted time debating whether the CCW is “an” or “the only” appropriate forum for dealing with this issue. The discussions even turned theatrical when Russia repeatedly attacked the presence of civil society organizations in order to limit their interventions and their participation in informal meetings. This is a tool for slowing down the discussion by focusing the debate on organizational points. Meanwhile, some states, like Israel and India, remain discreet and do not oppose it: they use the situation to their advantage, and Russia does all the work for them2; 15.

Therefore, at the refusal of a few states, all details about elements and possible measures for reaching an agreement were removed, along with all conclusions about what kinds of control are necessary and the possible processes for achieving that control. The remaining conclusions section merely outlines the types of proposals discussed, acknowledges ethical perspectives, and reiterates respect for international humanitarian law. It then confirms that states are responsible for wrongful acts under international law; in other words, there are no new laws2; 15.

In conclusion, the CCW no longer seems to be the right platform

Not only are the conclusions disappointing, but so was the way the discussions were conducted, and the mandate for 2023 remains uncertain.

In technological terms, nine years is too long. Autonomy technologies are improving rapidly, and we cannot wait for the CCW. The slow process works to the advantage of the countries using these technologies: as long as no agreement is reached, they will continue to deploy killer robots on battlefields all around the globe.

The Stop Killer Robots campaign “urges states to demonstrate leadership through engaging in a new forum that is capable of making progress and provide a genuine way forward”. In other words, we need a forum capable of imposing the will of the majority of states.

The entire European left and civil society must also raise their voices, publicizing the dangers of LAWs. We have to put pressure on governments all around the globe.

C.F. Da Silva Costa, M.S. in Strategic Protection of the Country’s System & Ph.D. in Physics. Represented Accord University, as a civil society observer, at the July 25-29 session of the Group of Governmental Experts on emerging technologies in the area of Lethal Autonomous Weapons Systems, UN, Geneva.

Sources
1) https://www.stopkillerrobots.org
2) https://documents.unoda.org/wp-content/uploads/2022/08/CCW-GGE.1-2022-CRP.1-Rev.1-As-Adopted-on-20220729.pdf
3) C. Bartneck, C. Lütge, A. Wagner, and S. Welsh; An Introduction to Ethics in Robotics and AI; SpringerBriefs in Ethics, 2021, p. 93; https://doi.org/10.1007/978-3-030-51110-4
4) Phalanx CIWS; Wikipedia contributors; https://en.wikipedia.org/w/index.php?title=Phalanx_CIWS&oldid=1109484240
5) Iron Dome; Wikipedia contributors; https://en.wikipedia.org/wiki/Iron_Dome
6) Human Rights Watch; Ukraine: Russian Landmine Use Endangers Civilians; https://www.hrw.org/news/2022/06/15/ukraine-russian-landmine-use-endangers-civilians
7) Report from the UN Security Council’s Panel of Experts on Libya, covering the 2020 incident: https://digitallibrary.un.org/record/3905159?ln=fr ; Kargu drone: https://www.stm.com.tr/en/kargu-autonomous-tactical-multi-rotor-attack-uav
8) Drone swarm: https://www.newscientist.com/article/2282656-israel-used-worlds-first-ai-guided-combat-drone-swarm-in-gaza-attacks/
9) International Humanitarian Law: https://www.icrc.org/en/doc/assets/files/other/what_is_ihl.pdf
10) IHL principle of distinction: https://casebook.icrc.org/law/principle-distinction
11) IHL principle of proportionality: https://casebook.icrc.org/glossary/proportionality
12) Alpha Go: https://www.deepmind.com/research/highlighted-research/alphago
13) Automated Decision Research; Artificial intelligence and automated decisions: shared challenges in the civil and military spheres; September 2022 report; www.automatedresearch.org
14) CCW LAWs 11 principles: https://www.un.org/disarmament/the-convention-on-certain-conventional-weapons/background-on-laws-in-the-ccw/
15) Civil society perspectives on the Group of Governmental Experts of the Convention on Certain Conventional Weapons on Lethal Autonomous Weapon Systems, 25–29 July 2022; CCW Report, Vol. 10, Nos. 9 and 10, 29 July 2022; https://reachingcriticalwill.org/disarmament-fora/ccw/2022/laws/ccwreport