Swarming military drones bring new urgency to push for killer robot ban
Killer robots and swarming drones
A non-paywalled article in Foreign Policy carries the terrifying headline: Killer Flying Robots Are Here. What Do We Do Now? (Vivek Wadhwa and Alex Salkever, 5 July 2021).
The subtitle reads:
A new generation of AI-enabled drones could be used to terrible ends by rogue states, criminal groups, and psychopaths.
Another article, Israel’s Drone Swarm Over Gaza Should Worry Everyone (Zak Kallenborn, defenseone.com, 7 July 2021), makes clear that the problem is far broader, encompassing potentially highly problematic uses by any state with the requisite capability. He writes:
In a world first, Israel used a true drone swarm in combat during the conflict in May with Hamas in Gaza.
It was a significant new benchmark in drone technology, and it should be a wakeup call for the United States and its allies to mitigate the risk these weapons create for national defense and global stability.
The Foreign Policy article highlights two extremely destabilizing developments:
- the ability of drones (like the Turkish-made Kargu-2 quadcopter) to “allegedly autonomously track and kill human targets on the basis of facial recognition and artificial intelligence” and
- the use of “swarms” of drones, with a potentially high degree of internal coordination, based on a “swarming algorithm”.
For a detailed look at the mechanics of “swarming”, including its “inherently autonomous nature”, see What Are Drone Swarms and Why Does Every Military Suddenly Want One? (David Hambling, forbes.com, 1 March 2021), written before Israel’s first use.
Note that the rules (algorithms) used to program the drones are based on the “complex flocking behaviour” of birds and on similar collective movements of fish and insects; a minimal illustrative sketch of such rules follows.
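To make the mechanics concrete, here is a minimal sketch of the classic “boids” flocking rules (separation, alignment, and cohesion) that underlie many swarming algorithms. It is written in Python; every parameter, value, and name is an illustrative assumption, not drawn from any actual drone system.

```python
# Minimal illustrative sketch of the classic "boids" flocking rules
# (separation, alignment, cohesion) underlying many swarming algorithms.
# All parameters are hypothetical; no real system is being described.
import numpy as np

NUM_AGENTS = 50
NEIGHBOUR_RADIUS = 5.0    # how far an agent "sees" its neighbours
SEPARATION_RADIUS = 1.0   # minimum comfortable distance between agents
MAX_SPEED = 2.0

rng = np.random.default_rng(0)
positions = rng.uniform(0, 50, size=(NUM_AGENTS, 2))
velocities = rng.uniform(-1, 1, size=(NUM_AGENTS, 2))

def step(positions, velocities, dt=0.1):
    """Advance the swarm one time step using the three boids rules."""
    new_velocities = velocities.copy()
    for i in range(NUM_AGENTS):
        offsets = positions - positions[i]
        dists = np.linalg.norm(offsets, axis=1)
        neighbours = (dists < NEIGHBOUR_RADIUS) & (dists > 0)
        if not neighbours.any():
            continue
        # Rule 1: cohesion - steer toward the local centre of mass.
        cohesion = positions[neighbours].mean(axis=0) - positions[i]
        # Rule 2: alignment - match the average heading of neighbours.
        alignment = velocities[neighbours].mean(axis=0) - velocities[i]
        # Rule 3: separation - steer away from agents that are too close.
        too_close = neighbours & (dists < SEPARATION_RADIUS)
        separation = -offsets[too_close].sum(axis=0) if too_close.any() else 0.0
        new_velocities[i] += 0.01 * cohesion + 0.05 * alignment + 0.1 * separation
        # Cap the speed so no agent outruns the flock.
        speed = np.linalg.norm(new_velocities[i])
        if speed > MAX_SPEED:
            new_velocities[i] *= MAX_SPEED / speed
    return positions + new_velocities * dt, new_velocities

for _ in range(100):  # run a short simulation
    positions, velocities = step(positions, velocities)
```

Note that no rule references any central controller: each agent reacts only to its neighbours, which is why swarm behaviour is described as “inherently autonomous”.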
A new weapon of mass destruction
Both articles compare drone swarms to weapons of mass destruction, with Kallenborn providing this succinct rationale in an earlier article in the Bulletin of the Atomic Scientists:
drone swarms combine two properties unique to traditional weapons of mass destruction: mass harm and a lack of control to ensure the weapons do not harm civilians.
In his latest article he identifies the further, fundamental problem:
As drone swarms scale into super-swarms of 1,000 or even up to a million drones, no human could plausibly have meaningful control.
That’s a problem: autonomous weapons can only make limited judgments on the civilian or military nature of their targets.
Urgent action needed to limit these deadly weapons
In our 4 June blog, we summarized ongoing efforts to rein in these lethal autonomous weapons (LAWS), aka killer robots. See also a report by Human Rights Watch setting out the position of states, as of the end of 2019, on banning them.
Arguing that drone warfare “disproportionately benefits forces of chaos rather than forces of liberty”, Wadhwa and Salkever, in their FP article, further state:
It is not too late to place a global moratorium on killer robots of all kinds, including unmanned aerial vehicles [drones].
Standing in the way, they argue, are the USA and China, who have
thus far refused to back calls for a ban on the development and production of fully autonomous weapons.
It is interesting to note, however, that China has in fact proposed a ban on their use, while the USA has resisted any type of prohibition as “premature”.
Believing that a broad prohibition on autonomous weapons may be unattainable, Kallenborn proposes a number of approaches to reduce the risks posed in particular by drone swarms.
These measures include:
- Discussion of whether new norms and/or treaties are needed to specifically govern and limit the use of drone swarms;
- The global sharing of “counter-swarm technology”; and
- New Security Council measures to help keep swarming drones out of terrorist hands.
Norms of responsible use of artificial intelligence (AI) for defence applications
In a 7 June 2021 post, Branka Marijan, a Senior Researcher at Project Ploughshares, assesses a US-led AI Partnership for Defense with Australia, Canada, Denmark, Estonia, Finland, France, Israel, Japan, Norway, South Korea, Sweden, and the United Kingdom.
She writes:
The intent is to create standards of ethical and responsible uses of AI and, likely, to promote better integration and interoperability among military partners.
This effort is widely understood to be motivated by the common desire to respond effectively to the adoption and use of AI by China and Russia.
However, Marijan emphasizes the participants’ differing views on issues like autonomous weapons: Canada appears committed to a global ban, while France seems to blur the line with its interpretation of partially autonomous weapons.
She argues:
there needs to be a conversation that goes well beyond the current 13 Partners.
A global conversation on the use of AI in defence applications is critical and urgently needed.
What about NATO?
In our 4 June blog we called upon Canada to take a strong position at the then-upcoming NATO Summit in support of a ban on the development and use of fully autonomous weapons systems.
The NATO Communiqué at paragraph 36 makes only one specific reference to autonomous weapons technologies, stating that:
Through NATO-supported multinational cooperation projects, Allies are committed to working together to develop or acquire new capabilities in key areas such as …autonomous systems….
Paragraph 37 contains one positive, albeit passive, reference to the regulation of emerging and disruptive technologies (EDTs), a category that includes lethal autonomous systems:
This strategy outlines a clear approach for identifying, developing, and adopting EDTs at the speed of relevance, guided by principles of responsible use, in accordance with international law, and taking into account discussions in relevant international fora. [emphasis added]
In the view of Ian Davis of NATO Watch:
Exactly where the alliance falls on the spectrum between permitting AI-powered military technology in some applications and regulating or banning it in others is expected to be part of the Strategic Concept debate.
The 2021 NATO Summit formally launched work on the new Strategic Concept, which will replace the one developed in 2010 and is to be approved at the next summit, in 2022. (See Communiqué paragraph 6.)
Reminding us of the UN Secretary-General’s support for a total prohibition on “morally repugnant” weapons that could, by themselves, target and attack human beings, Davis outlines the critical role that NATO can and should play to that end:
With NATO leadership such weapons could be banned by a treaty similar to the initiatives that successfully prohibited antipersonnel landmines in 1997 and cluster munitions in 2008.
Preserving meaningful human control over the use of force is an ethical imperative and a legal necessity.
The development of a new NATO Strategic Concept is an opportunity for member states to lead by example in the global effort to ban killer robots and to develop new norms guiding the use of artificial intelligence for defence purposes.
In the view of Ceasefire.ca:
This would be a tangible demonstration by NATO members of their oft-proclaimed declaratory commitment to strengthening the rules-based international order.
Whither Canada?
The 2020-2021 Group of Governmental Experts of the Convention on Certain Conventional Weapons (CCW) could not agree on a substantive report. This failure underscores the need for Canada to consider additional opportunities, outside that forum, for building consensus on banning lethal autonomous weapons and on developing responsible norms more generally for the use of AI in defence applications.
We call on the Government of Canada, in the context of NATO’s development of a new Strategic Concept, and in other forums, to champion a ban on lethal autonomous weapons and work to promote responsible norms for the use of AI in defence applications.
Article 36 and legal reviews of new weapons
Target profiles are the means by which autonomous weapons identify their targets (a simple illustrative sketch of the concept appears at the end of this section). For those interested in delving further into how this works, click here for both a podcast and a full transcript.
The podcast is the work of Article 36, a specialist non-profit organization focused on reducing harm from weapons, with emphasis on autonomous weapons and explosive weapons.
The organization’s name refers to Article 36 of Additional Protocol I (1977) to the 1949 Geneva Conventions, which requires states
to conduct legal reviews of all new weapons, means and methods of warfare in order to determine whether their use is prohibited by international law.
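As a way of visualizing what a target profile is, here is a purely hypothetical sketch in Python: a profile as a fixed set of sensor-readable criteria against which incoming readings are matched. Every field, threshold, and name is an assumption made for illustration; no real weapon system is being described.

```python
# Purely hypothetical sketch of the "target profile" concept: a set of
# sensor-readable criteria that an autonomous weapon compares against
# its sensor inputs. All fields and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class SensorReading:
    infrared_signature: float   # normalized 0..1
    radar_cross_section: float  # square metres
    speed_kmh: float

@dataclass
class TargetProfile:
    min_infrared: float
    min_radar_cross_section: float
    max_speed_kmh: float

    def matches(self, reading: SensorReading) -> bool:
        """True if the reading satisfies every criterion in the profile.

        Note what a profile cannot encode: intent, surrender, or
        civilian status - the core concern raised by Article 36.
        """
        return (reading.infrared_signature >= self.min_infrared
                and reading.radar_cross_section >= self.min_radar_cross_section
                and reading.speed_kmh <= self.max_speed_kmh)

# Example: a hypothetical "armoured vehicle" profile.
profile = TargetProfile(min_infrared=0.7, min_radar_cross_section=10.0,
                        max_speed_kmh=80.0)
reading = SensorReading(infrared_signature=0.8, radar_cross_section=12.5,
                        speed_kmh=40.0)
print(profile.matches(reading))  # True - yet nothing here assesses legality
```

The point of the sketch is structural: a profile of this kind can only encode what sensors can measure, not legal or moral categories such as civilian status, which is precisely why meaningful human control remains essential.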
Update on rethinking security
For readers interested in a thought-provoking follow-up to last week’s blog on rethinking security, see: ‘Never going to happen’? Hope and Global Security by David Gee (rethinkingsecurity.org.uk, 8 July 2021).
Photo credit: Wikimedia/Shumagins (swarm behaviour)