Killer robots, Ukraine’s dilemma, and silence over Israel’s democratic descent

UPDATED US POLICY ON KILLER ROBOTS RAISES NEW QUESTIONS

Last week’s blog post considered an article by Henry Kissinger on ending the Ukraine war. Near the end of that article, he wrote:

Autonomous weapons already exist, capable of defining, assessing and targeting their own perceived threats and thus in a position to start their own war.

Once the line into this realm is crossed and hi-tech becomes standard weaponry – and computers become the principal executors of strategy – the world will find itself in a condition for which as yet it has no established concept.

How can leaders exercise control when computers prescribe strategic instructions on a scale and in a manner that inherently limits and threatens human input? How can civilisation be preserved amid such a maelstrom of conflicting information, perceptions and destructive capabilities?

US Department of Defense updates its Autonomous Weapons Policy Directive

On 25 January 2023 the US Department of Defense released its updated Directive 3000.09, Autonomy in Weapon Systems, clarifying a 2012 policy that has been described as

so unclear that even people inside the Pentagon had a hard time understanding it.

Writing in June 2022 on why the DOD was then updating its “decade-old Autonomous Weapons Policy”, Gregory Allen, Executive Director of the AI Governance Project at the Center for Strategic and International Studies, reviews the definitional issues.

He notes first that

Despite eight years of negotiations at the United Nations, there is still no internationally agreed upon definition of autonomous weapons or lethal autonomous weapons.

However, the US does formally define autonomous and semi-autonomous weapons systems from the perspective of US policy.

These definitions are:

  • Autonomous weapon system: A weapon system that, once activated, can select and engage targets without further intervention by a human operator. This includes human-supervised autonomous weapon systems that are designed to allow human operators to override operation of the weapon system, but can select and engage targets without further human input after activation.
  • Semi-autonomous weapon system: A weapon system that, once activated, is intended to only engage individual targets or specific target groups that have been selected by a human operator.

Key requirements of the updated US policy

In its announcement of the updated DoD Directive 3000.09, the US Department of Defense emphasized, inter alia, the following requirements:

  • Autonomous and semi-autonomous weapon systems will be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.
  • Persons who authorize the use of, direct the use of, or operate autonomous and semi-autonomous weapon systems will do so with appropriate care and in accordance with the law of war, applicable treaties, weapon system safety rules, and applicable rules of engagement.
  • The design, development, deployment, and use of systems incorporating AI capabilities is consistent with the DoD AI Ethical Principles and the DoD Responsible AI (RAI) Strategy and Implementation Pathway.

Writing in Politico.com, national security reporters Matt Berg and Alexander Ward elaborate further on the need for the revisions:

Enacted in 2012, Directive 3000.09 was intended to set the record straight on how the department fields and develops autonomous and semi-autonomous weapons systems.

It had the opposite effect.

In fact, the confusion was so widespread that it discouraged even development that the policy allowed. In the view of Gregory Allen,

officials were refraining from developing some systems that were not only allowed by the policy, but also expressly exempted from the senior review requirement….

Now the policy has been updated to clarify what types of autonomous weapons can be developed without a review process by senior officials. Those exemptions include:

autonomous weapons that involve a human operator; human-supervised autonomous weapons used for local defense; and autonomous weapons used to apply non-lethal force against targets.

The update adds one new exemption for human-supervised autonomous weapons that defend drones and does not change the current exemption for cyber weapons systems.

For more on these exemptions, see What Is The Pentagon’s Updated Policy On Killer Robots? (David Hambling, forbes.com, 31 January 2023).

The role of AI and the AI Ethical Principles Policy

According to Breaking Defense, quoting Michael Horowitz, the director of the Pentagon’s Emerging Capabilities Policy Office,

one of the biggest things the revised directive accounts for is the “dramatic, expanded vision” for the role of AI in future military operations.

Specifically, Horowitz elaborates on the applicability of DoD’s AI Ethical Principles policy:

And for autonomous weapons systems that incorporate artificial intelligence… the directive now specifies that they, like any system that uses artificial intelligence, whether a weapon system or not, would need to follow those guidelines.

Horowitz adds that the updated policy

continues to require that autonomous and semi-autonomous weapon systems be designed to allow commanders and operators to exercise appropriate human judgment over the use of force.

No killer robot ban in US policy

Interestingly, other defence commentators see the main role of the “tweaks” in the updated policy as one of dispelling confusion over whether the Pentagon has a “hard rule” against using lethal autonomous weapons.

Writing in DefenseOne, technology editor Patrick Tucker states:

The biggest change in the Defense Department’s new version of its 2012 doctrine on lethal autonomous weapons is a clearer statement that it is possible to build and deploy them safely and ethically but not without a lot of oversight.

Tucker continues:

That’s meant to clear up the popular perception that there’s some kind of a ban on such weapons.

The article concludes:

This 3000.09 update shows that the DoD believes that there are ways to responsibly and ethically use autonomous systems, including AI-enabled autonomous weapons systems that use lethal force.

The DoD believes that there should be a high bar both procedurally and technically for such systems, but not a ban.

One of the DoD’s [stated] goals in openly publishing this document is an effort to be a transparent world leader on this topic.

Meaningful human control versus “appropriate levels of human judgment”

Recall the DoD announcement on the updated directive and, in particular, the following requirement, noted earlier in this section:

  • Autonomous and semi-autonomous weapon systems will be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.

Against the backdrop of many years of UN discussions on how to control such arms, many activists and groups, like the Campaign to Stop Killer Robots, want an outright ban on autonomous weapons, insisting that any such weapon remain under meaningful human control at all times.

Gregory Allen comments:

The DoD has consistently opposed a policy standard of ‘meaningful human control’ when it comes to both autonomous systems and AI systems….

The preferred DoD term of art is ‘appropriate levels of human judgement,’ which reflects the fact that in some cases – autonomous surveillance aircraft and some kinds of autonomous cyber weapons, for example – the appropriate level of human control may be little to none. [emphasis added]

Principle 3 of the DoD AI Ethical Principles requires that DoD’s engineering capacity be

sufficiently advanced such that technical experts possess an appropriate understanding of the technology, development processes, and operational methods of its AI systems.

Ceasefire.ca comments:

Kissinger’s fundamental concern in this area is the sheer impossibility of meaningful human control as the role of AI in the operation of these weapons systems exponentially increases. The updated DoD guidelines do not respond to this concern; they reject it in favour of an absurdly subjective standard of “appropriateness”.

Whither Canada?

We reiterate our call for the Government of Canada to work, at the UN and in the NATO context, to fulfill its longstanding commitment to exercise a leadership role in the development of a ban on lethal autonomous weapons (aka killer robots) and to promote responsible norms for the use of Artificial Intelligence (AI) in defence applications.

UKRAINE UPDATE

Amid growing calls for yet more sophisticated weapons, including fighter jets, to be supplied by the West to Ukraine, more mainstream voices are beginning to speak out against a protracted conflict.

We reference two such perspectives in our post today: a new study by the Rand Corporation succinctly titled Avoiding a Long War (Samuel Charap & Miranda Priebe, January 2023) and an article by Owen Matthews in the UK Spectator entitled One year on: how will the Ukraine war end? (4 February 2023 magazine issue).

The Rand study is explicitly from the perspective of the United States, concluding that

The costs and risks of a long war in Ukraine are significant and outweigh the possible benefits of such a trajectory for the United States.

Washington Post highlights Rand study

The Washington Post featured the Rand study in an analysis by columnist Ishaan Tharoor entitled The argument for why the West should change course on Ukraine (1 February 2023).

Elaborating on the “structural” factors of the war — what the Rand study identifies as “key impediments” to the peace process — Tharoor writes:

Neither Russia nor Ukraine has a chance to secure “absolute victory” in the way they see it, yet both countries feel optimistic about their ability to win out in the longer run and are pessimistic about what may follow a cease-fire or uneasy peace.

The Rand study identifies the kinds of instruments and steps that the United States can take to make “an eventual negotiated end to the conflict more likely”, including:

  • clarifying plans for future support to Ukraine,
  • making commitments to Ukraine’s security,
  • issuing assurances regarding the country’s neutrality, and
  • setting conditions for sanctions relief for Russia.

The Rand report also points to the very real risk of a “hot war with a country that has the world’s largest nuclear arsenal”.

Its authors argue against entertaining such obvious risks when settling the conflict now would still mark a significant Russian defeat, writing:

It will take years, perhaps even decades, for the Russian military and economy to recover from the damage already incurred.

Recognizing that an “overnight shift” in US policy is both politically impossible and unwise, the Rand study concludes:

But developing these instruments now and socializing them with Ukraine and with U.S. allies might help catalyze the eventual start of a process that could bring this war to a negotiated end in a time frame that would serve U.S. interests.

The alternative is a long war that poses major challenges for the United States, Ukraine, and the rest of the world.

No good solution to this war, just the least bad option

For another detailed, and decidedly sobering, examination of the war’s trajectory — this time with a broader lens than US strategic interests — see One year on: how will the Ukraine war end? (Owen Matthews, spectator.co.uk, 4 February 2023).

In addition to emphasizing the overriding need to keep the Ukraine conflict from turning into a world war, Matthews writes of a “clear gulf” between NATO’s vision of victory and Ukraine’s. This point is worth quoting in detail:

the fight to take back Crimea and the former rebel republics of the Donbas will entail fighting a very different kind of war. Instead of liberation, it will be a war of conquest.

Russia’s 2014 annexation of the Crimean Peninsula was clearly illegal and the subsequent referendum where 97 per cent of voters chose to join the Russian Federation was far from free or fair. But it’s equally clear that a significant majority of those living in Crimea now are Russians who do not wish to be Ukrainian.

While the situation in the Donbas is less clear-cut, Matthews argues that all available evidence suggests a majority of the population still living there will not welcome Ukrainian troops as liberators.

Matthews asks:

That raises a very uncomfortable question — does the West want to be in the business of coercing people to rejoin a nation they don’t wish to be part of?

He continues:

The tragedy of this war is that there is no equitable or safe solution.

To formally cede control of parts of Donbas and Crimea to Putin would reward aggression.

But the loss of Donbas and Crimea creates another grim possibility:

a cornered, collapsing, nuclear-armed Russia would risk precisely the Armageddon scenario which the US has been at such pains to avoid.

Split within NATO over further military assistance to Ukraine

Matthews elaborates on Alliance differences over further military assistance to Ukraine, writing:

Hungary, Austria [which is not a member of NATO] and Croatia remain defiantly opposed to sending more military hardware; the Italian right is deeply split; and there have been sporadic anti-war demonstrations in Germany and the Czech Republic. Not to mention a small but vocal caucus in the US Republican party….

In addition to this exposé of Alliance differences, Matthews also discusses the Kremlin’s still considerable military options, writing:

And in a military contest between quality and quantity — Kyiv’s superior morale, discipline, training and equipment vs Moscow’s Soviet-style steamroller might — unfortunately there comes a point where quantity wins.

In his view:

Russia cannot hope to win this war, but it still has a fighting chance of not losing it.

Zelensky’s precarious position

Perhaps most surprising is Matthews’ assessment of the precarious political position in which Ukrainian President Zelensky finds himself, despite his current popularity:

He has promised his people total victory, and polls say that close to 90 per cent of voters believe him. Failing to deliver would be politically fatal. So would signing any peace deal that involves a loss of Ukrainian land.

This conundrum, argues Matthews, will almost inevitably put Zelensky and his Western backers “on a collision course”:

If Putin advances, then announces a ceasefire and calls for talks, the Nato [sic] alliance will immediately split between those members who want justice and those who want peace.

Matthews concludes:

Tragically, there is almost no realistic outcome for this war that will not end in the Ukrainians crying ‘Betrayal!’

But if the alternative would be fighting World War 3, that may end up being the least bad option.

Ceasefire.ca comments:

Matthews blames Putin’s arrogance and Zelensky’s maximalism for the failure of the Turkish-facilitated negotiations in the early phase of the conflict, but, as we have previously contended, things might have been different if the US had engaged, particularly on the issues of security guarantees for Ukraine and sanctions relief for Russia.

UPDATE ON ISRAEL–PALESTINE

Entitled The U.S. shouldn’t tolerate Israel’s slide away from democracy (ottawacitizen.com, 1 February 2023), veteran columnist Andrew Cohen’s latest article in the Ottawa Citizen is an eloquent indictment of the silence of both the USA and Canada in the face of rising Israeli political extremism.

He begins his commentary thus:

Israel is in crisis. Violence between Israelis and Palestinians is escalating. The new government of Benjamin Netanyahu — the most conservative and religious in the country’s history — threatens to weaken the judiciary, annex the West Bank, curtail same-sex rights and expel Palestinians.

He quotes a former chief of staff of the Israeli army, Moshe Ya’alon:

Who would have believed that less than 80 years after the Holocaust that befell our people, a criminal, messianic, fascist and corrupt government would be established in Israel, whose goal is to rescue an accused criminal.

To repeat, this is a lament by a former chief of staff of the Israeli army.

US has no “red lines” when it comes to Israel

And what of the US response? Cohen writes:

He [President Biden] is unable to say a discouraging word in public about an Israel that columnist Thomas Friedman fears is becoming “an illiberal bastion of zealotry.”

Instead of hardening US policy on Israel, Secretary of State Blinken has declared US military aid to Israel — $3.8 billion per year — to be “sacrosanct”.

Cohen cites Peter Beinart, the astute American commentator, for the conclusion that

when it comes to Israel, the Biden administration has no red lines, none at all.

Silence also prevails in Canada

Cohen writes of the “silence” over Israel from the Canadian Jewish establishment, critiquing the Centre for Israel and Jewish Affairs (CIJA) in particular. He writes:

Anger is left to progressive Jewish organizations, like JSpace Canada. Or, the highly effective, tightly focused New Israel Fund of Canada, which champions an open, liberal Israel. Both understand what’s happening here.

In his concluding comment, he wonders

just how much more they [Biden and Blinken] can tolerate of this [Israeli] descent of democracy.

Whither Canada?

Ceasefire.ca asks:

Occurring as it is, in plain sight, just how much tolerance does the Government of Canada have for Israel’s horrific “descent of democracy”?

The regional threat from Israeli extremists: Arab Digest podcast

For an enlightening discussion on the regional threat posed by Israeli government extremism, we are pleased to provide a link to an Arab Digest podcast, for which a subscription is not required.

Entitled Israel’s extremists and the threat they pose to Middle East security (soundcloud.com, 2 February 2023), it features a conversation with The Baker Institute’s Kristian Coates Ulrichsen and Arab Digest editor William Law about

the extremist regime [that] Benjamin Netanyahu has assembled and how far-right ministers like Itamar Ben-Gvir, through their provocations against Palestinians, are creating a dangerous new environment, a tinderbox which needs only a single spark to set off a major regional conflagration.

To listen to the podcast click HERE.

Photo credit: Wikimedia (2013 launch of campaign)

Ceasefire.ca is a public outreach project of the Rideau Institute linking Canadians working together for peace. We depend on your donations as we accept no funding from government or industry to protect our independence. Thank you for your support….  

Tags: Alexander Ward and autonomous weapons, Andrew Cohen and Israel's "descent of democracy", appropriate level of human judgement, Artificial Intelligence (AI), killer robots, meaningful human control, Ukraine, US Directive 3000.09