Discussions resume in Geneva Tuesday on lethal autonomous weapons systems
Pitted against the glacial pace of the UN’s discussion process, activists hoping for an international ban on killer robots have repeatedly been left fuming and frustrated.
Pitted against each other on the battlefield, lethal autonomous weapons systems — or LAWS — could in short order cause “absolute devastation,” according to one of those activists.
That scenario, says the activist, Prof. Noel Sharkey of the University of Sheffield, isn’t as far-fetched as it might have been even five years ago, when he helped found the Coalition to Stop Killer Robots, a group of 64 NGOs dedicated to the cause.
And it’s that belief that brings him and other academics, scientists and activists back to Geneva this week to yet more discussions involving more than 80 countries.
Their hope is that the UN process moves from discussion to formal direct negotiations by next year to produce a pre-emptive treaty banning killer robots by 2019.
The activists’ chief concern isn’t the military’s delegation of tasks to autonomous machines — which can be useful in search and rescue and bomb disposal and myriad other tasks too dangerous or too onerous for humans.
Instead, the coalition and others pushing for a treaty specifically want to ban LAWS with the “critical functions” of selecting a target and then killing, without meaningful human control.
“I think it’s very urgent that we do this now,” says Sharkey, describing the UN process as “frustrating.” Countries that don’t want a ban just keep slowing it down, he says.
“Our mandate is to get a treaty for emerging weapons … so if they slow us down long enough, they’ll have emerged and we’ll have no chance.”
Thus far, no fully autonomous weapons are known to have been unleashed on the battlefield, although the development of precursors is well underway, with growing degrees of autonomy and intelligence — even the ability to learn.
Recently, such development has stirred controversy. At Google, staff wrote an open letter last week to management demanding they suspend work on a U.S. military project that involved drones and artificial intelligence capability.
And also last week, dozens of scientists and academics wrote a letter to the Korea Advanced Institute of Science and Technology in Seoul threatening a boycott for a project developing artificial intelligence for military use. The university has since promised it would not produce LAWS.
Still, Sharkey goes as far as to describe what is happening now as a new arms race, with militaries and companies competing to acquire increasingly autonomous and smarter weapons.
Since the UN discussions started back in 2014, lightning-fast advances in the fields of robotics and artificial intelligence have made it possible to build LAWS in short order, according to experts.
Beyond science fiction
“You could build an autonomous weapon system with open source technology now — the question is if it’s good enough to meet our standards as advanced nations,” says Ryan Gariepy, CEO of Clearpath Robotics, a Canadian firm that was the first company to endorse a ban on killer robots.
So the near future, says Sharkey, could see battlefields moving too fast for the pace of human decision-making, where “war starts automatically, 10 minutes later, there’s absolute devastation, nobody’s been able to stop it.”
That’s most dangerous of all, he says.
“I’m not talking about science fiction here. I’m not talking about AI [artificial intelligence] suddenly becoming conscious,” he said in an interview.
“I’m talking about stupid humans developing weapons that they can’t control.”
There are ample examples out there of the growing role of autonomous functions in military and policing.
Put aside for a moment the Terminator idea of human-like soldiers and consider the Samsung Techwin SGR-A1.
It patrols the South Korean border and has the ability to autonomously fire if it senses an infiltrator. Right now, it prompts an operator first.
Or what about the Russian semi-autonomous T-14 Armata tank or the British BAE Systems’ Taranis aircraft, both human-controlled but both also capable of semi-autonomous operation. Kalashnikov has also built some prototypes with “neural networks” modelled on the human brain.
The U.S. military tests relentlessly. The Pentagon has experimented with swarms — drones made to learn to think and react collectively.
Though no country admits actually pursuing LAWS, proponents have several arguments in their favour: they could make wars more efficient and accurate, lowering costs as well as civilian casualties. Delegating killing to machines could ultimately also spare soldiers any moral consequences of killing, even in self-defence, such as PTSD.
“It will spare the lives of soldiers, it will spare conscience of soldiers, it will spare soldiers from the threat of suicide,” said Duncan MacIntosh, a Dalhousie University philosophy professor who is a leading adviser on the ethics of autonomous weapons. He made the comments in his opening statement during a debate with Sharkey at St. Mary’s University in Halifax last month.
“You can make sure for instance that a machine will not kill from fear, anger, lust, revenge, political prejudice, confusion, fog of war.”
Automation used in the “right way could make war more precise and more humane,” says Paul Scharre, a former U.S. Army ranger who is currently at the Center for a New American Security.
Is that a real weapon?
That doesn’t help things “for actors that don’t care about civilian casualties, or trying to kill civilians,” says Scharre. But for militaries who “care about humanitarian law and avoiding civilian harm, these technologies can allow them to be more precise and distinguish better between enemy and civilians.”
For example, AI-enabled systems could be used to tell whether someone is carrying a weapon or something that just looks like a weapon.
“We absolutely could do that. In fact we know that we can build machine learning systems today that can identify objects very well and actually beat humans at some benchmark tests for image recognition,” says Scharre, whose book Army of None: Autonomous Weapons and the Future of War, comes out this month.
For Scharre, it’s too soon to call what is happening now an “arms race.”
He agrees the unchecked proliferation of autonomous weapons should be avoided, but that finding global agreement on a definition of meaningful human control of autonomous weapons is preferable to an outright ban. The U.S. position is to work within existing laws.
Activists say several countries interested in autonomous weapons, such as Russia, China, Britain and Israel, are resistant to an outright ban. They accuse some of those countries of obfuscation and foot-dragging — and quibbling over definitions — in UN meetings to prevent progress towards a treaty.
Some officials have insisted you can’t ban something that doesn’t exist, to the exasperation of activists.
“Can we afford incremental movement forward, as technology spirals to God knows where?” asked Canadian Nobel laureate Jody Williams in a statement in 2016.
There are many calls for a killer robot treaty similar to the one Williams helped orchestrate to produce a global ban on anti-personnel landmines back in the 1990s — and for Canada to lead the way.
“Canada has already played past leadership roles, most significantly in the control and banning of landmines. I think there’s a very similar role that Canada can play in this discussion as well,” says Gariepy.
Canada is also being pushed to articulate a clear position on killer robots and back the ban.
Compromise instead of treaty?
France, now apparently supported by Germany, favours a compromise that treats a political declaration and existing international law as preferable to a new treaty. Activists criticize the two countries for failing to stand behind a ban that even Germany’s Angela Merkel once said she supported.
Some 22 countries support an outright ban.
Signatories to the UN’s Convention on Certain Conventional Weapons (CCW) held three conferences on the topic of killer robots before appointing a Group of Governmental Experts to discuss them. The first meeting of that group was held last fall.
This week’s meeting is one of two planned for this year.
“There will be some sort of agreement,” says Gariepy, who has attended the UN meetings in the past.
But whether that agreement is in place before these weapons begin to proliferate, and whether it “actually addresses the need to have meaningful human control … is an open question.”
In a letter ahead of this week’s meeting, the Coalition to Stop Killer Robots stressed “the window for credible preventative action in the CCW is fast closing.”
With files from Nadim Roberts and Megan McCleister