Six French tech experts are among those signing an open letter that paints a picture of weaponised robots turning against humans and going out of control: permitting armed conflict “at timescales faster than humans can comprehend” and falling into the hands of “despots and terrorists” to “use against innocent populations”.
In technical terms, such lethal weapons are one result of the convergence between robotics and artificial intelligence (AI) technology.
“When we talk about modern autonomous weapons, we mean weapons that are able to proactively take a decision, using algorithms and artificial intelligence to select and engage targets without any human intervention or even supervision,” says Tudor Djamo-Mitchell of Paris-region AI firm Spoon, one of the signatories of the letter.
“We’re building autonomous systems that are able to execute a target without the aid of a human,” says Daniel Hulme, another signatory and founder and CEO of British artificial intelligence firm Satalia. “The concern is that if we introduce artificial intelligence into these types of autonomous weapons, then we can’t predict what they decide is a target.”
While drones and weaponised robots are already used in warfare today, the technology risks advancing faster than anyone can predict.
“When you open this box of lethal autonomous weapons, it’s going to evolve at the same rhythm that everything in information technology is evolving, doubling in speed and halving in size every year,” warns Raphaël Cherrier, founder and CEO of Qucit, an AI firm in Bordeaux.
Is a global ban possible?
While the letter calls on a United Nations body, the Convention on Certain Conventional Weapons, to establish a group tasked with preventing an arms race, even those who signed it are unsure how effective a global ban could be.
“If you think about nuclear proliferation, you can control the enrichment of plutonium, because it’s done in big industries, but if you think about autonomous weapons, there’s a robotic part and a software part, and both can be very easily spread around the world,” Cherrier says. “How is a global ban going to help, I don’t know, but at least the problem should be raised.”
“For me, signing this letter is as much around raising awareness as anything else,” says Daniel Hulme. “The exponential growth of artificial intelligence systems is making people wake up and realise that they can have a very positive impact on the world or a very negative impact, and that there’s a critical point where we could lose control over these systems, and that that could happen in the next decade.”
Tudor Djamo-Mitchell, who deals with philosophical and ethical questions in his firm, believes opening debate on questions of responsibility would lay the groundwork for an international legal framework for lethal autonomous weapons.
“Whenever you create this type of weapon, you are not sure who is responsible for the casualties. Is it the designers of the algorithms, or the people who are supervising its use, or anyone at all?” he asks, adding that there is another dimension.
“When public opinion judges something like an assassination, for example, there is an ethical need to be able to relate to the person who has taken the decision. When this person becomes an algorithm or an artificial intelligence, all responsibility, but also all possibility of understanding the action, disappears.”
The UN body was due to meet on Monday, but the meeting has been postponed until November, prompting a warning in the letter that there is not long to act: “once this Pandora’s box” of killer robots “is opened, it will be hard to close”.