A new open letter signed by 116 founders of AI and robotics companies seems to have made quite a racket in the tech sections of several news sites (here and here). It follows on from an open letter published a couple of years ago by AI and robotics researchers.
Unfortunately, Elon Musk (founder of SpaceX and Tesla) has been widely credited as the main person behind it. He isn’t. The person who put in the hard work of gathering all the signatures was Toby Walsh, professor of Artificial Intelligence at the University of New South Wales.
Toby Walsh, 2008. Photo by Avobronte. Used under CC3.0 License.
Although Musk is probably the biggest name in tech pop culture still banging the drum about the dangers of AI, he has been criticised for his understanding of it (here and here [about 50:00]), and has criticised others in turn. However, this letter is focussed specifically on autonomous weapons, rather than on unlikely threats of self-aware AI taking over.
This letter was supposed to coincide with the first of two meetings this year of the ‘Group of Governmental Experts on Lethal Autonomous Weapon Systems’ at the UN in Geneva. But this was cancelled (not delayed, as has been reported) because a number of countries hadn’t paid their UN bills. As a result, the accounts of the Convention on Certain Conventional Weapons (CCW), under which campaigners hope to ban autonomous weapons through a new protocol, are down by about $50,000.
When I was at the Informal Meeting of Experts on Lethal Autonomous Weapon Systems at the UN last year, the previous open-letter was referred to a number of times. Some states talked about it, but it was mostly in the speeches by campaigners against autonomous weapons. Most of these groups are part of the ‘Campaign to Stop Killer Robots’ coalition of NGOs.
Panorama of the UN CCW Conference room, 2016. (Photo by Author.)
The fact that most references to the previous open letter came in speeches and writing against autonomous weapons is, of course, no surprise. But the slim minority of states who have actually stated they are against these systems does not bode well for a ban.
Most states have not made any rumblings towards a ban. Some are dead against it. Still, none have explicitly said they are totally in favour of such weaponry. This leads us to three real issues when it comes to the likelihood of a ban. The first is that all the campaigners in the world can support a ban, but if state parties to the CCW do not want one, it will not happen. So far only 19 state parties have called for a ban, out of 121 state parties to the Convention, with a further five having signed but not yet ratified. So the momentum of state parties is clearly not behind a ban.
Campaign to Stop Killer Robots meeting 2013. Used under CC2.0 license.
Secondly, it seems to me that although seemingly everyone involved in the debate accepts that there are significant dangers, risks, and ethical issues associated with autonomous weapons, they could also be extremely useful in future conflicts. It is this potential utility that seems to me to be the reason states do not wish to ban them. Should a full-scale conflict erupt between major (or even middle) powers, the ability to decimate enemy forces accurately, quickly, and with no physical risk to your own forces is hugely advantageous.
It might seem unlikely that such a conflict could ever happen, but major powers retain nuclear weapons to fight precisely these unlikely (maybe almost impossible) conflicts with each other. So the adoption of weapons with autonomy for unlikely future conflicts is not hard to imagine. Indeed, autonomous weapons arguably offer the ability to individually destroy multiple targets that might otherwise be prosecuted by a single nuclear bomb, thereby reducing collateral damage.
Master Gunnery Sgt. Joseph Perara guides a robot during the Department of Defense Lab Day at the Pentagon, May 14, 2015. Perara is assigned to the Marine Warfighting Laboratory. (US DoD photo by EJ Hersom.)
Thirdly, the argument for a ban often refers to the inability of the law of armed conflict (LoAC) to deal with autonomous weapons. But there is no reason why it cannot. LoAC lays down certain requirements which fighters must adhere to. The main ones are not targeting civilians, doing everything feasible to reduce collateral damage, and not launching attacks that may cause collateral damage disproportionate to the military advantage to be gained. LoAC says nothing about how these requirements must be met, so there is no reason in law why an autonomous system cannot meet them.

Technologically, everyone accepts that current systems cannot comprehend their environments, potential targets, or civilians to a degree that would let them accurately distinguish civilians from human targets. But that does not mean future systems could not do this. If a future system can comply with the legal requirements, there is no problem with the law; there is only a problem with current systems, which are inadequate. Any system that is inadequate would be unlawful to use, and so should never be deployed. So LoAC would provide sufficient protection to future civilians. Indeed, people who suggest this is still not enough civilian protection should probably start arguing for a renegotiation of LoAC, rather than against autonomous weapons specifically.
US 911th Airlift Wing law of armed conflict training, Aug. 11, 2012. (U.S. Air Force photo by Senior Airman Joshua J. Seybert/Released)
Consequently, I think a ban is unlikely to be negotiated by states. Even if states were to agree to the creation of an additional protocol on autonomous weapons, I wouldn’t expect it to be anything more than a minimal restriction on usage. And even if that were achievable, I’d expect it to be flouted should a high-intensity interstate conflict begin between two states armed with autonomous weapons. So the likelihood of a ban is very small, and even if an international legal instrument could be negotiated, I wouldn’t expect it to do much.
Until next time