South African legal scholar and United Nations special rapporteur Christof Heyns once wrote: "While earlier revolutions in military affairs gave the warrior control over ever more powerful weapons, autonomous weapons have the potential to bring about a change in the identity of the decision-maker. The weapon may now become the warrior."
While the ADF may not appreciate being termed "warriors", the sentiment in this quote eloquently captures the changed circumstances we now find ourselves in: not just the changing strategic environment, but also the weapons being developed within it.
At a recent roundtable at the ANU, Australia confirmed it is working on autonomous weapons development, and on defence capabilities against possible future lethal autonomous weapons systems (LAWS). It also reaffirmed Australia's position that a treaty to ban such technologies would be premature because there is no agreed definition of the technology.
This topic is plagued by definitional muddiness. There is no consensus on the meaning of autonomy, although last November Defence helpfully produced a Concept for Robotic and Autonomous Systems. It contains diagrams that delineate a spectrum of increasing autonomy, from remotely operated systems, such as bomb disposal robots under full human control, to autonomous systems that may make the decision to use lethal force independently of human input. It is this latter end of the spectrum that is cause for alarm; indeed, the development of fully autonomous weapons systems has a civil society campaign dedicated to stopping it, the Campaign to Stop Killer Robots.
The arguments in favour of developing fully autonomous systems include that they reduce risk to one's own forces, and so will save lives, and that adversarial machines will make decisions far faster than any human could respond. The arguments against include that removing such risks may make war, or accidental escalation, more likely; that automated machines make mistakes; and that violent non-state actors may get hold of them.
The ethical, technological and legal implications of fully autonomous systems are considerable. Think Terminator movies. The ethical considerations include the question of whether we should be developing weapons systems that delegate the decision to use lethal force to a machine. Surely being killed by a machine whose algorithms calculated a decision to kill a human target denigrates human dignity and eviscerates our humanity? And who is responsible when such systems get it wrong? This is not a Luddites v innovators issue: no one would dispute that a robot is far better to deploy in bomb disposal than anything living. The issue is the decision to take human life, potentially at scale.
To wait for an agreed definition is understandable; however, there are norms that could be abided by and promoted now, and pleas are coming from the UN secretary-general and more than 4000 technologists not to let this genie out of the bottle. There is no time to waste. Already, Turkish and Israeli drones have been deployed in conflicts in the Middle East, some using facial recognition, and in at least one strike in Libya a drone fired on and killed civilians. Russia is developing an unmanned, nuclear-powered, 20-metre-long torpedo to carry both conventional and nuclear warheads.
There are clear strategic, operational and tactical advantages to putting autonomy into the battlespace, and we can choose to trust technology, or not, to do many things. But should we really trust it with the decision to kill?
- Stephanie Koorey is currently based at the ANU College of Law, and has researched arms control and human rights for over 25 years.