OPINION
One of the most contentious topics in future warfare is whether lethal autonomous weapons (LAWs) should, independently of human involvement, be enabled to offensively engage targets, and even kill humans, based on pre-programmed descriptions and constraints.
There are some obvious situations in which autonomous systems might be used. One example is protecting sensitive proscribed-entry sites, such as nuclear facilities, from intruders and terrorists.
Another might be neutralising an incoming kinetic or cyber attack because a human would not be able to make a decision quickly enough to prevent it from succeeding.
Less controversially, autonomous offensive drones and Unmanned Combat Vehicles (UCVs) could in future prove attractive to Western defence forces for tasks that are difficult for humans to sustain or perform.
These could include tasks that are manpower-intensive 24 hours a day, that exceed human physical limitations (such as high G-forces, oxygen-deprived environments, or long continuous duration), that are high-risk for humans, or roles where blue force casualties are likely to be high, such as high-intensity urban warfare against terrorists or insurgents.
Russia and China are already deploying weapon systems capable of acting autonomously (such as those based on unmanned underwater vehicles), but these still require a human command to activate them to engage targets.
Another less controversial future application could be proactive autonomous 'robots' for casualty recovery on the battlefield, mine clearance, surveillance and reconnaissance, and a host of other defence roles where machines could reliably emulate or out-perform humans at no risk to blue force personnel.
- Clive Williams is a visiting professor at the ANU's Centre for Military and Security Law and former Defence Intelligence Organisation senior arms control analyst.