The UN Human Rights Council (UNHRC) has questioned the legitimacy and ethics behind so-called ‘killer robots’ – fully autonomous lethal weapons that campaigners say can go out of control.
The robots are being developed by the US, Israel and the UK and are programmed to operate autonomously on the battlefield – unlike drones, which require human commands to attack targets.
They have been severely criticised by the UNHRC in a report, in which author Christof Heyns says that the central question is whether the machines can be programmed to comply “with the requirements of international humanitarian law and the standards protecting life under international human rights law”.
He added, “Beyond this, their deployment may be unacceptable because no adequate system of legal accountability can be devised, and because robots should not have the power of life and death over human beings”.
The matter was discussed this week at a session of the UN Human Rights Council in Geneva, Switzerland, ahead of which leading human rights groups called for a halt to the development of the robots.
“The UN report makes it abundantly clear that we need to put the brakes on fully autonomous weapons, or civilians will pay the price in the future”, said Steve Goose, arms director at Human Rights Watch.
“The US and every other country should endorse and carry out the UN call to stop any plans for killer robots in their tracks.”
Meanwhile, Peter Asaro, founding member of the Campaign to Stop Killer Robots, said, “To avoid future harm, states must take action now to stop the creation of weapons that would choose and fire on targets on their own without meaningful human supervision or control”.
Supporters of the weapons say the robots would reduce the number of soldiers killed on the battlefield, but opponents counter that there is no guarantee they would reliably avoid harming civilians.