Over-hyped, catastrophic visions of imminent danger from 'killer robots' nonetheless reflect a genuine need to consider the ethical and legal implications of applying artificial intelligence and robotics in a military context. Despite this, international agreement on how best to address these challenges is hampered, partly by an inability even to converge on a definition of the capabilities that cause concern.
One of the biggest issues is ensuring that human operators retain control over the application of lethal force, but even this specific-sounding formulation lumps together several distinct ways in which control could be lost. A machine gun triggered by a heat sensor would be a misuse of simple automation. Systems that overload human operators with information, or deny them sufficient time to make decisions, could be considered technically impossible to control. And the human crew responsible for selecting the targets of a remotely piloted aircraft might feel sufficiently disassociated from their decisions to lose effective control over them.