AI has already been weaponized – that’s why we need to ban ‘killer robots’


A dividing line is emerging in the debate over so-called killer robots. Many countries want to see new international law on autonomous weapon systems that can target and kill people without human intervention. But the countries already developing such weapons are instead trying to highlight their supposed benefits.

I witnessed this growing gulf at a recent UN meeting of more than 70 countries in Geneva, where those in favor of autonomous weapons, including the US, Australia and South Korea, were more vocal than ever. At the meeting, the US claimed that such weapons could actually make it easier to follow international humanitarian law by making military action more precise.

Yet it is highly speculative to say that “killer robots” will ever be able to follow humanitarian law at all. And while politicians continue to argue about this, the spread of autonomy and artificial intelligence in existing military technology is already effectively setting undesirable standards for its role in the use of force.

A series of open letters by prominent researchers speaking out against weaponizing artificial intelligence has helped bring the debate about autonomous military systems to public attention. The problem is that the debate is framed as if this technology is something from the future. In fact, the questions it raises are effectively already being addressed by existing systems.

Most air defence systems already have significant autonomy in the targeting process, and military aircraft have highly automated features. This means “robots” are already involved in identifying and engaging targets.