
Autonomous Robots - Friend or Foe?

June 19, 2018 | Expert Insights

Some of the world’s leading robotics and artificial intelligence pioneers are calling on the United Nations to ban the development and use of killer robots. However, what exactly is an autonomous robot, and how much of a threat do such machines pose to us?

Background 

The oldest automatically triggered lethal weapons are the land mine, used since at least the 1600s, and the naval mine, used since at least the 1700s. Anti-personnel mines are banned in many countries under the 1997 Ottawa Treaty.

In July 2015, over 1,000 experts in artificial intelligence signed a letter warning of the threat of an arms race in military artificial intelligence and calling for a ban on autonomous weapons. The letter was presented in Buenos Aires at the 24th International Joint Conference on Artificial Intelligence (IJCAI-15) and was co-signed by Stephen Hawking, Elon Musk, Steve Wozniak, Noam Chomsky, Skype co-founder Jaan Tallinn and Google DeepMind co-founder Demis Hassabis, among others.

However, weapons systems being developed today tread the thin line between automated and autonomous. The NBS MANTIS short-range force-protection system will detect, track and shoot down projectiles at close range to the protected base. The German Army will be the first military in the world to field such a defence against aerial threats.

Analysis 

Simply put, there are three types of control over robots. Open loop: a machine that does not use feedback (for instance, a washing machine that is switched on, runs its cycle and then switches off). Closed loop, or automated: a machine that uses feedback (for instance, a thermostat that measures the surrounding temperature and uses that reading to regulate heating). Adaptive: a control system that ‘learns’ and adjusts itself. It is in this last category that Lethal Autonomous Robots would operate.
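To make the distinction concrete, the short Python sketch below contrasts the three categories using a simple temperature-control scenario. It is purely illustrative: the class names, thresholds and learning rule are hypothetical and are not drawn from any real control or weapons system.

# Illustrative sketch of the three control types described above.
# All names and values here are hypothetical.

class OpenLoopHeater:
    """Open loop: runs a fixed programme and ignores the environment."""
    def run(self, minutes):
        # Heat for a fixed time, then stop; no feedback is used.
        return f"heated for {minutes} minutes"

class ClosedLoopThermostat:
    """Closed loop / automated: uses feedback (the measured temperature)."""
    def __init__(self, setpoint):
        self.setpoint = setpoint
    def step(self, measured_temp):
        # Feedback decides the action, but the rule itself never changes.
        return "heat_on" if measured_temp < self.setpoint else "heat_off"

class AdaptiveThermostat(ClosedLoopThermostat):
    """Adaptive: the control rule itself is adjusted from experience."""
    def __init__(self, setpoint, learning_rate=0.1):
        super().__init__(setpoint)
        self.learning_rate = learning_rate
    def step(self, measured_temp, occupant_feedback=0.0):
        # occupant_feedback > 0 means "too cold", < 0 means "too warm";
        # the controller shifts its own setpoint, i.e. it 'learns'.
        self.setpoint += self.learning_rate * occupant_feedback
        return super().step(measured_temp)

if __name__ == "__main__":
    thermostat = AdaptiveThermostat(setpoint=20.0)
    print(thermostat.step(measured_temp=18.5, occupant_feedback=1.0))  # heat_on
    print(round(thermostat.setpoint, 1))  # setpoint nudged to 20.1

The open-loop and closed-loop controllers follow fixed rules; only the adaptive controller modifies its own behaviour over time, which is the property that places Lethal Autonomous Robots in the third category.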

There are four main ethical questions raised by the possibility of Lethal Autonomous Robots (LARs).

To start with, autonomous weapons cannot make the complex ethical choices a battlefield demands: they lack human judgement and cannot understand context.

Secondly, from a military perspective, replacing human troops with machines makes the decision to go to war easier, which shifts the burden of armed conflict onto taxpaying civilians.

Thirdly, there is the accountability gap. If something goes wrong and a robot goes haywire and causes damage, who takes the blame? The commander, the programmer, the manufacturer or the robot itself?

Fourthly, and perhaps most dangerously, there is the risk of redesign. Stephen Hawking warned that machine intelligence would surpass biological intelligence and that the transition would carry existential risk. Once humans develop artificial intelligence to a certain degree, it could redesign and update itself until it outstrips human performance.

Science fiction writer Isaac Asimov set out three laws that a robot must abide by. First Law: a robot may not injure a human being or, through inaction, allow a human being to come to harm. Second Law: a robot must obey orders given to it by human beings, except where such orders would conflict with the First Law. Third Law: a robot must protect its own existence as long as such protection does not conflict with the First or Second Law. A Lethal Autonomous Robot would, by its very definition, violate these laws. However, the turbulent geopolitics of today may well call for these ethics to be sacrificed for the greater good.

Counterpoint 

Despite these drawbacks, LARs are still being considered for production because of their precision. Once a drone or Lethal Autonomous Robot is programmed to eliminate a specific target, it will make certain that it attacks only that target. The major advantage of such a weapon would be to minimize collateral damage on the battlefield and to protect civilians and hostages in areas controlled by non-state actors.

Assessment 

Our assessment is that autonomous robots may be the answer to many of the crises the world currently faces. A robot is meant to perform repetitive, hazardous or unsanitary jobs that a human would usually not perform. However, whether a robot should be given the authority to spare or take a human life is an ethical dilemma that must be resolved before the development of such weapons proceeds.
