For years, humanity has toyed with the idea of sentient robots killing humans, whether in sci-fi films or books. The fear that machines will surpass humans in violent ways is more widespread than ever, and the term “killer robots” is inching closer to reality.
But how close are we to creating conscious machines and confronting the existential threats that come with them? How far off is the day when a robot decides which of us lives and which of us is marked for extermination? I am afraid the answers to these questions are less than encouraging, perhaps even petrifying.
With emerging technologies, more and more of our lives are being outsourced to self-learning machines, whether it is the assistant in our phones or the air conditioner that adapts to our habits. Police forces have begun to rely on robot dogs, and the use of AI-based algorithms in the military is now commonplace. But what do we have to say about the drones, missiles and guns that think and kill?
What exactly is a killer robot?
Why “killer”, you may ask? In technical terms, such weapons are called “lethal autonomous weapon systems”, or “LAWS”, but with a pointed play on words, a report published by Human Rights Watch popularized the term “killer robots” in a shrewd move to rally public support.
Contrary to the frightening depictions in films and artwork, not all machines with the potential to kill are designed to look like humans. A killer robot is any machine capable of using lethal force against a human being at its own discretion. When a human target is in the crosshairs, the robot, not a human, makes the final decision on what action to take.
The International Committee of the Red Cross defines autonomous weapons as “any weapon system with autonomy in its critical functions – that is, a weapon system that can select and attack targets without human intervention”. In essence, any weapon system that does not require input from a human operator to perform its key function – the elimination of targets – is autonomous in nature.
A simple example of such a weapon is the smart sea mine, which can detect and differentiate between enemy and friendly ships using AI-enhanced software. As soon as a ship is detected, the mine’s computer scans its acoustic and magnetic signature and chooses whether to let the intruder pass or to destroy it. Another example is the Harpy drone, designed and proudly fielded by the Israeli military, which can select and destroy targets on its own once launched.
Automated or Autonomous – What’s the difference?
To grasp the gravity of the situation and to avoid misunderstanding, it is essential to distinguish between the automation of weapons and their autonomy. This distinction makes the ethical challenges associated with autonomous weapons far clearer.
One of the most effective ways to distinguish the two is to look at the relationship each system has with humans as it performs its task. An automated weapon may possess lethal capabilities, but exercising them remains solely the prerogative of a human operator behind a set of controls. In simpler words, the finger on the trigger is always flesh and bone.
Autonomous weapons, by contrast, have an intrinsic capacity to determine, without human control or interference, when and against whom to use lethal force, moving beyond pre-planned algorithms toward something closer to “artificially conscious machines”. Such weapons would be fully capable of carrying out military operations on their own and, at times, of choosing to kill human beings they deem a threat.
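To make the distinction concrete, here is a minimal, purely illustrative sketch – every function and variable name below is invented for this article and drawn from no real weapon system. The automated version cannot fire without a human decision; the autonomous version closes the loop entirely on its own.

```python
# Hypothetical sketch only: no real weapon system is represented here.

def automated_engage(target_locked: bool, operator_confirms: bool) -> bool:
    """An automated weapon: the machine may aim, but only a human
    decision releases the weapon."""
    return target_locked and operator_confirms

def autonomous_engage(target_locked: bool, classified_hostile: bool) -> bool:
    """An autonomous weapon: no operator appears anywhere in this
    function. The machine's own classification is the final word."""
    return target_locked and classified_hostile

# The difference is a single argument: whose judgment sits on the trigger.
print(automated_engage(target_locked=True, operator_confirms=False))   # False
print(autonomous_engage(target_locked=True, classified_hostile=True))  # True
```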
Data decides life and death
Unlike humans, who possess natural senses such as sight, machines depend on sensors fitted by their manufacturers to understand their surroundings: cameras, built-in image processors, infrared and motion-sensing equipment, and other related hardware that tells them when to act. For robots, everything around them is data.
People, likewise, are just a string of data to be processed and responded to. In front of an autonomous weapon, our facial and bodily features, personalities and behaviors are analyzed and sorted into profiles. The robot then makes a decision based on the profile it has assigned us – in the case of a weapon, “kill, harm or ignore”.
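To see what being reduced to a profile can look like, consider this minimal, hypothetical sketch. The class, fields and thresholds are inventions for illustration – no actual targeting software is quoted here – but they capture the shape of the pipeline described above: sensor readings become numbers, numbers become a profile, and the profile is mapped to one of three outcomes.

```python
from dataclasses import dataclass

@dataclass
class Profile:
    """A person, as the machine 'sees' them: nothing but numbers."""
    match_score: float      # similarity to a listed target, 0.0 to 1.0
    carrying_object: bool   # did an image classifier flag a held object?

def decide(profile: Profile) -> str:
    """Map a profile to one of three outcomes: kill, harm or ignore.
    The thresholds are arbitrary illustrations, which is exactly the point."""
    if profile.match_score > 0.9:
        return "kill"
    if profile.match_score > 0.6 and profile.carrying_object:
        return "harm"
    return "ignore"

# A child holding an ice cream and a combatant holding a weapon can collapse
# into the same pair of numbers; this function cannot tell the difference.
print(decide(Profile(match_score=0.65, carrying_object=True)))  # -> "harm"
```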
On paper, proponents of lethal autonomous weapons argue that such systems are not only morally acceptable but ethically preferable. Experts like Ronald C. Arkin believe that, unlike humans, who can be clouded by fear, hatred or hysteria, or overwhelmed by a flood of sensory information, killer robots can be far more “humane”. Their ability to make objective decisions regardless of circumstance gives them an immense edge over humans, who often struggle to decide under pressure.
But who deems these decisions accurate? Can machines recognize people as people – living beings that are uncertain, unpredictable and full of sentiment? Do machines grasp the value of a human life, or the consequences of their own actions?
Are robots better soldiers and enforcers of law?
Experts have often entertained the notion of fielding robotic soldiers in militaries and policing robots in law enforcement agencies, claiming that machines can perform such tasks far better than humans. After all, a robot does not hate, it harbors no sadistic urge to harm or kill, and it has no inclination to rape or to commit a war crime. To many, robots are a step toward making war and law enforcement more “humane”.
Are fears of terminator-like machines descending upon their creators unfounded and irrational? Let us look at the other side of the argument, where experts contend that AI-driven weapons are likely to be inherently indiscriminate and incapable of complying with the necessary international standards. What barriers keep autonomous weapons from becoming discriminate and legitimate?
Inaccurate machine perception
Robots depend on complex hardware and software to distinguish objects and to profile them. Even as the technology improves, robots still find it very difficult to tell a real object from its shadow. The difficulty grows as the scene around the object becomes more complex, especially in the presence of countermeasures such as visual or electronic jamming equipment.
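A hedged illustration of why this matters, using invented numbers and no real perception system: a classifier only produces confidence scores, and some rule must still turn those scores into a verdict even when the scene is ambiguous.

```python
def classify(scores: dict[str, float], threshold: float = 0.5) -> str:
    """Pick the highest-scoring label if it clears the threshold;
    otherwise admit uncertainty."""
    label, score = max(scores.items(), key=lambda kv: kv[1])
    return label if score >= threshold else "unknown"

# In clear conditions the distinction is easy...
print(classify({"person": 0.93, "shadow": 0.07}))   # -> "person"
# ...but under glare, clutter or jamming the scores converge, and a system
# built to act will still act on what is nearly a coin flip.
print(classify({"person": 0.51, "shadow": 0.49}))   # -> "person"
```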
Beyond mistaking shadows, robots cannot currently be expected to distinguish legitimate from illegitimate targets, especially in situations where violent or delinquent activity is being targeted. Nor does a robot question its orders: it will carry them out regardless of their legality. With the tap of a button, any of us could be marked for elimination, no matter what we do or do not do.
Noel Sharkey, a professor of artificial intelligence and robotics, argues that he can already imagine children being zapped for offering an ice cream to a robot, simply because the robot cannot process the situation.
The Framing Problem
No environment involving humans is ever static or predictable enough for a robot to respond to it ideally. Despite having far more processing capability than any human, robots can easily be “confused” by a growing stream of data from their surroundings, producing unpredictable outcomes.
Most soldiers would not, for example, blow up a school full of children if there is a sniper on its roof, but who knows what a robot would do.
Royal United Services Conference
Unreliability of Software
The software being used in any system must be reliable, safe and trustworthy. However, no software is perfect and none of it has any loyalties. The same software that is used to guide your electric car can be used to cause crashes. An army of killer robots is only a simple “hack” away from turning against the people that built them.
In such cases, how safe are the citizens placed under the surveillance and protection of such fallible machines? With reports highlighting that these systems are biased against certain skin tones and struggle to distinguish facial features, the fact that they can also be hacked does not inspire confidence.
No legal accountability
Under international law, the fundamental principle of liability demands that all military orders follow a clear chain of command and that no unmanned weapon system fires without a human operator in the loop. Weapons and robots that make their own decisions create a strong likelihood of a legal vacuum in which crimes go unpunished and justice is denied.
Any breach of law would presumably be laid at the feet of the manufacturers or designers of the autonomous weapon. But if the weapon’s algorithm is actively evolving and making its own choices, how can the burden rest exclusively on the manufacturers? And if the blame shifts to the military or civilian leaders who deploy such weapons, how can the actions of a weapon be attributed to someone who had no malicious intent to commit the crime but simply responded to the situation? With autonomous weapons, every culprit can escape liability.
At the same time, no existing jurisprudence applies to artificially generated consciousness, nor do legal frameworks exist under which such systems could be held to account. Yet the question of accountability will remain a dilemma, because nothing with the potential to harm should exist without legal and moral accountability.
It would be a mistake to presume that the transfer of authority involves a simultaneous absolution of responsibility. It does not.
David Watson, on artificial intelligence and weapons
Can we stop killer robots from becoming a reality?
What happens if a learning machine kills someone who did not deserve to die – who gets the blame? And what would prevent AI-enabled machines from turning on humanity and reenacting an overrated science-fiction film in which all humans are exterminated?
While the flow of technology cannot be stopped, the ways it is employed can be restricted. Even as military investment in artificial intelligence and autonomous weapons rises, organizations such as the Campaign to Stop Killer Robots rally support against the emergence of such weapons.
Recently, an international treaty to prohibit and restrict the development and deployment of killer robots was endorsed by a multitude of countries and leading experts. Yet, as with other unenforceable international treaties, there is little hope of compliance beyond signatures and hollow promises in legislative assemblies.
Where do we stand and where to go?
When an autonomous drone allegedly killed a person in Libya, as cited in this UN report, history was made. For the first time in human history, inanimate objects and data dictated the destiny of a human being – a matter of life and death. Allowing the proliferation of weapons that lack meaningful human control is not only an infringement of basic principles of international law but also a threat to human dignity, as we are reduced to meaningless data and objects. We, as citizens, must not only understand the fundamentals of autonomous weapons but also advocate for their restriction, or risk being analyzed, profiled and potentially targeted.
Yet despite the concerns and outcry of experts, states still permit the development and deployment of killer robots that will one day devastate and change the world as we know it, leaving only cold mechanical indifference and the clank of dehumanizing metal in the attempt to make our conflicts more “humane”.