The day Dr. David Dao was dragged off a United plane so that United employees could get to their next destination was significant for another, possibly longer-lived reason than the sheer brutality it portrayed. Those familiar with science fiction, and more specifically with Isaac Asimov, will have heard of his three laws of robotics. The laws were created as a plot device, but their relevance in our time of increasingly capable AI is becoming more apparent. Elon Musk and Stephen Hawking are warning us about advancing AI. AI is making its way into our lives at a steady pace, and so far we've been very accepting of it.

The Dao incident, however, may be the first time the 1st law of robotics was broken. For those not familiar with these laws, I'll list them here.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

In our current time, the 3rd law doesn't yet apply: most robots have no way to protect themselves, except possibly in advanced research facilities where walking, running, and rolling robots are the norm.

In the case of Dr. Dao, United employees first asked for volunteers to deplane. When no one volunteered, they said they would let the computer pick four seats whose occupants would have to vacate them. This step has a human asking a machine to pick four other humans at random. At this point, only the United employees, through their company training, know what will happen to passengers who refuse to deplane. When Dr. Dao refused, United called police to “assist him” to the floor of the aircraft and drag him off. So a computer unwittingly aided a human decision to harm a human, thereby breaking the first law of robotics.
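
To make that step concrete: the selection itself is trivial and morally blank. Here is a minimal, purely hypothetical sketch in Python of what “letting the computer pick” amounts to; the function name and cabin layout are my own invention, not United's actual system.

```python
import random

# Purely hypothetical sketch (not United's actual system): the "computer"
# simply samples four occupied seats at random. The program has no notion
# of consequences; the ethical weight stays with the humans who act on
# its output.
def pick_seats_to_vacate(occupied_seats, count=4):
    """Return `count` seats chosen uniformly at random."""
    return random.sample(occupied_seats, count)

if __name__ == "__main__":
    # A made-up 30-row, 6-across cabin for illustration.
    seats = [f"{row}{letter}" for row in range(1, 31) for letter in "ABCDEF"]
    print(pick_seats_to_vacate(seats))
```

A few lines of uniform random sampling is all the “decision” there is; everything that followed was decided by people.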

You may find this point a bit of a stretch, but even if it is, the actual, un-stretched case is not far off. This is because of an even more striking human behavior in this situation.

United asked for volunteers to deplane. When none came forward, a United employee said they would have a computer do it, removing their own culpability and ascribing it to the “computer”, or, in Thurman Merman's words from Bad Santa, “so it wouldn't be [their] bad thing”. The fact that a human was so easily moved to let a computer decide random passengers' fate is troubling.

This attribute (defect?) in humans is well known and has been shown experimentally. In one well-known study, the Stanford prison experiment, one randomly selected half of a group of individuals (who had done no wrong) were imprisoned and the other half became their jailers; the results were startling, with the “guards” quickly mistreating the “prisoners”. In this case, United employees would certainly feel like they “own” the plane, because they have been told they have control over the passengers in certain situations. Even though the passengers are not doing anything wrong, aspects of this behavior are apparent. With so few emergency situations on planes (thankfully), an outlet for the employees' training is not readily available, and that builds a level of stress.

Because humans will act this way, and because AI is becoming more and more prevalent, we have a compelling argument for implementing robotic laws – or in a more modern frame of reference – AI safety.