Digital Ethics 2 – Asimov’s Laws

I have been thinking more about digital ethics of late, after a recent conversation turned to the Laws of Robotics as laid out by Asimov (the famous Three, plus the Zeroth Law he added later):

  • Zeroth Law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
  • First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • Second Law: A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
  • Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Now, whilst these sound all well and good, they don’t cover the whole host of situations that could arise. Indeed, I would argue that Asimov’s own work is an examination of all the ways these laws can be bent and/or broken. An example would be a person with a broken arm or leg: the robot would need to cause harm in order to reset the bone, but inaction would obviously also lead to harm, leaving the First Law pointing both ways at once.

What did occur to me during the conversation was how much they fit a command-and-control mentality, rather than the empowerment we tend to strive for in today’s agile-led tech world.

And I feel very much as though implementing these laws in the modern world would do nothing at all to actually improve our situation, because nothing in that set of laws compels the robots to act in situations where both options are bad; at best it nudges them towards risk mitigation.

So what if we tried the alternative, and programmed our robots to value empowerment for the largest number of people in any given situation?

Consider this: “A robot must maintain or increase the amount of empowerment for the most people in any given situation.”

What do we mean by empowerment? The process of becoming stronger and more confident, especially in controlling one’s own life and claiming one’s rights; the opposite of prohibition and of banning people from behaving as they wish.
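
To make that less abstract, here is a minimal sketch in Python of what such a rule might look like. All the names here (Action, choose_action, the empowerment scores) are my own invention for illustration, not an established API: estimate how each candidate action changes the empowerment of everyone affected, then pick the action with the highest total.

```python
from dataclasses import dataclass, field


@dataclass
class Action:
    """A candidate action and its estimated effect on each affected person.

    empowerment_delta maps a person (or group) to the estimated change in
    their empowerment: positive means more control over their own situation,
    negative means less. Producing these estimates is the hard part and is
    assumed to come from the robot's context model.
    """
    name: str
    empowerment_delta: dict[str, float] = field(default_factory=dict)


def total_empowerment(action: Action) -> float:
    """Sum the empowerment change across everyone the action affects."""
    return sum(action.empowerment_delta.values())


def choose_action(actions: list[Action]) -> Action:
    """Pick the action that maintains or increases empowerment the most.

    Unlike a 'thou-shalt-not' ruleset, this always returns *some* action,
    even when every option is bad: it simply picks the least bad one.
    """
    return max(actions, key=total_empowerment)
```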

Let’s examine a situation: you’re locked out of your house, having left your handbag inside with your keys and phone. A passing robot could connect to residential databases, verify that you live at the property, and either call a locksmith or, if appropriately fitted, unlock the door itself.

Conversely, if you were locked out of a server room in the same situation, the robot could weigh up whether you have permission to be in there and, if not, whether phoning the police and temporarily detaining you would be the right move.

In both situations, we have not given the robots a strict ruleset of what not to do, but rather enabled them to make context-informed decisions about what the right path to follow might be.
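
Reusing the Action and choose_action sketch from above, both scenarios run through exactly the same code; only the empowerment estimates (purely illustrative numbers) differ:

```python
# Scenario 1: a resident locked out of their own house.
house = [
    Action("do_nothing",     {"resident": -1.0}),  # stays locked out
    Action("call_locksmith", {"resident": +1.0}),  # slower, but regains access
    Action("unlock_door",    {"resident": +2.0}),  # immediate access restored
]
print(choose_action(house).name)        # -> unlock_door

# Scenario 2: someone without permission locked out of a server room.
server_room = [
    Action("do_nothing",  {"intruder": -1.0}),
    Action("unlock_door", {"intruder": +2.0, "staff": -5.0, "customers": -5.0}),
    Action("call_police", {"intruder": -2.0, "staff": +1.0, "customers": +1.0}),
]
print(choose_action(server_room).name)  # -> call_police
```

The point is not the made-up numbers but the shape of the rule: the robot is compelled to pick something, and the context, who is affected and how, does the rest.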

I wonder whether, rather than ‘thou-shalt-not’ rules, ‘thou-SHALT’ would lead to much less dystopian outcomes in both sci-fi and reality?