We all know Asimov's three laws that govern all robots:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
This is great for what we think of as robots: usually a body, a head, some arms and legs. They will walk around, working with their human counterparts, doing construction work where parts are too heavy for humans. They will make sure a person doesn't get in harm's way where they can avoid it, and of course make sure no harm comes to themselves either.
Two fine details get in the way of the three rules' perfect plan.
What if a robot doesn’t know something will harm a human? If a robot is told something is good for humans, or the robot doesn’t understand the device it is in control of, harm could happen because there isn’t an understanding that the rules apply. This isn’t the intentional menace of the movie The Terminator; the harm is accidental.
Where the harm will actually come from is computer programs. There are no rules of good conduct built into software. A program has no appreciation of negative impact on a human. It could be told that a certain outcome is bad, but there is no reasoning that an action could lead to harm.

I thought of this tonight when 60 Minutes was covering computer programs that buy and sell stocks in a fraction of a second without knowing anything about a company, its leaders, or its employees. The program only knows that there is a movement matching a pattern, and that the pattern has an expected action to take. I doubt most stock brokers worry about keeping a company's employees happy, or whether the CEO is spending personal time wisely, but those things play into how the company's stock is bought and sold. A computer has no concept that, in its program to make money, employees will lose jobs, affecting crime rates and possibly creating harm to humans.

This is just a small example. The point being, the risk comes not from the robots that resemble humans doing physical duties, but from the bits and bytes that control what makes the world go around.
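To make the point concrete, here is a toy sketch, in Python, of the kind of pattern-matching logic described above. Everything in it is hypothetical (it is not any real trading system): the program reacts to a price pattern and nothing else.

```python
# Toy illustration of pattern-based trading (hypothetical, not a real
# trading system). The program sees only price movement -- never the
# company, its leaders, or its employees.

def moving_average(prices, window):
    """Average of the last `window` prices."""
    return sum(prices[-window:]) / window

def decide(prices):
    """Return 'buy', 'sell', or 'hold' from a price pattern alone.

    A short-term average above the long-term average is treated as
    upward momentum (buy); below it, downward momentum (sell).
    Nothing here models layoffs, crime rates, or harm to humans --
    the program has no concept of them.
    """
    if len(prices) < 10:
        return "hold"  # not enough data to match a pattern
    short = moving_average(prices, 3)
    long_ = moving_average(prices, 10)
    if short > long_:
        return "buy"
    if short < long_:
        return "sell"
    return "hold"

# Rising prices match the "buy" pattern:
print(decide([10, 10, 10, 10, 10, 10, 10, 11, 12, 13]))  # buy
```

The rules the program follows are purely about price movement; a Three-Laws-style constraint simply has nowhere to attach, because harm is not a concept the program represents.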