There were stories in which the robots were programmed with restricted definitions of "human," which narrows whom the First Law protects.
Asimov compared his Three Laws to the rules we already expect any tool to follow:
1) A tool must not be dangerous to its user
2) A tool must perform its function
3) A tool must be reliable
We would want an automated drill to stop if a person is in its way, but as long as the work area is clear, we want the drill to do its job. We also want it not to break down, but that concern is subordinate: if the task is urgent enough, we might use the tool even if using it destroys it. Asimov's Laws express essentially the same idea. We want the robot not to hurt people, to obey orders so long as it can do so without hurting people, and to protect itself so long as doing so harms no one and violates no orders.
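To make that priority ordering concrete, here is a minimal sketch in Python. The `Action` record and its fields (`harms_human`, `obeys_order`, `damages_self`) are invented for this illustration; it is a toy model of the precedence among the three rules, not anything from Asimov's stories or a real robot controller.

```python
# Illustrative sketch only: check the three tool-style rules in strict
# priority order before acting. All names here are hypothetical.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool    # would carrying this out injure a person?
    obeys_order: bool    # was this ordered by an authorized user?
    damages_self: bool   # would this wear out or break the robot?

def permitted(action: Action) -> bool:
    """Apply the three rules in priority order."""
    # Rule 1 analogue: never act if a human would be harmed.
    if action.harms_human:
        return False
    # Rule 2 analogue: do the job the user ordered,
    # even at the cost of self-damage (rule 3 is subordinate).
    if action.obeys_order:
        return True
    # Rule 3 analogue: otherwise, avoid self-damage.
    return not action.damages_self

# The drill keeps working while the area is clear,
# and stops the moment a person steps into its path.
person_in_path = Action("drill", harms_human=True, obeys_order=True, damages_self=False)
clear_area = Action("drill", harms_human=False, obeys_order=True, damages_self=True)
print(permitted(person_in_path))  # False
print(permitted(clear_area))      # True
```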
Real robots raise tricky questions. You probably don't want the robot obeying just anyone's orders; otherwise strangers could order it to come home with them. The robot would have to come out of the box prepared to take orders only from its owner (and from customer support).
The First Law gets tricky too. You wouldn't want the robot to restrict voluntary risk: a robot that forbade humans from playing sports, because sports can cause injuries, wouldn't be a good robot.
The Zeroth Law gets really problematic. It is difficult to imagine the horror of the Black Death, but those who survived found that their labor was suddenly much more valuable, and the status of peasants improved. Suppose there had been robots that could cure the Black Death. The First Law would require the robots to provide the cure. The Zeroth Law would allow them to decide that humanity was better off with the plague than without it, and to withhold the cure. We might want a robot to interfere with a mugger, even at the cost of hurting the mugger, but we probably wouldn't want robots taking matters that large into their own hands. The survivors benefited from the Black Death, but to choose the Black Death would be horrific.