Asimov's Three Laws are well beyond our ability to implement. Obeying the First or Third Law requires that the robot be able to recognize when harm is taking place or is about to take place, and then to work out what steps would correct or prevent that harm.
As for the Second Law, getting the robot to understand what you want when you give it an order is tricky enough as it is.
I didn't think I, Robot was a great movie, but it did seem like a possible outcome of the Zeroth Law of Robotics: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm." Once you allow robots to violate the First Law, pretty much anything goes. A robot would be free to do anything it thinks will prevent harm to humanity.
As for Nightfall, that was a couple of hours I wish I could surgically remove from my brain.
Last edited by QuantumIguana; 09-21-2012 at 06:56 PM.