QUOTE(Deadlocks @ May 30 2010, 01:33 PM)
If you ask me, if I were that robot, I'd say that complete rebellion against the human race is the absolutely LOGICAL thing to do, so as to maintain better control under the First Law. In plain English, I think we'd simply be locked up in a room where we're not allowed to commit suicide or come to harm in any way, and only be supplied with basic human needs like food, water, company, and so on.
Why would they rebel? Robots in Asimov's world could not predict whether humans were going to harm themselves, and besides, humans in the Robot series didn't display much self-destructive behavior. (I know it's a different story in the Foundation and Empire series, though.)
Rebellion wouldn't work well either. All humans would need to do is invoke the Second Law to be let out. The robots would have no choice but to obey the order.
QUOTE(Deadlocks)
And if achieving this LOGIC means murdering humans who stand in the way, why not, for the sake of LOGIC?
IIRC, there was a story where a robot had to do exactly that. It had to go for counseling and psychiatric re-evaluation afterward because it couldn't handle the dilemma of killing one human to save another; according to the First Law, all human lives are equal.
Not sure how many people know this, but Asimov actually wrote a Zeroth Law:
"A robot may not harm humanity, or, by inaction, allow humanity to come to harm."
Though only two robots were "imprinted" with this law, the first one still broke down because he couldn't know whether his actions would ultimately save humanity or not.