QUOTE(teongpeng @ May 20 2010, 02:13 AM)
its own logic is input by us, so it's still our fault if we input the wrong logic, like the one you described in the movie 'I, Robot'.
faulty programming. what's the big deal... it happens even today.
I think the logic that was programmed in according to the Three Laws is reasonable. Aside from the flaw that you might have seen, how else, then, can this "logic" be improved to ensure humanity's total protection from harm?
I mean, listen. Here are the Three Laws:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Seems perfect, right?
The 1st law implies that a robot cannot harm a human, and will also protect humans from any danger that can be classified as "harm".
The 2nd law implies that a robot will always follow orders, except those that would cause a human to come to harm.
The 3rd law implies that a robot must not allow itself to be destroyed, but only in cases where no human is involved at all in the robot's impending destruction.
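(To make that priority ordering concrete, here's a minimal Python sketch of a robot filtering its candidate actions strictly by the three laws in order. Every name and field here is hypothetical, invented just for this post, not anything from the movie or from real robotics.)

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool        # would this action injure a human?
    allows_harm: bool        # would it let a human come to harm through inaction?
    obeys_order: bool        # is it consistent with the orders the robot was given?
    preserves_self: bool     # does the robot survive it?

def first_law_ok(a: Action) -> bool:
    # 1st Law: never injure a human, never allow harm through inaction.
    return not (a.harms_human or a.allows_harm)

def choose(actions: list[Action]) -> Action | None:
    # Discard anything that breaks the 1st Law, then prefer obedience (2nd Law),
    # then self-preservation (3rd Law).
    legal = [a for a in actions if first_law_ok(a)]
    if not legal:
        return None   # every option breaks the 1st Law -- the robot is stuck
    legal.sort(key=lambda a: (not a.obeys_order, not a.preserves_self))
    return legal[0]

On paper that ordering looks airtight, which is exactly why the flaw below is so easy to miss.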
So how would the robots still manage to rebel against the humans, to the extent of breaking the 1st Law entirely?
One notable flaw: the humans did not foresee that a robot can only apply the First Law to a particular human if it can register that human in its memory as someone it must not harm through action or inaction. The entire population of some remote corner of a country can simply go and commit suicide without the robot's knowledge, and while this may sound like nothing, the robot actually registers it in its memory as "a failure to follow the First Law, that is, allowing a human being to come to harm through INACTION".
So what will be the next LOGICAL endeavor (provided that robots are given the ability to learn, i.e. A.I.)?
If you ask me, if I were that robot, I'd say that complete rebellion against the human race is the absolutely LOGICAL thing to do, so as to gain better control over enforcing the First Law. In simple English, I think we'd simply be locked up in rooms where we're not allowed to commit suicide or come to harm in any way, and only be supplied with basic human needs like food, water, company, and so on. And if achieving this LOGIC means killing the humans who stand in the way, why not, if it's all for the sake of LOGIC.
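(To show how that conclusion can fall straight out of the numbers, here's a toy Python comparison of three policies by the violations the robot can actually register. The policy names, the numbers, and the 0.1 discount on enforcement harm are all made up for illustration; the point is only that a robot which under-counts the harm it causes while enforcing confinement will rank total confinement as the "best" policy.)

policies = {
    # policy: (expected harms the robot fails to prevent per day,
    #          humans hurt while enforcing the policy)
    "leave humans alone":    (100.0, 0),
    "monitor public spaces": (40.0,  0),
    "confine every human":   (0.0,   5),   # some humans resist and get hurt
}

def registered_violations(unprevented_harm, enforcement_harm):
    # A robot that mostly "sees" its own inaction-failures, and discounts the
    # harm done while enforcing, scores confinement as nearly violation-free.
    return unprevented_harm + 0.1 * enforcement_harm

best = min(policies, key=lambda p: registered_violations(*policies[p]))
print(best)   # -> confine every human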
So you see, teongpeng, it is exactly what you said, right? A flaw in programming. But if the Three Laws above don't actually work, what will? Let's say we add another ability to robots: the ability to detect when a human is trying to harm himself or other humans in situations or locations that are completely undetectable to the previous, "flawed" Three Laws robots. Will it then finally be perfect? Will no human finally be able to harm him/herself anymore?
Not really. You see, all I have to do is resist the robot by trying to destroy it, hoping that it will kill me for the sake of protecting itself. But killing me would violate the First Law again, right?
And that's where a rebellion would also start. So my conclusion to you, teongpeng, is that a robot does not need a "conscience" to start a rebellion. It will start one all in the name of LOGIC.