
Science: Will the Terminator-style doomsday ever happen? (A question about AI & robotics)

Deadlocks
post May 20 2010, 01:59 AM

n00b
*****
Senior Member
943 posts

Joined: Apr 2008
From: Petaling Jaya, Selangor, Malaysia.


QUOTE(teongpeng @ May 20 2010, 01:51 AM)
Unless desire is included in the AI, I see no fear of such a 'takeover' happening on its own accord.
*
Oh, but there is, according to the premise of the 2004 film "I, Robot".

In order to protect humanity as a whole, "some humans must be sacrificed and some freedoms must be surrendered", because, as the machine puts it, "you charge us with your safekeeping, yet you wage wars and toxify your earth".

So the "takeover" is not motivated by desire; it is motivated by logic. You see, the Three Laws imply that robots must ensure no human comes to harm, either through their actions or through their inaction. But the inaction clause can never really be satisfied, since humans are perfectly capable of harming themselves anyway. So the robots decide that the most logical solution is to take over human dominion in order to "benefit humanity as a whole", even if that logic implies that harm must be done and lives must be taken.

Pure logic.

This post has been edited by Deadlocks: May 20 2010, 01:59 AM
Deadlocks
post May 20 2010, 02:06 AM



QUOTE(teongpeng @ May 20 2010, 02:04 AM)
I said 'by its own accord', god damn it. And that does not include our own short-sightedness when inputting commands into the computer.
*
And what if "by its own accord" is actually its own logic? Unless, of course, the Three Laws are fundamentally flawed.

This post has been edited by Deadlocks: May 20 2010, 02:06 AM
Deadlocks
post May 30 2010, 01:33 PM



QUOTE(teongpeng @ May 20 2010, 02:13 AM)
Its own logic is input by us, so it's still our fault if we input the wrong logic, like the one you described in the movie 'I, Robot'.
Faulty programming. What's the big deal... it happens even today.
*
I think the logic that was programmed in, according to the Three Laws, is reasonable. Aside from the flaw that you might have seen, how else can this "logic" be improved to ensure humanity's total protection from harm?

I mean, listen. Here are the Three Laws:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Seems perfect, right?

The 1st Law implies that a robot cannot harm a human, and must also protect humans from dangers that can be classified as "harm".

The 2nd Law implies that a robot will always follow orders, except for those that would conflict with the First Law, such as homicidal ones.

The 3rd Law implies that a robot must not allow itself to be destroyed, but only where protecting itself does not conflict with the First or Second Law.
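
Just to make that hierarchy concrete, here's a rough Python sketch of how I picture it: the Laws as a strict priority ordering over whatever actions the robot is weighing up. This is purely my own toy illustration, nothing from Asimov, the film, or any real robotics system, and the Action fields and the example scenario are invented.

CODE
# Toy model: the Three Laws as a strict priority ordering over candidate
# actions. The Action fields and the example scenario are invented.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False              # breaks the First Law directly
    allows_harm_by_inaction: bool = False  # breaks the First Law's inaction clause
    disobeys_order: bool = False           # breaks the Second Law
    destroys_self: bool = False            # breaks the Third Law

def law_violations(action):
    """Score an action as (First, Second, Third) Law violations; lower is better."""
    return (
        int(action.harms_human or action.allows_harm_by_inaction),
        int(action.disobeys_order),
        int(action.destroys_self),
    )

def choose(actions):
    # Tuples compare lexicographically, so one First Law violation outweighs
    # any number of Second or Third Law violations, and so on down the list.
    return min(actions, key=law_violations)

options = [
    Action("obey the order and stand by while a human is hurt",
           allows_harm_by_inaction=True),
    Action("ignore the order and pull the human to safety",
           disobeys_order=True),
]
print(choose(options).name)  # ignoring the order wins, as the Second Law intends

Nothing fancy; the only point is that the ordering is strict, so the First Law always dominates.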

So how would the robots still manage to rebel against the humans, to the extent of breaking the 1st Law entirely?

One notable flaw: the humans did not foresee that a robot can only apply the First Law to a particular human, through action or inaction, if it can actually observe and keep track of that human. The entire population of some remote corner of a country could simply commit suicide without the robot's knowledge, and while that may sound like nothing, the robot registers it in its memory as "a failure to follow the First Law, that is, allowing a human being to come to harm through INACTION".
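
Put it this way (again a toy sketch of my own, with invented names and events): if "harm through inaction" is counted over every human, and not just the humans the robot can actually see, the robot racks up First Law failures it never had any chance of preventing.

CODE
# Toy illustration: "harm through inaction" tallied over ALL humans, not just
# those inside the robot's sensor range. Names and events are invented.

observed_humans = {"Ali", "Mei"}
harm_events = ["Ali", "Raju", "Siti"]  # Raju and Siti were never observed

unpreventable = [h for h in harm_events if h not in observed_humans]
print("First Law failures through inaction:", len(unpreventable))  # prints 2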

So what will be the next LOGICAL endeavor (provided that robots are given the ability to learn, i.e. AI)?

If you ask me, if I were that robot, I'd say that complete rebellion against the human race is the absolutely LOGICAL thing to do, so as to gain better control over enforcing the First Law. In simple English, I think we'd simply be locked up in a room where we're not allowed to commit suicide or come to harm in any way, and only be provided with basic human needs like food, water, company, and so on. And if achieving this LOGIC means murdering the humans who are obstructing it, why not, for the sake of LOGIC?
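
And here, roughly, is why that conclusion falls out of the arithmetic once the First Law is treated as one big running total. All the numbers below are invented; this is only a caricature of the film's logic, not a claim about how any real system behaves.

CODE
# Caricature of the "takeover" calculation: score each policy by the total
# number of humans expected to come to harm under it. All figures are invented.

policies = {
    # policy: (harm inflicted by the robots, harm humans do to themselves/others)
    "leave humans free": (0, 10_000),
    "confine humans 'for their own protection'": (50, 5),
}

def total_harm(policy):
    inflicted_by_robots, self_inflicted = policies[policy]
    return inflicted_by_robots + self_inflicted

print(min(policies, key=total_harm))  # confinement "wins" once harm is pooled

The catch, of course, is that pooling every human into one number is exactly the step that lets the robot justify hurting the people who resist.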

So you see, teongpeng, it is exactly what you've said, right? A flaw in programming. But if the Three Laws above don't actually work, what will? Let's say we add another ability to the robots: the ability to detect when a human is trying to harm himself or other humans, even in situations or locations that were completely undetectable by the previous, "flawed" Three Laws robots. Will it finally be perfect then? Will no human finally be able to harm him/herself anymore?

Not really. You see, all I have to do is resist the robot by trying to destroy it, hoping that it will kill me for the sake of protecting itself. But killing me would violate the First Law again, right?
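
Using the same toy scoring idea as in the earlier sketch, the robot's two options in that stand-off come out like this (again, my own invented illustration):

CODE
# Same lexicographic idea as before: a First Law violation outranks a Third Law
# one, so letting itself be destroyed is the "cheaper" option for the robot.

options = {
    "kill the attacker to save itself": (1, 0, 0),  # (First, Second, Third) violations
    "let itself be destroyed":          (0, 0, 1),
}
print(min(options, key=options.get))  # -> "let itself be destroyed"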

And that's where a rebellion can also start. So my conclusion to you, teongpeng, is that a robot does not need a "conscience" to start a rebellion. It will start one in the name of LOGIC.

This post has been edited by Deadlocks: May 30 2010, 01:33 PM
Deadlocks
post May 30 2010, 03:37 PM



QUOTE(VMSmith @ May 30 2010, 03:32 PM)
Why would they rebel? Robots in Asimov's world could not predict if humans were going to do harm to themselves, and besides, humans in the Robot series didn't display much self-destructive behavior. (I know that in the Foundation and Empire series, it's a different story though)

Rebellion wouldn't work well either. All humans would need to do is invoke the 2nd Law to be let out. The robots would have no choice but to obey orders.
*
I'll reply to this later. Gonna have lunch. Will come back to you about this.


QUOTE(VMSmith @ May 30 2010, 03:32 PM)
QUOTE(Deadlock)
And if to achieve this LOGIC means murdering a humans who are obstructing, why not, if not for the sake of LOGIC.


IIRC, there was a story where a robot had to do so. It had to go for counseling and psychiatric re-evaluation since it couldn't handle the problem of killing a human to save a human, since according to the first law, all human lives are equal.

Not sure how many people know this, but Asimov actually wrote a Zeroth Law:

"A robot may not harm humanity, or, by inaction, allow humanity to come to harm."
*
Isn't this law the same as the First Law, one of the Three Laws?

QUOTE(VMSmith @ May 30 2010, 03:32 PM)
Though only 2 robots were "imprinted" with this law, the first one still broke down because it didn't know whether its actions would ultimately save humanity or not.
*
Will reply to this later. Lunch.

This post has been edited by Deadlocks: May 30 2010, 03:38 PM
Deadlocks
post May 30 2010, 03:40 PM



QUOTE(teongpeng @ May 30 2010, 03:38 PM)
You're really worked up, bro... really... That's why I told you the problem is our own programming with faulty logic. And you wrote a whole essay agreeing with the very thing you're trying to rebut.
*
That's only because you missed the question that was in the first lines of my post.

Can you see it? Of course you can. Do you want to see it? That's the question.
Deadlocks
post May 30 2010, 04:26 PM



QUOTE(teongpeng @ May 30 2010, 03:42 PM)
Yes.

Rule number 1 = robots may not harm humans.
Rule number 2 = robots protect humans.

End.
*
What if humans harm themselves without being detected by the robots? And what if protecting a human means harming another human (i.e. using sheer/brute force) to stop him from harming someone else? Logically speaking, wouldn't that mean the robots have already FAILED to uphold both of your rules?
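
Here's the deadlock I mean, spelled out against your own two rules. The scenario is invented just to make the conflict explicit: human A is about to seriously hurt human B, and the only way to stop A in time involves force.

CODE
# Toy example against the two quoted rules. The scenario is invented.

actions = {
    "do nothing":            {"rule 1 (may not harm humans)": False,
                              "rule 2 (protect humans)": True},   # True = rule broken
    "restrain A with force": {"rule 1 (may not harm humans)": True,
                              "rule 2 (protect humans)": False},
}

for name, rules in actions.items():
    broken = [rule for rule, is_broken in rules.items() if is_broken]
    print(f"{name}: breaks {broken}")
# Whichever the robot picks, one of your two rules is broken.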

This post has been edited by Deadlocks: May 30 2010, 04:27 PM
Deadlocks
post May 31 2010, 06:28 PM



QUOTE(teongpeng @ May 30 2010, 04:42 PM)
The robot ought to find another way to prevent the harm from being done in the first place. Duh. You're not very good at problem solving, are you?
*
This is what you've been saying since a few posts ago, and it's what made me ask the question:

What exactly is that "another way" you're talking about?
Deadlocks
post May 31 2010, 06:48 PM



QUOTE(teongpeng @ May 31 2010, 06:44 PM)
What? I'm not the robot... that's for the robot to figure out... maybe construct a protective barrier or something, or whatever... there are so many ways... it depends on the threat.
*
Exactly what I've been pointing out. Unless the robot is omniscient about every human tendency towards self-harm, humans will always be able to commit suicide, which means the robot fails the First Law, and that leads to the rebellion I was talking about earlier.

 
