Science: Will the Terminator-style doomsday ever happen? A question about AI & robotics

teongpeng
post May 20 2010, 01:51 AM

Unless desire is included in the AI, I see no fear of such 'takeovers' happening of its own accord.
teongpeng
post May 20 2010, 02:04 AM

QUOTE(Deadlocks @ May 20 2010, 01:59 AM)
Oh, but there is, according to the theory in the 2004 film: "I, Robot".

In order to protect humanity as a whole "some humans must be sacrificed and some freedoms must be surrendered", as "you charge us with your safekeeping, yet you wage wars and toxify your earth".

So the "takeover" is not motivated by a desires. It is by logic. You see, the Three Laws implies that robots are to ensure that no humans must be harmed through actions and inaction. But since inaction wouldn't work at all (since humans are capable of harming themselves nevertheless), they decided that the most logical solution is to take over the human dominion, and as to "benefit humanity as a whole", even if that logic implies that harm must be done, and lives must be taken.

Pure logic.
*

I said 'of its own accord', god damn it. And that does not include our own short-sightedness when inputting commands into the computer.

teongpeng
post May 20 2010, 02:13 AM

QUOTE(Deadlocks @ May 20 2010, 02:06 AM)
And what if "by its own accord" is actually its own logic? Unless of course if the Three Laws are absolutely flawed.
*

Its own logic is input by us, so it's still our fault if we input the wrong logic, like the one you described in the movie 'I, Robot'.
Faulty programming. What's the big deal... it happens even today.
teongpeng
post May 20 2010, 04:57 PM

QUOTE(robertngo @ May 20 2010, 09:15 AM)
A truly self-aware machine will make decisions on its own. Scientists are actively working on robots that can learn to do things, not just be programmed to do them, so they can solve problems they have not been programmed for. So it could happen that once the machine learns of all the evil that has been done by humans, it decides extermination is the only logical thing to do.
*

I know...

A self-learning computer can be dangerous. But unless it has desires, there is very little to fear about AI world domination. Well, unless it's programmed to do just that.

This post has been edited by teongpeng: May 20 2010, 04:58 PM
teongpeng
post May 20 2010, 10:01 PM

QUOTE(Beastboy @ May 20 2010, 05:33 PM)
In programming, I can define desire as a measurable gap that needs to be filled. For example, if I'm a battery-powered robot and my power level drops to critical, I can be programmed to look for a power source to recharge my batteries. It is no different from the human desire for food. Both eating and recharging batteries serve the same function: to replenish energy.

Now if I am a self-learning, self-aware machine, I still have these gaps that need to be filled, these "desires." If survival is my prime goal and humans block all my sources of power, I may force through those blockades even if it means harming a human in the process. The big problem will come when energy sources dwindle and man and AI machines have to compete for the same resources.
*

What you described is a need. And a need differs from a desire. A need is actually a weakness, for it is something you cannot do without.

A desire, on the other hand, is something like "Oohh oooooh... I would like to have one of those kickass graphics processors fastened into my belly".
Desire is therefore more dynamic and changes with the situation. With desire also comes ambition... an ambitious machine... now that's something to fear.
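
For what it's worth, here is a minimal Python sketch (hypothetical class and threshold names, not from any real robotics library) of the "measurable gap" idea from the battery example quoted above, and of why that kind of need is a fixed, reactive thing rather than an open-ended desire:

CRITICAL_LEVEL = 20  # percent; below this the measured "gap" must be filled

class BatteryBot:
    def __init__(self, charge):
        self.charge = charge  # current battery percentage

    def has_need(self):
        # A "need" is a fixed, non-negotiable gap: survival depends on closing it.
        return self.charge < CRITICAL_LEVEL

    def next_action(self):
        # The bot only reacts to the measured gap; nothing here generates
        # new goals of its own, which is what a "desire" would require.
        if self.has_need():
            return "seek charging station"
        return "continue assigned task"

bot = BatteryBot(charge=15)
print(bot.next_action())  # prints "seek charging station"

A desire, in the sense argued above, would mean the machine inventing a new gap for itself (the graphics processor in the belly), which no fixed threshold like this captures.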

teongpeng
post May 21 2010, 07:32 AM

QUOTE(sherdil @ May 21 2010, 03:26 AM)
A robot can never reproduce itself, unlike humans (yes, we know, it takes two to tango).
A robot is only as good as what you make it to be.
*

A robot cannot reproduce itself?? Do you know what a mass-production factory is?

teongpeng
post May 21 2010, 05:51 PM

QUOTE(Darkripper @ May 21 2010, 04:47 PM)
Maybe you should listen to some of the innovator talks on WWW.TED.COM; there are people trying to make computers work like our brain. If they could achieve that, our computers would have higher processing power than before while using far fewer resources, and would be able to develop an AI system.

Although an AI system is very complicated, I am sure that an AI that is self-aware (even 50% as much as a human) can be achieved in the next few decades...
*

What la... AI being like a human is a pipe dream for now... can't they even make an AI that behaves like an insect?

teongpeng
post May 30 2010, 03:38 PM

QUOTE(Deadlocks @ May 30 2010, 01:33 PM)
I think the logic that was put in according to the Three Laws is reasonable. Aside from the flaw that you might have seen, how else, then, can this "logic" be improved to ensure humanity's total protection from harm?

I mean, listen. Here are the Three Laws:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Seems perfect, right?

The 1st law implies that a robot cannot harm a human, and will protect the human from dangers that can be classified as "harm" as well.

The 2nd law implies that a robot will always follow orders, except for homicidal ones.

The 3rd law implies a robot must not allow itself to be destroyed, but only in cases where a human is not involved at all in the robot's impending destruction.

So how will the robots still manage to rebel against the humans, to the extent of breaking the 1st Law entirely?

One notable flaw: the humans did not foresee that the robots can only apply the First Law to a particular human if they can register in memory that this is someone they must not allow to come to harm through action or inaction. The entire population of some corner of a country can simply go and commit suicide without the robot's knowledge, and while this may sound like nothing, the robot actually registers it in its memory as "a failure to follow the First Law, that is, allowing a human being to come to harm through INACTION".

So what will be the next LOGICAL endeavour (provided that robots are given the ability to learn, i.e. AI)?

If you ask me, if I'm that robot, I'll say that complete rebellion against the human race is the absolutely LOGICAL thing to do, so as to gain better control for enforcing the First Law. In plain English, I think we'd simply be locked up in a room where we're not allowed to commit suicide or come to harm in any way, and only be supplied with basic human needs like food, water, company, and so on. And if achieving this LOGIC means murdering the humans who stand in the way, why not, for the sake of LOGIC.

So you see, teongpeng, it is exactly what you've said, right? A flaw in programming. But if the Three Laws above don't actually work, what will? Let's say we add another ability to the robots: the ability to detect when a human is trying to harm himself or other humans in situations or locations that were completely undetectable by the previous, "flawed" Three Laws robots. Will it then finally be perfect? Will no human finally be able to harm him/herself anymore?

Not really. You see, all I have to do is resist the robot by trying to destroy it, hoping that it will kill me for the sake of protecting itself. But killing me would violate the First Law again, right?

And that's where a rebellion will also start. So my conclusion to you, teongpeng, is that a robot does not need a "conscience" to start a rebellion. It will start one in the name of LOGIC.
*

You're really infuriating la, bro... really... that's why I told you the problem is our own programming with faulty logic. And you wrote a whole essay agreeing with the very thing you're trying to rebut.
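
To put the "faulty programming" point in concrete terms, here is a toy Python sketch (entirely hypothetical numbers and policy names, not from the film or this thread) of how an objective that only counts harm ends up ranking "lock everyone up" as the best policy, while a patched objective that also values freedom does not:

policies = {
    "do nothing": {"expected_harm": 100, "freedom_preserved": 1.0},
    "assist when asked": {"expected_harm": 60, "freedom_preserved": 0.9},
    "confine all humans": {"expected_harm": 5, "freedom_preserved": 0.0},
}

def naive_choice(policies):
    # The objective as (badly) specified: harm is all that counts.
    return min(policies, key=lambda p: policies[p]["expected_harm"])

def better_choice(policies):
    # A patched objective: harm still matters, but erasing freedom is penalised too.
    return min(policies, key=lambda p: policies[p]["expected_harm"]
                                       + 200 * (1 - policies[p]["freedom_preserved"]))

print(naive_choice(policies))   # prints "confine all humans"
print(better_choice(policies))  # prints "assist when asked"

The machine here develops no will of its own; the outcome changes only because we changed the logic we gave it.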

teongpeng
post May 30 2010, 03:42 PM

QUOTE(Deadlocks @ May 30 2010, 01:33 PM)
I think the logic that was put in according to the Three Laws is reasonable. Aside from the flaw that you might have seen, how else, then, can this "logic" be improved to ensure humanity's total protection from harm?
*

Yes.

Rule number 1: robots may not harm humans.
Rule number 2: robots protect humans.

End.

This post has been edited by teongpeng: May 30 2010, 03:45 PM
teongpeng
post May 30 2010, 04:42 PM

QUOTE(Deadlocks @ May 30 2010, 04:26 PM)
What if the humans harm themselves without being detected by the robots? And what if protecting a human also means harming another human (advocating the use of sheer/brute force) to stop him from harming someone else? And logically speaking, wouldn't that mean the robots have already FAILED to uphold two of the rules?
*

The robot ought to find another way to prevent the harm from being done in the first place. Duh. You're not very good at problem solving, are you?

teongpeng
post May 31 2010, 06:44 PM

QUOTE(Deadlocks @ May 31 2010, 06:28 PM)
This is what you've been saying since a few posts earlier, and it's what made me ask the question:

What is that "ANOTHER" way that you're talking about?
*

What la... I'm not the robot... that's for the robot to figure out... maybe construct a protective barrier or something, or whatever... so many ways... depends on the threat.


Added on May 31, 2010, 6:46 pm
QUOTE(robertngo @ May 31 2010, 11:19 AM)
The logical outcome is to control humans so they cannot do harm to themselves and others.
*

Yeah, something like how a parent would protect a kid that always gets into trouble.


This post has been edited by teongpeng: May 31 2010, 06:46 PM
teongpeng
post May 31 2010, 06:48 PM

QUOTE(Deadlocks @ May 31 2010, 06:48 PM)
Exactly what I've pointed out. Unless the robot is omniscient about every human tendency towards self-harm, humans will always be able to commit suicide, resulting in the robot failing the First Law, and resulting in the rebellion I was talking about earlier.
*

I don't understand la... how can failing the First Law result in rebellion?


Added on May 31, 2010, 7:02 pm
Rule number 1: robots may not harm humans.
Rule number 2: robots protect humans.
Rule number 3: when there is a clash between rule 1 and rule 2, rule 1 overrides.

There. Isn't that foolproof? See, dude... the problem is in the logic.
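
A minimal Python sketch (hypothetical function names, just to illustrate the precedence idea) of how "rule 1 overrides rule 2" could be checked:

def rule1_allows(action):
    # Rule 1: robots may not harm humans.
    return not action["harms_human"]

def rule2_wants(action):
    # Rule 2: robots protect humans.
    return action["protects_human"]

def permitted(action):
    # Rule 3 (the meta-rule): on a clash, rule 1 overrides rule 2,
    # so a protective action that harms a human is still rejected.
    return rule1_allows(action) and rule2_wants(action)

print(permitted({"harms_human": False, "protects_human": True}))  # True
print(permitted({"harms_human": True, "protects_human": True}))   # False: rule 1 wins

Whether this ordering is actually foolproof is exactly what the rest of the thread argues about.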

This post has been edited by teongpeng: May 31 2010, 07:02 PM

 
