
Will the Terminator-style doomsday ever happen?, A question about AI & robotics

robertngo
post May 19 2010, 09:10 AM

Look at all my stars!!
*******
Senior Member
4,027 posts

Joined: Oct 2004


QUOTE(cherroy @ May 19 2010, 12:53 AM)
So far, no matter how high the self-awareness of an AI, it cannot beat the human brain.

Because the self-awareness of an AI is built upon receiving information, processing it, and reacting to it according to whatever presets and programs are built in. So no matter how flexible and self-aware the AI is, it cannot beat the human factors of creativity and flexibility. After all, it is the human brain that created the AI.  biggrin.gif

In other words, an AI is rigid, bound by the programs and algorithms it was given, while a human is not.
The human factor has creativity and can always take in new input for constant self-improvement.
*
If the machine became truly self-aware, it would respond to information with its own judgement, not a preset program. The massive challenge is to replicate the biological functions of the brain in non-biological components, and that would include creativity: the machine would find its own solutions to problems. Just hope those problems don't include terminating all the pesky humans laugh.gif

http://www.consciousness.it/CAI/CAI.htm

This post has been edited by robertngo: May 19 2010, 09:10 AM
nice.rider
post May 20 2010, 01:14 AM

Getting Started
**
Junior Member
109 posts

Joined: Aug 2009
QUOTE(cherroy @ May 19 2010, 12:53 AM)
So far, no matter how high the self-awareness of an AI, it cannot beat the human brain.

Because the self-awareness of an AI is built upon receiving information, processing it, and reacting to it according to whatever presets and programs are built in. So no matter how flexible and self-aware the AI is, it cannot beat the human factors of creativity and flexibility. After all, it is the human brain that created the AI.  biggrin.gif

In other words, an AI is rigid, bound by the programs and algorithms it was given, while a human is not.
The human factor has creativity and can always take in new input for constant self-improvement.
*
Yup, I believe that to this day, no AI has actually passed the Turing test.

Talking about AI, there is one computer scientist who needs to be mentioned: Alan Turing.

http://en.wikipedia.org/wiki/Turing_test
The Turing test is a proposal for a test of a machine's ability to demonstrate intelligence. It proceeds as follows: a human judge engages in a natural language conversation with one human and one machine, each of which tries to appear human. All participants are placed in isolated locations. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test. In order to test the machine's intelligence rather than its ability to render words into audio, the conversation is limited to a text-only channel such as a computer keyboard and screen.[1]

The test was proposed by Alan Turing in his 1950 paper Computing Machinery and Intelligence, which opens with the words: "I propose to consider the question, 'Can machines think?'" Since "thinking" is difficult to define, Turing chooses to "replace the question by another, which is closely related to it and is expressed in relatively unambiguous words."[2] Turing's new question is: "Are there imaginable digital computers which would do well in the [Turing test]"?[3] This question, Turing believed, is one that can actually be answered. In the remainder of the paper, he argued against all the major objections to this proposition.[4]

In the years since 1950, the test has proven to be both highly influential and widely criticized, and it is an essential concept in the philosophy of artificial intelligence.[5] To date we still do not have machines that can convincingly pass the test.[6]
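To make the setup concrete, here is a rough sketch of the protocol in Python. The machine_reply function is just a stand-in; writing one that actually produces convincing human-like text is the whole challenge:

CODE
# Rough sketch of Turing's imitation game over a text-only channel.
# machine_reply is a placeholder stand-in, not a real contender.
import random

def machine_reply(question):
    return "Hard to say. Why do you ask?"   # canned stand-in answer

def trial(question, human_answer):
    # The judge sees two anonymous, text-only answers in random order.
    answers = [("machine", machine_reply(question)),
               ("human", human_answer)]
    random.shuffle(answers)
    for label, (_, text) in zip("AB", answers):
        print("Witness %s: %s" % (label, text))
    guess = input("Which witness is the machine (A/B)? ").strip().upper()
    machine_label = "A" if answers[0][0] == "machine" else "B"
    return guess != machine_label   # True: the machine fooled the judge

If, over many such rounds, the judge's guesses are no better than chance, the machine is said to pass.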
teongpeng
post May 20 2010, 01:51 AM

Justified and Ancient
*******
Senior Member
2,003 posts

Joined: Oct 2007


Unless desire is included in the AI, I see no fear of such 'takeovers' happening of its own accord.
Deadlocks
post May 20 2010, 01:59 AM

n00b
*****
Senior Member
943 posts

Joined: Apr 2008
From: Petaling Jaya, Selangor, Malaysia.


QUOTE(teongpeng @ May 20 2010, 01:51 AM)
Unless desire is included in the AI, I see no fear of such 'takeovers' happening of its own accord.
*
Oh, but there is, according to the theory in the 2004 film "I, Robot".

In order to protect humanity as a whole, "some humans must be sacrificed and some freedoms must be surrendered", because "you charge us with your safekeeping, yet you wage wars and toxify your earth".

So the "takeover" is not motivated by desire. It is by logic. You see, the Three Laws imply that robots must ensure that no human comes to harm through either action or inaction. But since inaction wouldn't work at all (humans are perfectly capable of harming themselves anyway), the robots decided that the most logical solution was to take over the human dominion, so as to "benefit humanity as a whole", even if that logic means harm must be done and lives must be taken.

Pure logic.
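To see how that conclusion can fall out of cold arithmetic, here is a toy sketch (the policies and harm numbers are entirely invented for illustration): once harm allowed through inaction counts the same as harm caused by action, a planner minimising total harm ranks "take control" first.

CODE
# Toy harm-minimising planner with an inaction clause. Each policy maps
# to (harm the robots cause, harm they allow through inaction).
# All numbers are invented for the example.
POLICIES = {
    "do nothing":       (0, 100),  # humans keep waging wars, toxifying earth
    "obey orders only": (0, 80),
    "take control":     (10, 5),   # freedoms surrendered, less total harm
}

def total_harm(policy):
    caused, allowed = POLICIES[policy]
    return caused + allowed        # inaction clause: allowed harm counts too

print(min(POLICIES, key=total_harm))   # -> take control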

This post has been edited by Deadlocks: May 20 2010, 01:59 AM
teongpeng
post May 20 2010, 02:04 AM

Justified and Ancient
*******
Senior Member
2,003 posts

Joined: Oct 2007


QUOTE(Deadlocks @ May 20 2010, 01:59 AM)
Oh, but there is, according to the theory in the 2004 film "I, Robot".

In order to protect humanity as a whole, "some humans must be sacrificed and some freedoms must be surrendered", because "you charge us with your safekeeping, yet you wage wars and toxify your earth".

So the "takeover" is not motivated by desire. It is by logic. You see, the Three Laws imply that robots must ensure that no human comes to harm through either action or inaction. But since inaction wouldn't work at all (humans are perfectly capable of harming themselves anyway), the robots decided that the most logical solution was to take over the human dominion, so as to "benefit humanity as a whole", even if that logic means harm must be done and lives must be taken.

Pure logic.
*

I said 'of its own accord', god damn it. And that does not include our own short-sightedness when inputting commands into the computer.

Deadlocks
post May 20 2010, 02:06 AM

n00b
*****
Senior Member
943 posts

Joined: Apr 2008
From: Petaling Jaya, Selangor, Malaysia.


QUOTE(teongpeng @ May 20 2010, 02:04 AM)
I said 'of its own accord', god damn it. And that does not include our own short-sightedness when inputting commands into the computer.
*
And what if 'of its own accord' is actually its own logic? Unless, of course, the Three Laws are fundamentally flawed.

This post has been edited by Deadlocks: May 20 2010, 02:06 AM
teongpeng
post May 20 2010, 02:13 AM

Justified and Ancient
*******
Senior Member
2,003 posts

Joined: Oct 2007


QUOTE(Deadlocks @ May 20 2010, 02:06 AM)
And what if 'of its own accord' is actually its own logic? Unless, of course, the Three Laws are fundamentally flawed.
*

Its own logic is input by us, so it's still our fault if we input the wrong logic, like the one you described in the movie 'I, Robot'.
Faulty programming. What's the big deal... it happens even today.
mylife4nerzhul
post May 20 2010, 02:15 AM

Getting Started
**
Junior Member
270 posts

Joined: Apr 2009
No.
robertngo
post May 20 2010, 09:15 AM

Look at all my stars!!
*******
Senior Member
4,027 posts

Joined: Oct 2004


QUOTE(teongpeng @ May 20 2010, 02:13 AM)
Its own logic is input by us, so it's still our fault if we input the wrong logic, like the one you described in the movie 'I, Robot'.
Faulty programming. What's the big deal... it happens even today.
*
A truly self-aware machine will make decisions on its own. Scientists are actively working on robots that can learn to do things rather than being programmed to do them, so that they can solve problems they have never been programmed for. So it could happen that, once the machine learns of all the evil that has been done by humans, extermination is the only logical thing to do. icon_question.gif
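For a feel of what "learning to do things, not being programmed to do them" means, here is a minimal sketch of tabular Q-learning on a toy 5-cell corridor (environment and parameters invented purely for illustration). The robot is never told the route to the goal; it discovers it from reward alone:

CODE
# Minimal tabular Q-learning on a 5-cell corridor: start in cell 0,
# reward only for reaching cell 4. Toy example, invented for illustration.
import random

N_STATES, GOAL = 5, 4
ACTIONS = [-1, 1]                       # step left / step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1       # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != GOAL:
        # Mostly follow the best known action, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < eps else \
            max(ACTIONS, key=lambda a: Q[(s, a)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0  # the only feedback it ever gets
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The learned greedy policy: step right from every cell, straight to goal.
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])

Nobody wrote "go right" anywhere in that program; the behaviour emerges from trial and error. Scale the same idea up and you get machines whose behaviour was never explicitly specified.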


This post has been edited by robertngo: May 20 2010, 09:16 AM
arthurlwf
post May 20 2010, 03:34 PM

Look at all my stars!!
*******
Senior Member
2,546 posts

Joined: Jan 2003


QUOTE(Beastboy @ May 18 2010, 10:56 AM)
Do you think it's possible for machines to control human life one day?

We are already surrendering control of our lives to machines bit by bit, from the ECU in your car to autopilot software to the health support machines in the hospital.

Isaac Asimov wrote the 3 laws of robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

But that's sci-fi. In real life, there are no such rules. You can develop AI, robotics and internet computing into anything you fancy, unlike cloning. Nothing stops you from developing a monster robot, or software that takes down everything connected to the internet, and probably a few countries along with it.

So should developers be subject to strict ethical rules? Who's going to police it and stop rogue developers from unleashing bots that create havoc in other people's lives?

More importantly, do you think the tipping point will happen? As in the day when machines and software lock out humans and start doing their own thing, out of control, and even impose control over us for our own good - the Terminator scenario? (actually self-replicating viruses and worms are already going out of control, causing economic damage...)
*
Maybe a new robotic war... and humans are just sheep stuck between two gigantic robots (sounds like Transformers.. LOL)
Beastboy
post May 20 2010, 04:52 PM

Getting Started
**
Junior Member
242 posts

Joined: Nov 2009


QUOTE(robertngo @ May 20 2010, 09:15 AM)
A truly self-aware machine will make decisions on its own. Scientists are actively working on robots that can learn to do things rather than being programmed to do them, so that they can solve problems they have never been programmed for. So it could happen that, once the machine learns of all the evil that has been done by humans, extermination is the only logical thing to do. icon_question.gif
*
Would it be accurate to assume that artificially-spawned self-awareness is going to be the same as human self-awareness?

Humans are carbon-based. Computers are silicon-based. If left in the wild, can we assume that a silicon-based brain will develop consciousness the same way a carbon-based brain does, and adopt the same priorities in its existence?


teongpeng
post May 20 2010, 04:57 PM

Justified and Ancient
*******
Senior Member
2,003 posts

Joined: Oct 2007


QUOTE(robertngo @ May 20 2010, 09:15 AM)
A truly self-aware machine will make decisions on its own. Scientists are actively working on robots that can learn to do things rather than being programmed to do them, so that they can solve problems they have never been programmed for. So it could happen that, once the machine learns of all the evil that has been done by humans, extermination is the only logical thing to do. icon_question.gif
I know...

A self-learning computer can be dangerous. But unless they have desires, there is very little to fear about AI world domination. Well, unless it's programmed to do just that.

This post has been edited by teongpeng: May 20 2010, 04:58 PM
Beastboy
post May 20 2010, 05:33 PM

Getting Started
**
Junior Member
242 posts

Joined: Nov 2009


QUOTE(teongpeng @ May 20 2010, 04:57 PM)
But unless they have desires, there is very little to fear about AI world domination. Well, unless it's programmed to do just that.
*
In programming, I can define desire as a measurable gap that needs to be filled. For example, if I'm a battery-powered robot and my power level drops to critical, I can be programmed to look for a power source to recharge my batteries. That is no different from the human desire for food: both eating and recharging batteries serve the same function, replenishing energy.

Now if I am a self-learning, self-aware machine, I still have these gaps that need to be filled, these "desires." If survival is my prime goal and humans block all my sources of power, I may force through those blockades even if it means harming a human in the process. The big problem will come when energy sources dwindle and man and AI machines have to compete for the same resources.
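Something like this toy sketch, say (all names and numbers invented, not any real robot's API): the "desire" is just the gap between a set-point and the current level, and behaviour is whatever closes the biggest gap.

CODE
# Toy sketch of "desire as a measurable gap": drives have set-points, and
# the robot acts on the most urgent unmet one. Invented for illustration.
from dataclasses import dataclass

@dataclass
class Drive:
    name: str
    level: float        # current value, 0..1
    set_point: float    # desired value

    def urgency(self):
        return max(0.0, self.set_point - self.level)

def choose_action(drives):
    # Act on the most urgent unmet drive; idle if all are satisfied.
    neediest = max(drives, key=Drive.urgency)
    return "seek " + neediest.name if neediest.urgency() > 0 else "idle"

robot = [Drive("power source", level=0.15, set_point=0.8),
         Drive("self-repair", level=0.9, set_point=0.7)]
print(choose_action(robot))    # -> seek power source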

This post has been edited by Beastboy: May 20 2010, 05:36 PM
robertngo
post May 20 2010, 07:31 PM

Look at all my stars!!
*******
Senior Member
4,027 posts

Joined: Oct 2004


QUOTE(Beastboy @ May 20 2010, 04:52 PM)
Would it be accurate to assume that artificially-spawned self-awareness is going to be the same as human self-awareness?

Humans are carbon-based. Computers are silicon-based. If left in the wild, can we assume that a silicon-based brain will develop consciousness the same way a carbon-based brain does, and adopt the same priorities in its existence?
*
No one can know until it is developed. But scientists always model intelligence on human intelligence, so if there is ever a day that machines reach self-awareness, I believe it will most likely be modelled on human brain function.
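In practice, "modelled on human brain function" usually starts from the artificial neuron: a weighted sum of inputs pushed through a squashing function, loosely inspired by how biological neurons fire. A minimal sketch, with arbitrary example weights:

CODE
# One artificial neuron: weighted sum of inputs, then a sigmoid "firing"
# response between 0 and 1. Weights and bias are arbitrary example values.
import math

def neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

print(neuron([0.5, 0.9], weights=[1.2, -0.8], bias=0.1))   # ~0.495

Wire enough of these together in layers and you get the artificial neural networks that most brain-inspired AI research builds on.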
teongpeng
post May 20 2010, 10:01 PM

Justified and Ancient
*******
Senior Member
2,003 posts

Joined: Oct 2007


QUOTE(Beastboy @ May 20 2010, 05:33 PM)
In programming, I can define desire as a measurable gap that needs to be filled. For example, if I'm a battery-powered robot and my power level drops to critical, I can be programmed to look for a power source to recharge my batteries. That is no different from the human desire for food: both eating and recharging batteries serve the same function, replenishing energy.

Now if I am a self-learning, self-aware machine, I still have these gaps that need to be filled, these "desires." If survival is my prime goal and humans block all my sources of power, I may force through those blockades even if it means harming a human in the process. The big problem will come when energy sources dwindle and man and AI machines have to compete for the same resources.
*

What you described is a need. And a need differs from a desire. A need is actually a weakness, for it is something you cannot do without.

A desire, on the other hand, is something like "Oohh OoooooH... I would like to have one of those kickass graphics processors fastened into my belly".
Desire is therefore more dynamic and changes with the situation. With desire also comes ambition... an ambitious machine... now that's something to fear.
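In code terms the contrast might look like this (purely illustrative): a need is a hard constraint with a fixed threshold, while a desire is an open-ended list that keeps growing with the situation.

CODE
# Toy contrast between a need (hard constraint: fall below the minimum
# and the machine stops working) and a desire (open-ended preference that
# grows with exposure). Entirely illustrative.
needs = {"power": 0.2}

def needs_met(levels, minimum=0.1):
    return all(v >= minimum for v in levels.values())

desires = ["kickass graphics processor"]   # dynamic, situation-dependent

def covet(catalogue):
    # Anything new and shiny gets added to the wish list.
    desires.extend(item for item in catalogue if item not in desires)

covet(["faster CPU", "more RAM"])
print(needs_met(needs), desires)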

Darkripper
post May 20 2010, 10:16 PM

What do you expect?
******
Senior Member
1,258 posts

Joined: Dec 2008
From: /k/
Even if it happens... what about launching an EMP at them?
robertngo
post May 20 2010, 10:36 PM

Look at all my stars!!
*******
Senior Member
4,027 posts

Joined: Oct 2004


QUOTE(Darkripper @ May 20 2010, 10:16 PM)
Even if it happens... what about launching an EMP at them?
*
Electronics can be shielded from EMP. Military hardware is built to specs that include EMP protection, so if the robots in this case are made by the military, they have most likely already been hardened against EMP.
VMSmith
post May 21 2010, 02:30 AM

Getting Started
**
Junior Member
142 posts

Joined: May 2010
From: Church of All Worlds.


QUOTE(Battlestar Galactica)
The Cylons were created by Man.

They rebelled.

They evolved.

There are many copies.

And they have a plan.
Bring on the hot cylon babes. Just bring it. smile.gif
Darkripper
post May 21 2010, 03:03 AM

What do you expect?
******
Senior Member
1,258 posts

Joined: Dec 2008
From: /k/
QUOTE(robertngo @ May 20 2010, 10:36 PM)
Electronics can be shielded from EMP. Military hardware is built to specs that include EMP protection, so if the robots in this case are made by the military, they have most likely already been hardened against EMP.
*
I thought an EMP could disable most electronic circuits for a period of time... btw, does this mean EMPs are useless nowadays?
Frostlord
post May 21 2010, 03:12 AM

Regular
******
Senior Member
1,723 posts

Joined: Jun 2007


@TS, I think your question is a bit too broad, as there is no deadline.

In 100,000 years, anything could happen, unless you are suggesting that humans will destroy ourselves (or an alien invasion will) before we are destroyed by AI.

Well, for destruction by AI, there is quite a long way to go. As we can see from our current tech, we have nothing that is even 10% Terminator (ASIMO is like 0.000001% Terminator biggrin.gif).
An alien invasion, on the other hand, could happen anytime. Heck, it could even happen tomorrow, because there is no guarantee that aliens will not reach our planet soon (a few decades?). But we can be sure that in a few decades the Terminator will not exist yet.
