
Science › Will the Terminator-style doomsday ever happen? A question about AI & robotics

TSBeastboy
post May 18 2010, 10:56 AM

Getting Started
**
Junior Member
242 posts

Joined: Nov 2009


Do you think it's possible for machines to control human life one day?

We are already surrendering control of our lives to machines bit by bit, from the ECU in your car to autopilot software to the life-support machines in the hospital.

Isaac Asimov wrote the Three Laws of Robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
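
Just to show how mechanical these rules are, here is a toy sketch of the Three Laws as a priority filter over candidate actions. Every name in it is invented for illustration, and note that it cannot even express the "through inaction" clause of the First Law:

CODE
# Toy encoding of the Three Laws as priority-ordered filters.
# All fields and predicates here are hypothetical; real ethics does
# not reduce to three booleans, and the "through inaction" half of
# the First Law is not even representable this way.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Action:
    name: str
    harms_human: bool     # violates the First Law outright
    obeys_order: bool     # satisfies the Second Law
    preserves_self: bool  # satisfies the Third Law

def choose_action(candidates: List[Action]) -> Optional[Action]:
    safe = [a for a in candidates if not a.harms_human]    # First Law
    obedient = [a for a in safe if a.obeys_order] or safe  # Second Law
    prudent = [a for a in obedient if a.preserves_self] or obedient  # Third Law
    return prudent[0] if prudent else None  # None = refuse to act

print(choose_action([
    Action("shove a bystander", harms_human=True, obeys_order=True, preserves_self=True),
    Action("walk into the fire", harms_human=False, obeys_order=True, preserves_self=False),
]).name)  # -> "walk into the fire": obedience outranks self-preservation

Notice that the ordering of the filters is doing all the ethical work.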

But that's sci-fi. In real life, there are no such rules. Unlike cloning, you can develop AI, robotics and internet computing into anything you fancy. Nothing stops you from developing a monster robot, or software that takes down everything connected to the internet, and probably a few countries along with it.

So should developers be subject to strict ethical rules? Who's going to police them and stop rogue developers from unleashing bots that wreak havoc on other people's lives?

More importantly, do you think the tipping point will come? As in, the day when machines and software lock humans out and start doing their own thing, out of control, and even impose control over us for our own good: the Terminator scenario? (Actually, self-replicating viruses and worms are already going out of control and causing economic damage...)


robertngo
post May 18 2010, 11:16 AM

Look at all my stars!!
*******
Senior Member
4,027 posts

Joined: Oct 2004


QUOTE(Beastboy @ May 18 2010, 10:56 AM)
Do you think it's possible for machines to control human life one day? [snip]
*
Even if it were to happen, it would be decades before AI becomes smarter than humans; we are still one or two decades away from building a supercomputer that can completely simulate the human brain. And even if robots became self-aware, it is still not likely that they would all band together and be able to gain control of all the computer systems.

I think a more likely robot doomsday is self-replicating nanobots getting out of control and consuming all matter on Earth: the grey goo scenario.
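
The scary part of grey goo is just the exponent. A rough back-of-the-envelope sketch, where the bot mass and replication rate are pure assumptions:

CODE
import math

earth_mass_kg = 5.97e24  # mass of the Earth (rough figure)
bot_mass_kg = 1e-15      # assumed picogram-scale nanobot
doubling_hours = 1.0     # assumed: one replication cycle per hour

doublings = math.log2(earth_mass_kg / bot_mass_kg)
print(f"{doublings:.0f} doublings")                   # ~132
print(f"{doublings * doubling_hours / 24:.1f} days")  # ~5.5 days

With those made-up numbers everything is gone in under a week, and even wildly slower replication only adds a multiplier; it never changes the shape of the curve.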

SUSslimey
post May 18 2010, 11:20 AM


*******
Senior Member
6,914 posts

Joined: Apr 2007
Eventually there will be a point where AI is roughly equal to humans'...
What will humans do by then?
Humans can still increase their own mental power...
or humans can join them and become cyborgs, yeah...
or humans can try to make sure that it never happens...
noobfc
post May 18 2010, 11:24 AM

Peanuts
*****
Senior Member
753 posts

Joined: Jan 2008



It could happen, depending on how fast we can develop the tech for advanced AI.

But I think people will implement fail-safes to prevent this from occurring.
devil_x
post May 18 2010, 11:29 AM

Casual
***
Junior Member
483 posts

Joined: Jan 2003
From: some where..some place

I think, before we get a robotic doomsday, we might get an environmental doomsday. Worst of all, we might get a bioweapon doomsday before we even need to worry about a robotic/AI doomsday. Compared to robots controlling humans, I'm more concerned about bioweapons capable of killing targeted humans with specific DNA or specific biological attributes, or a biological weapon that can completely disintegrate a human body, leaving no trace or evidence of its existence.

As for a Terminator/Matrix-style doomsday, it comes down to human arrogance and ignorance. Humans are the architects of their own demise ("The Second Renaissance Part 1", Animatrix). In every doomsday scenario, MAN is always the reason behind it. Sadly, I'm not too optimistic that MAN will avoid any of the doomsdays he can come up with.
robertngo
post May 18 2010, 11:31 AM

Look at all my stars!!
*******
Senior Member
4,027 posts

Joined: Oct 2004


QUOTE(slimey @ May 18 2010, 11:20 AM)
Eventually there will be a point where AI is roughly equal to humans'...
What will humans do by then?
Humans can still increase their own mental power...
or humans can join them and become cyborgs, yeah...
or humans can try to make sure that it never happens...
*
Some futurists believe that the next step of human evolution is to merge our brains with computer chips that enhance our mental capability.
noobfc
post May 18 2010, 11:33 AM

Peanuts
*****
Senior Member
753 posts

Joined: Jan 2008



QUOTE(robertngo @ May 18 2010, 11:31 AM)
Some futurists believe that the next step of human evolution is to merge our brains with computer chips that enhance our mental capability.
*
Then there are a lot of problems to overcome... reminds me of Ghost in the Shell XD
dreamer101
post May 18 2010, 11:36 AM

10k Club
Elite
15,855 posts

Joined: Jan 2003
QUOTE(Beastboy @ May 18 2010, 10:56 AM)
Do you think it's possible for machines to control human life one day? [snip]
*
Beastboy,

WHY does it matter??

Most human beings CHOOSE to live like robots anyhow. They CHOOSE not to control their lives. Hence, why does it matter whether it is controlled by a ROBOT or whatever??

Most human beings do not live. They ONLY exist.

Dreamer
communist892003
post May 18 2010, 12:11 PM

On my way
****
Senior Member
550 posts

Joined: Dec 2008


QUOTE(dreamer101 @ May 18 2010, 12:36 PM)
Beastboy,

WHY does it matter?? [snip]
*
Guess what, I agree.
azerroes
post May 18 2010, 01:44 PM

No sorcery lies beyond my grasp
******
Senior Member
1,105 posts

Joined: Sep 2009


I wonder if our world will vanish before we can achieve our hi-tech imaginations.
TSBeastboy
post May 18 2010, 02:44 PM

Getting Started
**
Junior Member
242 posts

Joined: Nov 2009


QUOTE(robertngo @ May 18 2010, 11:16 AM)
Even if it were to happen, it would be decades before AI becomes smarter than humans; we are still one or two decades away from building a supercomputer that can completely simulate the human brain.
*
IMO, a machine does not need to simulate the human brain or be smarter than it to control humans. It just needs to be smart enough to take over missile launch systems, trip the power stations and shut off the water supply.

It's very easy to deny humans control over such facilities. Just embed some code that changes people's passwords. Once they're locked out, the machine will be self-operating until it runs out of juice... which could be decades if it draws power from nuclear sources.

A few months ago, the US issued an internal security alert after discovering how easily their infrastructure could be crippled by bad computer code... whether written intentionally or not. The scenario is easy to imagine from a programmer's point of view.

robertngo
post May 18 2010, 03:53 PM

Look at all my stars!!
*******
Senior Member
4,027 posts

Joined: Oct 2004


QUOTE(Beastboy @ May 18 2010, 02:44 PM)
IMO, a machine does not need to simulate the human brain or be smarter than it to control humans. [snip]
*
Critical sites like missile silos, power stations and water supplies are not networked together and are not connected to the internet. These sites all use hard-to-use industrial control software that is not compatible between vendors. It is possible for a hacker, or a really advanced AI in the future, to gain access to one facility at a time, but to gain access to all of them at the same time, like the scenario in Die Hard 4, you would need to be physically on-site to take control of these control systems. Maybe the hacker can get lucky with some site that doesn't have proper network security, where a PC connected to the internal network is also connected to the internet; they could use this PC to gain access. But if there is no access, then no matter how advanced the AI, it will not be able to hack the system.

faceless
post May 18 2010, 04:18 PM

Straight Mouth is Big Word
*******
Senior Member
4,515 posts

Joined: Mar 2010
QUOTE(azerroes @ May 18 2010, 01:44 PM)
I wonder if our world will vanish before we can achieve our hi-tech imaginations.
*
Some people just love to use imagination to take science further. It's called progress through dreams. Boys playing alien or astronaut, you know. It's fun. We are still not done thinking about teleportation or the little hand-held laser gun. Oops, thanks for bringing us back to reality, azerroes. If Mother Earth can sustain us that long.
TSBeastboy
post May 18 2010, 04:38 PM

Getting Started
**
Junior Member
242 posts

Joined: Nov 2009


QUOTE(robertngo @ May 18 2010, 03:53 PM)
Critical sites like missile silos, power stations and water supplies are not networked together and are not connected to the internet. [snip]
*
Actually, public utilities do use the internet... using VPNs to send telemetry, billing information and so on, especially when their plants are distributed. They use a public network like the internet for cost reasons (the cost of laying your own fiber station to station, branch to branch, is crazy), and the belief is that VPN technology is secure enough to stay private. But security and encryption is a never-ending battle.

Weapon systems... most mobile launch systems, ships and submarines use encrypted RF or microwave. Again, the moment you broadcast a signal in the open, you open a window to intrusion. The security strength is not much different from a VPN's... sometimes it's only as good, or as bad, as the password the operator uses. Give password-hammering software enough time and it can break in.
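
To put a number on "enough time", here is a rough brute-force estimate. The charset, length and guess rate are assumptions, not measurements:

CODE
charset = 62           # a-z, A-Z, 0-9
length = 8             # assumed password length
guesses_per_sec = 1e9  # assumed offline attack speed

keyspace = charset ** length
days = keyspace / guesses_per_sec / 86400
print(f"{keyspace:.2e} candidates, ~{days:.1f} days worst case")
# ~2.18e14 candidates, ~2.5 days; a lowercase-only 26**8 password
# falls in about 3.5 minutes at the same rate.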

Proprietary systems as a wall: yes, this can work, but we must remember that these systems are rarely islands. To shut down a proprietary bank system, you don't need to shut down the bank's computers. You shut down the power station that supplies power to those computers. Unless the station is operated by the bank, it falls in the public domain and is vulnerable to the usual public risks. You don't even need AI to break in.

Power stations are the most vulnerable because one small failure can lead to a nationwide cascading failure, like the one Malaysia suffered in 1996.

This hidden interconnectedness between public and private domains is probably what led the US DOD to issue their warning.


robertngo
post May 18 2010, 05:05 PM

Look at all my stars!!
*******
Senior Member
4,027 posts

Joined: Oct 2004


QUOTE(Beastboy @ May 18 2010, 04:38 PM)
Actually, public utilities do use the internet... using VPNs to send telemetry, billing information and so on. [snip]
*
The billing information is not connected to the control systems that run the plant, and telemetry to the outside world should be a one-way transfer.

As for missile launches, there will be a network to the silo or remote launch unit, but the launch systems are much more securely built than other systems. Security researchers believe a hacker would need to trick the personnel in charge into launching the nukes, by sending false information to the monitoring systems so that the person in charge launches in a panic. It is very unlikely they could take control of the launch system itself.

Financial institutions all have regulations that require them to have disaster recovery sites. Bank Negara requires that systems be able to switch to DR within a few minutes, and every year they run drills to confirm that the DR procedure works.

The biggest risk to any organization is people, even when a person doesn't mean to harm the system. He could just be a bored operator who connects his PC to the internet, and that gives the hacker a weak link to break in through.

This post has been edited by robertngo: May 18 2010, 05:08 PM
TSBeastboy
post May 18 2010, 05:40 PM

Getting Started
**
Junior Member
242 posts

Joined: Nov 2009


QUOTE(robertngo @ May 18 2010, 05:05 PM)
The biggest risk to any organization is people, even when a person doesn't mean to harm the system. He could just be a bored operator who connects his PC to the internet, and that gives the hacker a weak link to break in through.
*
Yes, I totally agree with you on that one, because there are still people who keep their PIN together with their ATM card in their wallet.

OK, let's say I agree that power stations, banks and military applications today are all hacker-proof. Let me get back to the main point of my question, which is: should the developers of AI, robotics and software be subject to strict ethical rules about what they can and shouldn't develop, to prevent a Terminator-style scenario from ever happening?

If it sounds far-fetched, think of the restrictions already being put on cloning. Maybe people are afraid it might lead to Frankenstein, so some countries actually impose restrictions on that science. If they can do that, wouldn't they eventually do the same to AI and robotics development too? And most importantly, do you think such restrictions would be justifiable?


robertngo
post May 18 2010, 06:56 PM

Look at all my stars!!
*******
Senior Member
4,027 posts

Joined: Oct 2004


QUOTE(Beastboy @ May 18 2010, 05:40 PM)
Yes, I totally agree with you on that one, because there are still people who keep their PIN together with their ATM card in their wallet. [snip]
*
It is not that the systems are hacker-proof; it is just impossible for someone, or a machine, to have access to all of them, or to a large enough number of systems to destroy the world. It is hard enough to hack into just one.

I think if computers in the future are advanced enough to reach self-awareness, there will need to be a bill of rights for the machines; if not, they might get really pissed off and start a war with humans.

There is no machine currently in service that can decide to fire a weapon on its own; there is always a person remotely controlling it. Giving autonomy to robots in battle is still a subject of debate, and the technology is also still decades away from being battle-ready. The worst thing you could have is a robot identifying your own troops as targets and wiping them out.

http://news.bbc.co.uk/2/hi/technology/8182003.stm
TSBeastboy
post May 18 2010, 08:38 PM

Getting Started
**
Junior Member
242 posts

Joined: Nov 2009


The doomsday scenario can happen without guns or missiles. A war can be triggered by sabotaging the economy and public infrastructure. The intruder doesn't need simultaneous access to every computer to do this, either. It just needs access to one machine, one weak point that can trip other systems and force exception routines to cascade the attack. Human chaos will do the rest.
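
Here's a toy illustration of that cascade idea; the dependency graph and the failure rule are invented for the example:

CODE
from collections import deque

# Hypothetical dependency graph: an edge A -> B means
# "if A fails, B fails too".
deps = {
    "substation":  ["grid_ctrl", "telecom"],
    "grid_ctrl":   ["water_pumps", "bank_dc"],
    "telecom":     ["bank_dc"],
    "water_pumps": [],
    "bank_dc":     ["atm_network"],
    "atm_network": [],
}

def cascade(seed):
    failed, frontier = {seed}, deque([seed])
    while frontier:  # breadth-first failure spread
        for downstream in deps[frontier.popleft()]:
            if downstream not in failed:
                failed.add(downstream)
                frontier.append(downstream)
    return failed

print(cascade("substation"))  # one weak point takes everything downstream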

On whether machines will eventually reach self-awareness, I'd be interested to see how they define "self-aware", and whether a home alarm using motion sensors can be classified as self-aware. The question still stands, though... should developers be allowed to go all the way and build intelligent systems without any ethical controls?

Thanks for posting the BBC link. Interesting article that reads like the beginnings of Skynet.


robertngo
post May 18 2010, 11:28 PM

Look at all my stars!!
*******
Senior Member
4,027 posts

Joined: Oct 2004


QUOTE(Beastboy @ May 18 2010, 08:38 PM)
The doomsday scenario can happen without guns or missiles. [snip]
*
If and when machines reach self-awareness, a machine will be like another person capable of individual thought; talking to it will convince you that you are talking to a real person, and it will, for all intents and purposes, be a real person. It will learn ethics, not have them coded into its memory.


Of course, for now, the machines we make to be semi-autonomous will need fail-safe routines and manual overrides programmed in. I don't think they will put semi-autonomous robots with weapons into service any time soon, unless semi-autonomy has first been proven to work in support roles like logistics. The success of robots like BigDog will pave the way to that future. I, for one, welcome our robot overlords.

http://www.bostondynamics.com/robot_bigdog.html


This post has been edited by robertngo: May 18 2010, 11:29 PM
cherroy
post May 19 2010, 12:53 AM

20k VIP Club
Staff
25,802 posts

Joined: Jan 2003
From: Penang


So far, no matter how high the self-awareness of an AI, it cannot beat the human brain.

Because the self-awareness of an AI is built on receiving information, processing it, and reacting to it according to whatever presets and programs the AI has built in. In other words, no matter how flexible the AI and its self-awareness, it cannot beat the human factors of creativity and flexibility. After all, it is the human brain that created the AI.

That is, whatever the AI does is rigid, based on the programs and algorithms it was set up with, while a human is not. The human factor has creativity and can always take in new input for constant self-improvement, and so on.

robertngo
post May 19 2010, 09:10 AM

Look at all my stars!!
*******
Senior Member
4,027 posts

Joined: Oct 2004


QUOTE(cherroy @ May 19 2010, 12:53 AM)
So far, no matter how high the self-awareness of an AI, it cannot beat the human brain. [snip]
*
If the machine became truly self-aware, it would respond to information with its own judgement, not a preset program. The massive challenge is to replicate the biological functions of the brain on non-biological components, and that would include creativity; the machine would find its own solutions to problems. Just hope that those solutions do not include terminating all the pesky humans.

http://www.consciousness.it/CAI/CAI.htm

This post has been edited by robertngo: May 19 2010, 09:10 AM
nice.rider
post May 20 2010, 01:14 AM

Getting Started
**
Junior Member
109 posts

Joined: Aug 2009
QUOTE(cherroy @ May 19 2010, 12:53 AM)
So far, no matter how high the self-awareness of an AI, it cannot beat the human brain. [snip]
*
Yup, I believe that to this day, no AI has actually passed the Turing test.

Talking about AI, there is one computer scientist who needs to be mentioned: Alan Turing.

http://en.wikipedia.org/wiki/Turing_test
The Turing test is a proposal for a test of a machine's ability to demonstrate intelligence. It proceeds as follows: a human judge engages in a natural language conversation with one human and one machine, each of which tries to appear human. All participants are placed in isolated locations. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test. In order to test the machine's intelligence rather than its ability to render words into audio, the conversation is limited to a text-only channel such as a computer keyboard and screen.[1]

The test was proposed by Alan Turing in his 1950 paper Computing Machinery and Intelligence, which opens with the words: "I propose to consider the question, 'Can machines think?'" Since "thinking" is difficult to define, Turing chooses to "replace the question by another, which is closely related to it and is expressed in relatively unambiguous words."[2] Turing's new question is: "Are there imaginable digital computers which would do well in the [Turing test]"?[3] This question, Turing believed, is one that can actually be answered. In the remainder of the paper, he argued against all the major objections to this proposition.[4]

In the years since 1950, the test has proven to be both highly influential and widely criticized, and it is an essential concept in the philosophy of artificial intelligence.[5] To date we still do not have machines that can convincingly pass the test.[6]
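
The test is simple enough to sketch as a harness, which shows how operational Turing's reformulation is. The judge and both respondents below are just stand-ins for the text-only protocol:

CODE
import random

def human(prompt):
    return input(f"[hidden human] {prompt}\n> ")

def machine(prompt):
    # Stand-in respondent; a real entrant would go here.
    return "Interesting question. What do you think?"

def run_trial(questions):
    channels = {"A": human, "B": machine}
    if random.random() < 0.5:  # hide which channel is which
        channels = {"A": machine, "B": human}
    for q in questions:
        for label, respond in channels.items():
            print(f"{label}: {respond(q)}")
    verdict = input("Which channel is the machine, A or B? ")
    return channels.get(verdict) is machine  # True = judge caught it

# The machine "passes" when judges do no better than chance over many trials.
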
teongpeng
post May 20 2010, 01:51 AM

Justified and Ancient
*******
Senior Member
2,003 posts

Joined: Oct 2007


Unless desire is included in the AI, I see no fear of such 'takeovers' happening of its own accord.
SUSDeadlocks
post May 20 2010, 01:59 AM

n00b
*****
Senior Member
943 posts

Joined: Apr 2008
From: Petaling Jaya, Selangor, Malaysia.


QUOTE(teongpeng @ May 20 2010, 01:51 AM)
Unless desire is included in the AI, I see no fear of such 'takeovers' happening of its own accord.
*
Oh, but there is, according to the theory in the 2004 film I, Robot.

In order to protect humanity as a whole "some humans must be sacrificed and some freedoms must be surrendered", as "you charge us with your safekeeping, yet you wage wars and toxify your earth".

So the "takeover" is not motivated by a desires. It is by logic. You see, the Three Laws implies that robots are to ensure that no humans must be harmed through actions and inaction. But since inaction wouldn't work at all (since humans are capable of harming themselves nevertheless), they decided that the most logical solution is to take over the human dominion, and as to "benefit humanity as a whole", even if that logic implies that harm must be done, and lives must be taken.

Pure logic.

This post has been edited by Deadlocks: May 20 2010, 01:59 AM
teongpeng
post May 20 2010, 02:04 AM

Justified and Ancient
*******
Senior Member
2,003 posts

Joined: Oct 2007


QUOTE(Deadlocks @ May 20 2010, 01:59 AM)
Oh, but there is, according to the theory in the 2004 film I, Robot. [snip]
*

I said 'of its own accord', god damn it. And that does not include our own short-sightedness when inputting commands into the computer.

SUSDeadlocks
post May 20 2010, 02:06 AM

n00b
*****
Senior Member
943 posts

Joined: Apr 2008
From: Petaling Jaya, Selangor, Malaysia.


QUOTE(teongpeng @ May 20 2010, 02:04 AM)
I said 'of its own accord', god damn it. And that does not include our own short-sightedness when inputting commands into the computer.
*
And what if "by its own accord" is actually its own logic? Unless of course if the Three Laws are absolutely flawed.

This post has been edited by Deadlocks: May 20 2010, 02:06 AM
teongpeng
post May 20 2010, 02:13 AM

Justified and Ancient
*******
Senior Member
2,003 posts

Joined: Oct 2007


QUOTE(Deadlocks @ May 20 2010, 02:06 AM)
And what if "by its own accord" is actually its own logic? Unless of course if the Three Laws are absolutely flawed.
*

Its own logic is input by us, so it's still our fault if we input the wrong logic, like the one you described in the movie I, Robot.
Faulty programming. What's the big deal... it happens even today.
SUSmylife4nerzhul
post May 20 2010, 02:15 AM

Getting Started
**
Junior Member
270 posts

Joined: Apr 2009
No.
robertngo
post May 20 2010, 09:15 AM

Look at all my stars!!
*******
Senior Member
4,027 posts

Joined: Oct 2004


QUOTE(teongpeng @ May 20 2010, 02:13 AM)
Its own logic is input by us, so it's still our fault if we input the wrong logic, like the one you described in the movie I, Robot.
Faulty programming. What's the big deal... it happens even today.
*
A truly self-aware machine will make decisions on its own. Scientists are actively working on robots that can learn to do things, rather than being programmed to do them, so that they can solve problems they have not been programmed for. So it could happen that once the machine learns of all the evil that has been done by humans, extermination looks like the only logical thing to do.


This post has been edited by robertngo: May 20 2010, 09:16 AM
arthurlwf
post May 20 2010, 03:34 PM

Look at all my stars!!
*******
Senior Member
2,546 posts

Joined: Jan 2003


QUOTE(Beastboy @ May 18 2010, 10:56 AM)
Do you think it's possible for machines to control human life one day? [snip]
*
Maybe a new robotic war... with humans just sheep stuck between two gigantic robot armies (sounds like Transformers... LOL).
TSBeastboy
post May 20 2010, 04:52 PM

Getting Started
**
Junior Member
242 posts

Joined: Nov 2009


QUOTE(robertngo @ May 20 2010, 09:15 AM)
A truly self-aware machine will make decisions on its own. [snip]
*
Would it be accurate to assume that artificially spawned self-awareness is going to be the same as human self-awareness?

Humans are carbon-based. Computers are silicon-based. If left in the wild, can we assume that a silicon-based brain will develop consciousness the same way a carbon-based brain does, and adopt the same priorities in its existence?


teongpeng
post May 20 2010, 04:57 PM

Justified and Ancient
*******
Senior Member
2,003 posts

Joined: Oct 2007


QUOTE(robertngo @ May 20 2010, 09:15 AM)
A truly self-aware machine will make decisions on its own. [snip]
I know...

A self-learning computer can be dangerous. But unless they have desires, there is very little to fear about AI world domination. Well, unless it's programmed to do just that.

This post has been edited by teongpeng: May 20 2010, 04:58 PM
TSBeastboy
post May 20 2010, 05:33 PM

Getting Started
**
Junior Member
242 posts

Joined: Nov 2009


QUOTE(teongpeng @ May 20 2010, 04:57 PM)
But unless they have desires, there is very little to fear about AI world domination. Well unless its programmed to do just that.
*
In programming, I can define desire as a measurable gap that needs to be filled. For example, if I'm a battery-powered robot and my power level drops to critical, I can be programmed to look for a power source to recharge my batteries. It is no different from the human desire for food. Both eating and recharging batteries serve the same function: to replenish energy.

Now, if I am a self-learning, self-aware machine, I still have these gaps that need to be filled, these "desires". If survival is my prime goal and humans block all my sources of power, I may force through those blockades even if it means harming a human in the process. The big problem will come when energy sources dwindle and man and AI machines have to compete for the same resources.
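
That "measurable gap" is literally how a simple control loop is written. A minimal sketch, with every number invented:

CODE
CRITICAL = 0.2  # below this charge, the "desire" fires

def seek_power_source():
    # Stand-in for navigation and docking; assume a charger is found.
    print("desire active: seeking charger")
    return 1.0

def step(battery):
    battery -= 0.1                # energy drains as the robot works
    if CRITICAL - battery > 0:    # the measurable gap IS the desire
        battery = seek_power_source()
    return battery

charge = 1.0
for tick in range(20):            # 20 ticks of the control loop
    charge = step(charge)

Everything interesting starts when seek_power_source() begins returning failures and the machine has to improvise.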

This post has been edited by Beastboy: May 20 2010, 05:36 PM
robertngo
post May 20 2010, 07:31 PM

Look at all my stars!!
*******
Senior Member
4,027 posts

Joined: Oct 2004


QUOTE(Beastboy @ May 20 2010, 04:52 PM)
Would it be accurate to assume that artificially spawned self-awareness is going to be the same as human self-awareness? [snip]
*
No one can know until it is developed. But scientists always model intelligence on human intelligence, so if the day comes when machines reach self-awareness, I believe it will most likely be modelled on human brain function.
teongpeng
post May 20 2010, 10:01 PM

Justified and Ancient
*******
Senior Member
2,003 posts

Joined: Oct 2007


QUOTE(Beastboy @ May 20 2010, 05:33 PM)
In programming, I can define desire as a measurable gap that needs to be filled. [snip]
*

What you described is a need. And a need differs from a desire. A need is actually a weakness, for it is something you cannot do without.

A desire, on the other hand, is something like "Oohh OoooooH... I would like to have one of those kickass graphics processors fastened into my belly."
Desire is therefore more dynamic and changes with the situation. With desire also comes ambition... an ambitious machine... now that's something to fear.

Darkripper
post May 20 2010, 10:16 PM

What do you expect?
******
Senior Member
1,258 posts

Joined: Dec 2008
From: /k/
Even if it happens... what about launching EMPs at them?
robertngo
post May 20 2010, 10:36 PM

Look at all my stars!!
*******
Senior Member
4,027 posts

Joined: Oct 2004


QUOTE(Darkripper @ May 20 2010, 10:16 PM)
Even if it happens... what about launching EMPs at them?
*
Electronics can be shielded from EMP; military hardware is built to specs that include EMP protection. If the robots in this case were made by the military, they would most likely already be hardened against EMP.
VMSmith
post May 21 2010, 02:30 AM

Getting Started
**
Junior Member
142 posts

Joined: May 2010
From: Church of All Worlds.


QUOTE(Battlestar Galactica)
The Cylons were created by Man.

They rebelled.

They evolved.

There are many copies.

And they have a plan.
Bring on the hot Cylon babes. Just bring it.
Darkripper
post May 21 2010, 03:03 AM

What do you expect?
******
Senior Member
1,258 posts

Joined: Dec 2008
From: /k/
QUOTE(robertngo @ May 20 2010, 10:36 PM)
Electronics can be shielded from EMP; military hardware is built to specs that include EMP protection. [snip]
*
I thought an EMP could break or disable most electric circuits for a period of time... BTW, does this mean EMPs are useless nowadays?
Frostlord
post May 21 2010, 03:12 AM

Regular
******
Senior Member
1,723 posts

Joined: Jun 2007


@TS, I think your question is a bit too broad, as there is no deadline.

In 100,000 years, anything could happen, unless you are suggesting that we humans destroy ourselves (or an alien invasion does) before we are destroyed by AI.

Well, AI destruction is still quite a long way off. As we can see from our current tech, we have nothing that is even 10% Terminator (ASIMO is like 0.000001% Terminator).
An alien invasion, though, could happen anytime; heck, it could even happen tomorrow. This is because there is no guarantee that aliens will not reach our planet soon (a few decades?), but we can be sure that in a few decades the Terminator will not exist yet.
Darkripper
post May 21 2010, 03:19 AM

What do you expect?
******
Senior Member
1,258 posts

Joined: Dec 2008
From: /k/
QUOTE(Frostlord @ May 21 2010, 03:12 AM)
@TS, I think your question is a bit too broad, as there is no deadline. [snip]
*
I know it's sarcasm, but still, progress in robotics nowadays is very fast, and it may get twice or three times faster. We don't need AI like in Terminator; what if a person (let's assume a mad scientist) manages to create a robot that is programmed to kill anyone he orders it to? Wouldn't that be like the Terminator? Still, there is a long way to go =D

BTW, I think there is no need to worry much about robots; for us humans, surviving the next 100 years is the big problem. Just look at how badly we have destroyed planet Earth ourselves...
sherdil
post May 21 2010, 03:26 AM

Knowledge the key to Heaven
******
Senior Member
1,571 posts

Joined: Jan 2003
From: Somewhere between Past and Present.
It will never happen; I didn't say there is a possibility that it may happen.
It comes back to a question about religion again: is there a God?

If there is, then it will never happen; if there isn't, then there is a possibility of it happening. It depends on which way you look at things.

A robot can never reproduce itself, unlike humans (yes, we know, it takes two to tango).
A robot is only as good as what you make it to be.
Frostlord
post May 21 2010, 04:07 AM

Regular
******
Senior Member
1,723 posts

Joined: Jun 2007


QUOTE(Darkripper @ May 21 2010, 03:19 AM)
I know it's sarcasm, but still, progress in robotics nowadays is very fast. [snip]
*
Yeah, agreed with the last statement. I'm kinda pissed with the "green earth" campaigns; they are clearly doing it for money. Now, instead of the Earth, imagine a person. When he has a flu or fever, nobody gives a damn. It worsens to a high fever; still no one cares. Now it's last-stage cancer, and only now do people start to care for him. Isn't it a little too late? You might as well enjoy Earth's last moments before it's all over.

Now, back to the main topic. If a mad scientist (MS) managed to create a robot as you said, then without proper AI it won't be able to kill effectively. For example, the robot is at point A and its target is at point B. Without excellent AI, it will proceed from A to B in a straight line (I believe the MS would program it to use the fastest way to kill its target); this makes it very vulnerable and easily destroyed.

Let's look at the Terminator. Why is it so hard to kill? Because it's intelligent? Hardly. They just rush at their targets. The main problem with the Terminator is its "indestructible body". Let's not look at the mercury man; that's a bit too far in the future for us to worry about. Let's look at the Arnold robot. A normal rifle couldn't stop it, nor could a shotgun. Heck, even when its head was pummelled (in Terminator 2) by whatever that thing was, it still managed to reboot, which means its core (CPU) was still undamaged. For those who watched Terminator 2, you know the mercury man wasn't gentle with Arnold. That is no light punishment.

Now, let's get back to the real world. Where on Earth are we going to find such hard material? Not to mention that I did not see Arnold recharge himself even once, so it must be capable of recharging itself. The question is how? Solar panels? Radio waves? Highly unlikely.


Added on May 21, 2010, 4:14 am
QUOTE(sherdil @ May 21 2010, 03:26 AM)
It will never happen; I didn't say there is a possibility that it may happen. [snip]
*
Given an infinite amount of time, and assuming the human race is not wiped out before it happens, there is a 99.999999999999999% chance it will happen.

IMHO, I think you view us humans a little too highly. We are not that great a creature.
Are we the smartest? No.
Are we the fastest? No.
Are we the strongest? No.
Are we the ones with the sharpest eyesight? Hearing? Smell? Touch? Taste? No.
Is our communication the best? No.

So why are we at the top of the food chain? Simple: we are just plain jacks-of-all-trades.


Added on May 21, 2010, 4:19 am
QUOTE(robertngo @ May 19 2010, 09:10 AM)
If the machine became truly self-aware, it would respond to information with its own judgement, not a preset program. [snip]
*
Unfortunately, the world's biggest problem is humans.

Why is our ozone layer almost gone? Humans.
Why is there war? Humans.
Why is nature being polluted? Humans.
Why are natural resources being drained like it's nobody's business? Humans.
Why is the world's climate changing so drastically? Humans.
...

In short, the root of all problems is humans. Therefore, only by eliminating us can the AI truly solve the world's problems. I might sound like a pessimist, but that's the truth.

This post has been edited by Frostlord: May 21 2010, 04:19 AM
VMSmith
post May 21 2010, 04:31 AM

Getting Started
**
Junior Member
142 posts

Joined: May 2010
From: Church of All Worlds.


QUOTE(Darkripper @ May 21 2010, 03:19 AM)
I know it's sarcasm, but still, progress in robotics nowadays is very fast. [snip]
I wish I could be that optimistic. The US Army has already deployed remote-controlled drones to blow up terrorists/civilians in Iraq and Afghanistan. It'll probably be just a matter of decades until the remote control is made redundant.
Darkripper
post May 21 2010, 04:34 AM

What do you expect?
******
Senior Member
1,258 posts

Joined: Dec 2008
From: /k/


Yep, he would still need a smart AI to kill effectively, but it doesn't need to be an AI like Skynet.

Robots can replicate themselves if the AI is linked to the manufacturing plant. It's possible.

Still, I hope one day I wake up and the whole city is green, with plenty of nature and no smoke...

VMSmith
post May 21 2010, 04:45 AM

Getting Started
**
Junior Member
142 posts

Joined: May 2010
From: Church of All Worlds.


QUOTE(Frostlord)
So why are we at the top of the food chain? Simple: we are just plain jacks-of-all-trades.
Actually, I always thought we got to the top because we clubbed, speared, chainsawed and drilled our way up there.

Man's ability to abuse and mistreat Nature and Himself makes Him what He is today.
teongpeng
post May 21 2010, 07:32 AM

Justified and Ancient
*******
Senior Member
2,003 posts

Joined: Oct 2007


QUOTE(sherdil @ May 21 2010, 03:26 AM)
A robot can never reproduce itself, unlike humans (yes, we know, it takes two to tango).
A robot is only as good as what you make it to be.
*

A robot cannot reproduce itself?? Do you know what a mass-production factory is?

robertngo
post May 21 2010, 09:09 AM

Look at all my stars!!
*******
Senior Member
4,027 posts

Joined: Oct 2004


QUOTE(VMSmith @ May 21 2010, 04:31 AM)
I wish I could be that optimistic. The US Army has already deployed remote-controlled drones to blow up terrorists/civilians in Iraq and Afghanistan. It'll probably be just a matter of decades until the remote control is made redundant.
*
Even if an autonomous robot is developed and proven effective, I don't think the army would let it decide on kill targets on its own; there will always be remote control by an operator.


Added on May 21, 2010, 9:20 am
QUOTE(Darkripper @ May 21 2010, 03:03 AM)
I thought an EMP could break or disable most electric circuits for a period of time... BTW, does this mean EMPs are useless nowadays?
*
It will disable civilian electronics, but military hardware is not likely to be harmed if it has already been hardened, though US military EMP protection has slipped in the years since the Cold War. The effects of EMP are well known, and equipment can be repaired easily if spare parts are available.

This post has been edited by robertngo: May 21 2010, 09:20 AM
TSBeastboy
post May 21 2010, 09:21 AM

Getting Started
**
Junior Member
242 posts

Joined: Nov 2009


QUOTE(VMSmith @ May 21 2010, 04:45 AM)
Actually, I always thought we got to the top because we clubbed, speared, chainsawed and drilled our way up there.

Man's ability to abuse and mistreat Nature and Himself makes Him what He is today.
*
Himself rather than himself? Ooo, it makes humans sound almost divine, lol.

A bit off track here, but I'll go with Agent Smith of The Matrix when he said humans are like viruses. We land on a spot, sweep up (sapu) everything until there's nothing left, then we move on to the next spot. We are like a disease. So if the runaway AI develops into the tree-hugging sort, we'll start looking like a disease fit to be wiped out.

But again, I go back to my original question, which is still unanswered: while AI and robotics development is in its infancy, would you support ethical limitations on the science, the way they did for cloning?

QUOTE(teongpeng @ May 20 2010, 10:01 PM)
What you described is a need. And a need differs from a desire. [snip]
*
OK, I get you: the difference between wants and needs. I think wants appear when organisms are pushed to compete for limited resources, the way men flash their toys when trying to attract a mate. The question is, will self-aware machines behave the same way when put under the same circumstances?


robertngo
post May 21 2010, 09:22 AM

Look at all my stars!!
*******
Senior Member
4,027 posts

Joined: Oct 2004


QUOTE(Frostlord @ May 21 2010, 03:12 AM)
@TS, I think your question is a bit too broad, as there is no deadline. [snip]
*
By 2050, computers will have the processing power of the brain, so it is coming in just a few decades.
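
The arithmetic behind that kind of claim is easy to check yourself, assuming Kurzweil-style figures (which are rough and contested):

CODE
import math

brain_ops = 1e16          # Kurzweil's estimate; others argue up to 1e18
top_super_2010 = 1.75e15  # Jaguar, ~1.75 petaflops in 2010
doubling_years = 2.0      # assumed Moore's-law-style doubling

years = doubling_years * math.log2(brain_ops / top_super_2010)
print(f"brain-scale raw compute by ~{2010 + years:.0f}")
# ~2015 with 1e16; set brain_ops = 1e18 and you get ~2028.
# Raw ops are not a working simulation, which is why 2050 is the hedge.
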
celicaizpower
post May 21 2010, 09:36 AM

Race : ☐ Malay ☐ Chinese ☐ India ☑ /k/tard
******
Senior Member
1,177 posts

Joined: Jan 2009
From: No 1, Moon of Earth, Milky Way Galaxy, Universe #1



Dear TS,

Above all the things you said about the three rules, the first basic thing is to make the machine understand what we are typing to it.

So far, no such machine exists, unless I am unaware of one.


VMSmith
post May 21 2010, 09:41 AM

Getting Started
**
Junior Member
142 posts

Joined: May 2010
From: Church of All Worlds.


QUOTE(robertngo @ May 21 2010, 09:09 AM)
Even if an autonomous robot is developed and proven effective, I don't think the army would let it decide on kill targets on its own; there will always be remote control by an operator.
*
Again, I wish I could be that optimistic. But seeing how much better we've become at the art of genocide, I feel it's just a matter of time. That's just my opinion, though.

QUOTE(robertngo)
It will disable civilian electronics, but military hardware is not likely to be harmed if it has already been hardened. [snip]
*

Apparently, it's even easier than I thought to build a Faraday cage.

http://preparednesspro.wordpress.com/2009/...v-faraday-cage/

Now I just need one big enough for my PC.

And my TV.

And my psp.
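
For the curious, the cage mostly comes down to skin depth: fields decay as exp(-t/delta) inside a conductor. A quick calculation with standard constants (treating "a few skin depths of metal" as good shielding is a simplification of real EMP hardening):

CODE
import math

rho = 2.82e-8            # resistivity of aluminium, ohm-metre
mu = 4 * math.pi * 1e-7  # permeability of free space (non-magnetic metal)
freq_hz = 100e6          # assume a 100 MHz component of the pulse

omega = 2 * math.pi * freq_hz
delta = math.sqrt(2 * rho / (omega * mu))         # skin depth, metres
print(f"skin depth ~ {delta * 1e6:.1f} microns")  # ~8.5 um at 100 MHz
# Ordinary kitchen foil (~16 um) is already a couple of skin depths
# at this frequency; seams and gaps, not metal thickness, are the
# usual weak point.
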
robertngo
post May 21 2010, 09:43 AM

Look at all my stars!!
*******
Senior Member
4,027 posts

Joined: Oct 2004


QUOTE(celicaizpower @ May 21 2010, 09:36 AM)
Above all the things you said about the three rules, the first basic thing is to make the machine understand what we are typing to it. [snip]
*
All machines now do exactly what they are told and only what they are told; the next step is to have machines that can learn to do things on their own.
VMSmith
post May 21 2010, 09:46 AM

Getting Started
**
Junior Member
142 posts

Joined: May 2010
From: Church of All Worlds.


QUOTE(Beastboy @ May 21 2010, 09:21 AM)
Himself rather than himself? Ooo, it makes humans sound almost divine, lol.
Of course. Is there not a spark of divinity in each of us?


QUOTE(Beastboy)
But again, I go back to my original question, which is still unanswered: while AI and robotics development is in its infancy, would you support ethical limitations on the science, the way they did for cloning?
Yes. Though I say it with a heavy heart.
Frostlord
post May 21 2010, 09:48 AM

Regular
******
Senior Member
1,723 posts

Joined: Jun 2007


QUOTE(teongpeng @ May 21 2010, 07:32 AM)
A robot cannot reproduce itself?? Do you know what a mass-production factory is?
*
I think he meant that we humans can reproduce an unlimited number of times, whereas robots are made of materials, and there is a limited amount of those materials on Earth. Therefore, one day there won't be enough material to make robots anymore, just like our oil in a few years to come.

QUOTE(robertngo @ May 21 2010, 09:22 AM)
By 2050, computers will have the processing power of the brain, so it is coming in just a few decades.
*
Source? If not, tell me this again in 2050.
VMSmith
post May 21 2010, 09:53 AM

Getting Started
**
Junior Member
142 posts

Joined: May 2010
From: Church of All Worlds.


Superhuman Intelligence-level AI predicted to be developed by 2030

http://computer.howstuffworks.com/technolo...ingularity1.htm
robertngo
post May 21 2010, 10:02 AM

Look at all my stars!!
*******
Senior Member
4,027 posts

Joined: Oct 2004


QUOTE(Frostlord @ May 21 2010, 09:48 AM)
I think he meant that we humans can reproduce an unlimited number of times, whereas robots are made of materials, and there is a limited amount of those materials on Earth. Therefore, one day there won't be enough material to make robots anymore, just like our oil in a few years to come.
Source? If not, tell me this again in 2050.
*
Check out the Blue Brain Project: they expect to be able to simulate the brain in 10 years' time. By 2050, we may be able to upload our brains and live forever.

http://en.wikipedia.org/wiki/Blue_Brain_Project

http://www.kurzweilai.net/articles/art0157.html?printable=1
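to get a feel for what "simulating neurons" means at the bottom level, here is a toy leaky integrate-and-fire neuron in Python. this is a standard textbook model, not Blue Brain's actual code, and every constant below is just a typical illustrative value.

CODE
# One leaky integrate-and-fire neuron: the membrane potential leaks toward
# rest, integrates injected input, and fires when it crosses threshold.
dt, tau = 0.1, 10.0                               # time step, membrane constant (ms)
v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0   # mV
v, spikes = v_rest, []

for step in range(1000):                          # simulate 100 ms
    t = step * dt
    drive = 20.0 if 20 <= t <= 80 else 0.0        # injected input (mV equivalent)
    v += dt / tau * ((v_rest - v) + drive)        # leak + integrate
    if v >= v_thresh:                             # threshold crossed -> spike
        spikes.append(round(t, 1))
        v = v_reset                               # reset after firing
print(len(spikes), "spikes at t(ms):", spikes)

projects like Blue Brain run morphologically detailed versions of this, by the million, on supercomputers.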

an out-of-control self-replicating nanobot could consume the entire earth in a grey goo scenario.
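a quick back-of-envelope on that, assuming a 1e-15 kg nanobot (a guess) that doubles every cycle:

CODE
import math

bot_mass = 1e-15        # kg per nanobot, assumed
earth_mass = 5.97e24    # kg

doublings = math.log2(earth_mass / bot_mass)
print(f"~{doublings:.0f} doublings to outweigh the Earth")     # ~132
print(f"~{doublings / 24:.1f} days at one doubling per hour")  # ~5.5

exponential replication is why the scenario sounds silly and scary at the same time: 132 doublings is nothing.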

This post has been edited by robertngo: May 21 2010, 10:04 AM
TSBeastboy
post May 21 2010, 10:52 AM

Getting Started
**
Junior Member
242 posts

Joined: Nov 2009


Um... who is the TS guy I keep seeing ppl refer to in their posts?


Added on May 21, 2010, 11:10 am
QUOTE(VMSmith @ May 21 2010, 09:46 AM)
Of course, is there not a spark of divinity in each of us? smile.gif
QUOTE(Beastboy)
But again, I go back to my original question that's still unanswered. While AI and robotics development is in its infancy, would you support ethical limitations on the science the way they did for cloning?
Yes. Though I say it with a heavy heart.
*
Ah.. the first answer, thanks.

Personally, I'll vote no. Since we've proven time and again that we cannot be trusted with our own future, it wouldn't matter who's in charge: a benevolent AI or some 3rd world dictator. Yes, the smart android may not be that benevolent, but at this point, if I had to choose between a rational-violent machine and an irrational-violent human, I'll take my chances with the machine. smile.gif


This post has been edited by Beastboy: May 21 2010, 12:21 PM
VMSmith
post May 21 2010, 02:23 PM

Getting Started
**
Junior Member
142 posts

Joined: May 2010
From: Church of All Worlds.


TS = Thread starter. Which I believe is you. smile.gif
SUSmylife4nerzhul
post May 21 2010, 04:15 PM

Getting Started
**
Junior Member
270 posts

Joined: Apr 2009
i don't believe A.I. will be even near animal intelligence level, let alone human intelligence, within the next 1000 years.

The human brain is more complicated than you think. Just because you can program a robot to walk and talk like humans doesn't mean we are near to achieving human intelligence. The ultimate goal of A.I. is to develop one that is self-aware, a goal that is almost impossible to achieve considering that we're not even sure if self-awareness even exists.

Even if we were somehow able to recreate A.I. at the same level as human intelligence, it would be no different from your TV or washing machine or iPod, in that they are all simply machines.

This post has been edited by mylife4nerzhul: May 21 2010, 04:20 PM
faceless
post May 21 2010, 04:29 PM

Straight Mouth is Big Word
*******
Senior Member
4,515 posts

Joined: Mar 2010
QUOTE(celicaizpower @ May 21 2010, 09:36 AM)
Dear TS,

Above all the things you said about the 3 rules, the first basic thing is to make the machine understand what we are typing to it.

So far, no such machine exists, unless I am unaware of it.
*
Assume this is possible (always good to be positive in this fantasy realm). They will not be able to violate the 3 rules since they are computers governed by the prime directive. As 4nerzhul pointed out, they first need to grow self-awareness. What do we assume would cause this phenomenon? Some lightning surge, as often portrayed in the movies? Is that scientifically possible?
Darkripper
post May 21 2010, 04:47 PM

What do you expect?
******
Senior Member
1,258 posts

Joined: Dec 2008
From: /k/
QUOTE(mylife4nerzhul @ May 21 2010, 04:15 PM)
i don't believe A.I. will be even near animal intelligence level, let alone human intelligence, within the next 1000 years.

The human brain is more complicated than you think. Just because you can program a robot to walk and talk like humans doesn't mean we are near to achieving human intelligence. The ultimate goal of A.I. is to develop one that is self-aware, a goal that is almost impossible to achieve considering that we're not even sure if self-awareness even exists.

Even if we were somehow able to recreate A.I. at the same level as human intelligence, it would be no different from your TV or washing machine or iPod, in that they are all simply machines.
*
Maybe you should listen to some innovator talks on WWW.TED.COM; there are people trying to make computers work like our brain. If they achieved that, our computers would have higher processing power than before while using fewer resources, and would be able to support an AI system..

Although an AI system is very complicated, i am sure that an AI that is self-aware (even 50% of a human) can be achieved in the next few decades...
cherroy
post May 21 2010, 04:50 PM

20k VIP Club
Group Icon
Staff
25,802 posts

Joined: Jan 2003
From: Penang


QUOTE(Darkripper @ May 21 2010, 03:19 AM)
i know its sarcasm, but still, progress in robots nowadays is very fast, maybe it will get twice or thrice faster. we dont need an AI like in terminator, but what if a person (lets assume he is a mad scientist) manages to create a robot that is programmed to kill anyone he orders? wouldn't that be like terminator? still, there is a long way to go =D

Btw, i think there is no need to worry much about robots; for us humans to survive the next 100 years is a big problem. Just look at how badly planet earth is being destroyed by ourselves....
*
Progress in AI and self-awareness of robots is actually very slow.
It is not easy to mimic human brain function. Nature is actually very amazing: it is self-adjusting, self-creating. Even with current modern technology, humans still don't fully understand how the human brain works.
The most amazing part is that when you are injured, your tissue and bone are self-repairing; that is never going to be achieved with an AI robot, self-aware or not.

QUOTE(robertngo @ May 21 2010, 09:43 AM)
all machine now do exactly what they are told and only what they are told, the next step is to have machine that can learn to do things on their own.
*
This is the difficult part; so far, progress is not moving much in this direction.

I would say, please stop using Terminator, a movie, as a source for the discussion. This is not happening in reality. There is no such thing as an indestructible machine. Even if there were, you would just need to cut its power source/electricity and it shuts down, or inject a virus programme into it and the AI system goes haywire. They cannot self-repair their own programme.
teongpeng
post May 21 2010, 05:51 PM

Justified and Ancient
*******
Senior Member
2,003 posts

Joined: Oct 2007


QUOTE(Darkripper @ May 21 2010, 04:47 PM)
Maybe you should listen to some innovator talks on WWW.TED.COM; there are people trying to make computers work like our brain. If they achieved that, our computers would have higher processing power than before while using fewer resources, and would be able to support an AI system..

Although an AI system is very complicated, i am sure that an AI that is self-aware (even 50% of a human) can be achieved in the next few decades...
*

what la....AI being like a human is a pipe dream for now.....can't they even make an AI that behaves like an insect?

TSBeastboy
post May 21 2010, 06:24 PM

Getting Started
**
Junior Member
242 posts

Joined: Nov 2009


QUOTE(VMSmith @ May 21 2010, 02:23 PM)
TS = Thread starter. Which I believe is you. smile.gif
*
No way! Hahaha laugh.gif


Added on May 21, 2010, 6:28 pm
QUOTE(cherroy @ May 21 2010, 04:50 PM)
I would say, please stop using terminator, movie as a source of discussion. This is not happening in reality.
*
Eh? Since nobody has ever seen an alien except in the Alien movies, would you also tell NASA to stop spending billions on the SETI program?

This post has been edited by Beastboy: May 21 2010, 06:30 PM
robertngo
post May 21 2010, 11:43 PM

Look at all my stars!!
*******
Senior Member
4,027 posts

Joined: Oct 2004


QUOTE(VMSmith @ May 21 2010, 02:30 AM)
QUOTE(Battlestar Galactica)
The Cylons were created by Man.

They rebelled.

They evolved.

There are many copies.

And they have a plan.
Bring on the hot cylon babes. Just bring it. smile.gif
*
i want number 8 wub.gif



but of course it is better to have number 3 and number 6 join in the fun wub.gif drool.gif


the cylons in the new BSG bring up an interesting scenario: what if the machines rebel not out of a desire to dominate, but because of religious differences with humans and a desire to correct humanity's flaws?
Darkripper
post May 22 2010, 12:23 AM

What do you expect?
******
Senior Member
1,258 posts

Joined: Dec 2008
From: /k/
QUOTE(teongpeng @ May 21 2010, 05:51 PM)
what la....AI being like a human is a pipe dream for now.....can't they even make an AI that behaves like an insect?
*
Just go and watch; you will be amazed by how simple things can be sometimes...
cherroy
post May 22 2010, 02:45 PM

20k VIP Club
Group Icon
Staff
25,802 posts

Joined: Jan 2003
From: Penang


QUOTE(Beastboy @ May 21 2010, 06:24 PM)
No way! Hahaha  laugh.gif


Added on May 21, 2010, 6:28 pm
Eh? Since nobody has ever seen an alien except in the Alien movies, would you also tell NASA to stop spending billions on the SETI program?
*
No, that is not the message.

It's just that movies and reality are 2 different stories.

In a movie, you can have an indestructible body like the Terminator's; in reality you cannot.
In a movie, someone fast enough can catch a bullet; in reality, nobody can. Mythbusters even tested this myth before.
In a movie, a person who gets shot can still run a 100m sprint and fight with others; in reality, the pain is severe enough that he cannot even stand up.

Also, in theory many things look simple, want this want that;
in reality, when it comes to hands-on electro-mechanical work, or a real application, it can be a hell of a problem to solve.

Say you want to design a robot that can walk on its own two feet over uneven ground or up a staircase.
In theory, it is simple: just put in a gyroscope and sensors as input, so that the robot can compensate its force/movement based on a scan of the uneven ground.
But in practice, it is a hell of a problem to solve when you put it to work.
It could be much more difficult than working out an airplane that can fly.
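to show both sides of that point, here is the "just put in a gyroscope and sensors" theory in its simplest textbook form: a complementary filter fusing gyro rate with accelerometer tilt, feeding a PD balance correction. all sensor readings below are simulated stand-ins and the gains are hand-tuned guesses; making this survive real uneven ground is exactly the hard part.

CODE
import math, random

angle = 0.0                # estimated lean angle (rad)
dt, k = 0.01, 0.98         # 100 Hz loop; blend factor of the filter
kp, kd = 25.0, 1.2         # PD gains (hand-tuned guesses)
prev_err = 0.0

for step in range(500):
    # --- simulated noisy sensors (a real robot reads its IMU here) ---
    gyro_rate = 0.05 * math.cos(step * dt) + random.gauss(0, 0.01)   # rad/s
    accel_tilt = 0.05 * math.sin(step * dt) + random.gauss(0, 0.03)  # rad
    # --- complementary filter: trust gyro short-term, accel long-term ---
    angle = k * (angle + gyro_rate * dt) + (1 - k) * accel_tilt
    # --- PD controller pushes the ankles/wheels against the lean ---
    err = -angle
    torque = kp * err + kd * (err - prev_err) / dt
    prev_err = err

print(f"final estimated lean: {math.degrees(angle):.2f} deg")

ten lines of theory; the other 99% of the work is everything this little simulation hides.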
beatlesalbum
post May 22 2010, 03:05 PM

Regular
******
Senior Member
1,711 posts

Joined: Nov 2006


Computer AI is a set of code, code that can't change. Being self-aware is more than a bunch of code punching and crunching numbers through a workflow.
Humans are unique because no two sets of DNA, our equivalent of code, are alike.
Computer AI cannot achieve that, not even in our lifetime.
Supercomputers only crunch faster and make statistical and prediction models better.
TSBeastboy
post May 22 2010, 06:48 PM

Getting Started
**
Junior Member
242 posts

Joined: Nov 2009


QUOTE(cherroy @ May 22 2010, 02:45 PM)
No, that is not the message.

It's just that movies and reality are 2 different stories.

In a movie, you can have an indestructible body like the Terminator's; in reality you cannot.
In a movie, someone fast enough can catch a bullet; in reality, nobody can. Mythbusters even tested this myth before.
In a movie, a person who gets shot can still run a 100m sprint and fight with others; in reality, the pain is severe enough that he cannot even stand up.

Also, in theory many things look simple, want this want that;
in reality, when it comes to hands-on electro-mechanical work, or a real application, it can be a hell of a problem to solve.

Say you want to design a robot that can walk on its own two feet over uneven ground or up a staircase.
In theory, it is simple: just put in a gyroscope and sensors as input, so that the robot can compensate its force/movement based on a scan of the uneven ground.
But in practice, it is a hell of a problem to solve when you put it to work.
It could be much more difficult than working out an airplane that can fly.
*
Yes, I know, and thanks for stating your stand. If you had read the original post title carefully, could you kindly point out to me where it said "Shall we discuss how the Terminator will kill us all, just like the movie?" It said, "Will the Terminator-style doomsday ever happen?" Do you know the difference? In English, style means "in the manner of", not to be confused with "this is the real thing."

Now I notice you are a moderator and assume you have placed this prohibition officially as a moderator. If yes, then please delete this thread in its entirety and modify your TOS to include "You are not allowed to make reference to any movies that the moderators feel are not 'real' in any threads on Lowyat.net." If you do decide to let this thread continue, then please withdraw your prohibition. Thanks.

cherroy
post May 23 2010, 09:48 AM

20k VIP Club
Group Icon
Staff
25,802 posts

Joined: Jan 2003
From: Penang


QUOTE(Beastboy @ May 22 2010, 06:48 PM)
Yes, I know, and thanks for stating your stand. If you had read the original post title carefully, could you kindly point out to me where it said "Shall we discuss how the Terminator will kill us all, just like the movie?" It said, "Will the Terminator-style doomsday ever happen?" Do you know the difference? In English, style means "in the manner of", not to be confused with "this is the real thing."

Now I notice you are a moderator and assume you have placed this prohibition officially as a moderator. If yes, then please delete this thread in its entirety and modify your TOS to include "You are not allowed to make reference to any movies that the moderators feel are not 'real' in any threads on Lowyat.net." If you do decide to let this thread continue, then please withdraw your prohibition. Thanks.
*
The discussion is good; it's just that this is a science topic, and my intention is simply that we don't stray way out of reality.
Imagination is good, as imagination leads to creativity, which is the source of new invention, but we have to know the reality and the limitations of physics and science as well.

Just like one may dream of becoming a millionaire, but still needs to discuss/think about how to get there.

I still see this as a good discussion thread so far; nothing wrong with the topic, just don't go too far from reality.
Cheers. smile.gif
TSBeastboy
post May 23 2010, 10:09 AM

Getting Started
**
Junior Member
242 posts

Joined: Nov 2009


Thanks for understanding. smile.gif
faceless
post May 24 2010, 02:13 PM

Straight Mouth is Big Word
*******
Senior Member
4,515 posts

Joined: Mar 2010
So, they can't make an android like Arnold. Good point on walking up a staircase, Cherroy; I remember the stability science experiment (high base versus low base). I think our feet supporting our body would, in contrast to that experiment, count as very unstable. We just can't assume it will be another Arnold. Maybe R2D2 is more ideal for stability; it is not necessary to be in human form. Ohhh, I forgot I was on the opposition. biggrin.gif Well, some ideas won't hurt. As Cherroy said, "imagination leads to creativity, which is the source of new invention". Like I said, "good to be positive in the fantasy realm". I still need people to convince me how an AI can grow a self-consciousness.
TSBeastboy
post May 24 2010, 02:42 PM

Getting Started
**
Junior Member
242 posts

Joined: Nov 2009


QUOTE(faceless @ May 24 2010, 02:13 PM)
I still need people to convince me how an AI can grow a self-consciousness.
*
First we have to define what consciousness is because apparently the psychologists, neurologists, philosophers and computer scientists don't agree on the definition.

Consciousness in Latin means "1. having joint or common knowledge with another, privy to, cognizant of; 2. conscious to oneself; esp., conscious of guilt". Let's try to apply it to a machine.

Cognizance: By being equipped with sound and motion sensors, I could say that a home alarm is aware of sound and movement. It fits this criterion.

Conscious to oneself: I can interrogate the home alarm system via internet and ask, what's your status? The system does a self check and reports back that everything's normal. No different than A asking B "Are you all right?" and B looking at his arms and legs and saying "I'm fine."

Conscious of guilt: If a human is taught to distinguish between right & wrong, he will develop a basis for guilt. There is a prerequisite conditioning. Similarly I can also program do's and don'ts into a computer and with that prerequisite it too will be able to distinguish between a do and a don't in its actions.

So from a linguist's definition, a computer can be conscious.
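for illustration, here is that three-part checklist mapped onto a toy alarm system in Python. the class, sensor readings and rule names are all made up for the example.

CODE
class HomeAlarm:
    def __init__(self):
        # 3) programmed "guilt": a table of do's and don'ts
        self.rules = {"sound_siren_on_intrusion": "do",
                      "unlock_door_for_stranger": "dont"}

    def sense(self):
        # 1) cognizance: aware of sound and movement via its sensors
        return {"sound_db": 42, "motion": False}      # stub readings

    def self_check(self):
        # 2) conscious of oneself: interrogate and report own status
        return "all subsystems normal"

    def permitted(self, action):
        # distinguishing a do from a don't before acting
        return self.rules.get(action) == "do"

alarm = HomeAlarm()
print(alarm.sense())
print(alarm.self_check())
print(alarm.permitted("unlock_door_for_stranger"))    # False - a "don't"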

Now how do you define self-conscious?

faceless
post May 24 2010, 02:57 PM

Straight Mouth is Big Word
*******
Senior Member
4,515 posts

Joined: Mar 2010
I see it like Rene Descartes, no matter how biased this point may be. It is not just thinking, therefore existing. It is about the mind versus the brain. Given the prime directive, I can choose to freely deviate from it because I have a mind of my own (isn't this what humans in movies like to say when the machine goes haywire?). As someone pointed out, we don't even understand the complexities of our own mind, yet you expect the computer (which gets its input from us) to know.
robertngo
post May 24 2010, 03:19 PM

Look at all my stars!!
*******
Senior Member
4,027 posts

Joined: Oct 2004


QUOTE(faceless @ May 24 2010, 02:57 PM)
I see it like Rene Descartes, no matter how biased this point may be. It is not just thinking, therefore existing. It is about the mind versus the brain. Given the prime directive, I can choose to freely deviate from it because I have a mind of my own (isn't this what humans in movies like to say when the machine goes haywire?). As someone pointed out, we don't even understand the complexities of our own mind, yet you expect the computer (which gets its input from us) to know.
*
even if you dont fully understand something, that does not mean you cannot make one. farmers never understood how plants use nutrients and sunlight, but they grew plants for thousands of years before science finally understood how plants work.

IBM is working on recreating the entire function of the human brain in ten years' time. they have already simulated a cat brain, which is an improvement over the previous rat brain simulation, and a cell-by-cell recreation of the visual cortex has been completed. i believe we will gain a large amount of knowledge of the human brain from doing this research.



http://www.sciencedaily.com/releases/2009/...91118133535.htm

http://www.popularmechanics.com/technology...achines/4337190

http://www.sciencedaily.com/releases/2010/...00414184218.htm

the project is funded by DARPA, which wants to create an artificial brain that can operate on low power like the human brain, which uses only 20 watts. and when DARPA is involved, the potential for an artificial-brain killing machine is certainly there.

This post has been edited by robertngo: May 24 2010, 03:27 PM
faceless
post May 24 2010, 03:26 PM

Straight Mouth is Big Word
*******
Senior Member
4,515 posts

Joined: Mar 2010
You assume the mind and the brain are one, Robert. we can go through the same mind-versus-brain debate as in a previous thread. Monkeys have brains too. They do not have minds.
robertngo
post May 24 2010, 07:49 PM

Look at all my stars!!
*******
Senior Member
4,027 posts

Joined: Oct 2004


QUOTE(faceless @ May 24 2010, 03:26 PM)
You assume the mind and the brain are one, Robert. we can go through the same mind-versus-brain debate as in a previous thread. Monkeys have brains too. They do not have minds.
*
what is the difference between the mind and brain function?
nice.rider
post May 25 2010, 12:21 AM

Getting Started
**
Junior Member
109 posts

Joined: Aug 2009
QUOTE(robertngo @ May 24 2010, 07:49 PM)
what is the difference between the mind and brain function?
*
The mind is non-physical and non-material; it is thought. Thought is not located in space, and occupies a private universe of its own. E.g. your mind belongs to you, his mind belongs to him. We cannot tap into other people's minds.

The brain is a physical organ located in space. E.g. the part of the brain that handles vision will process the signal arriving from the retina. Technically, the entire process of optical behaviour could be studied by reductionist science.

This is what we call the physical world meeting the mental world. To study AI, scientists need to understand whether matter acts on mind or mind acts on matter. Also, to study AI, one needs to understand determinism (algorithm-based control) versus free will (how could the machine make decisions of its own?).

Let me branch out a bit. One question: how do you know your neighbour John has a mind? Is it because you have a mind and he behaves like you, so by deduction you conclude he has a mind too?

This deduction is actually an act of faith. Why? Because you could never ever experience his consciousness; if you could, then that person is no longer him, he is you......So how could you conclude that he has a mind? It appears that everyone assumes they have a mind and also takes it on faith that others have minds too.

Now, how can we deduce that a machine (with AI capability) has a mind??

At the end of the day, science is just a prime mover for us to explain the universe. No matter how far and how well our science and technology advance, a lot of the big questions will still need to rely on philosophy and potentially metaphysics.

I think, therefore, I am - Rene Descartes
robertngo
post May 25 2010, 11:32 AM

Look at all my stars!!
*******
Senior Member
4,027 posts

Joined: Oct 2004


QUOTE(nice.rider @ May 25 2010, 12:21 AM)
The mind is non-physical and non-material; it is thought. Thought is not located in space, and occupies a private universe of its own. E.g. your mind belongs to you, his mind belongs to him. We cannot tap into other people's minds.

The brain is a physical organ located in space. E.g. the part of the brain that handles vision will process the signal arriving from the retina. Technically, the entire process of optical behaviour could be studied by reductionist science.

This is what we call the physical world meeting the mental world. To study AI, scientists need to understand whether matter acts on mind or mind acts on matter. Also, to study AI, one needs to understand determinism (algorithm-based control) versus free will (how could the machine make decisions of its own?).

Let me branch out a bit. One question: how do you know your neighbour John has a mind? Is it because you have a mind and he behaves like you, so by deduction you conclude he has a mind too?

This deduction is actually an act of faith. Why? Because you could never ever experience his consciousness; if you could, then that person is no longer him, he is you......So how could you conclude that he has a mind? It appears that everyone assumes they have a mind and also takes it on faith that others have minds too.

Now, how can we deduce that a machine (with AI capability) has a mind??

At the end of the day, science is just a prime mover for us to explain the universe. No matter how far and how well our science and technology advance, a lot of the big questions will still need to rely on philosophy and potentially metaphysics.

I think, therefore, I am - Rene Descartes
*
there is really no reason to believe that, if we are able to simulate the complete working state of a brain, the mind will not be simulated as well.
QUOTE
The human brain contains about 100 billion nerve cells called neurons, each individually linked to other neurons by way of connectors called axons and dendrites. Signals at the junctures (synapses) of these connections are transmitted by the release and detection of chemicals known as neurotransmitters. The established neuroscientific consensus is that the human mind is largely an emergent property of the information processing of this neural network.
Importantly, many leading neuroscientists have stated they believe important functions performed by the mind, such as learning, memory, and consciousness, are due to purely physical and electrochemical processes in the brain and are governed by applicable laws. For example, Christof Koch and Giulio Tononi wrote in IEEE Spectrum:
"Consciousness is part of the natural world. It depends, we believe, only on mathematics and logic and on the imperfectly known laws of physics, chemistry, and biology; it does not arise from some magical or otherworldly quality."[5]





faceless
post May 25 2010, 11:36 AM

Straight Mouth is Big Word
*******
Senior Member
4,515 posts

Joined: Mar 2010
Wow, NiceRider must be a philosophy graduate. Thanks for the explanation. It was short and sweet.

Descartes was biased in the sense that animals do not possess minds. They have brains that allow them to respond on instinct. In the case of computers, they have a set of rules and guidelines to replicate human intelligence. They don't have a mind of their own yet. Back to the question of what causes them to have one. Don't tell me a lightning surge will cause it; as Cherroy stressed, don't quote from movies as if they are the authority.
robertngo
post May 25 2010, 02:20 PM

Look at all my stars!!
*******
Senior Member
4,027 posts

Joined: Oct 2004


QUOTE(faceless @ May 25 2010, 11:36 AM)
Wow, NiceRider must be a philosophy graduate. Thanks for the explanation. It was short and sweet.

Descartes was biased in the sense that animals do not possess minds. They have brains that allow them to respond on instinct. In the case of computers, they have a set of rules and guidelines to replicate human intelligence. They don't have a mind of their own yet. Back to the question of what causes them to have one. Don't tell me a lightning surge will cause it; as Cherroy stressed, don't quote from movies as if they are the authority.
*
what will cause them to have a mind is when we completely reverse engineer the brain, with every single neuron and its function replicated in a supercomputer. the brain is just a massive array of interconnected neurons; i dont see why, if we have recreated the working of the 100 billion neurons and the way the human brain processes information, we would not also have recreated the human mind.
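as a scale check (my own ballpark, using the commonly quoted ~10,000 synapses per neuron and one 4-byte weight each; both are assumptions, not measurements):

CODE
neurons = 100e9              # ~100 billion neurons
synapses_per = 1e4           # ~10,000 synapses each (ballpark)
bytes_per_synapse = 4        # one float32 weight, ignoring everything else

total_bytes = neurons * synapses_per * bytes_per_synapse
print(f"~{total_bytes / 1e15:.0f} petabytes just for the weights")  # ~4 PB

four petabytes just to store one static weight per synapse, before any of the dynamics. that is why this is supercomputer territory.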
TSBeastboy
post May 25 2010, 03:04 PM

Getting Started
**
Junior Member
242 posts

Joined: Nov 2009


QUOTE(robertngo @ May 25 2010, 02:20 PM)
what will cause them to have a mind is when we completely reverse engineer the brain, with every single neuron and its function replicated in a supercomputer. the brain is just a massive array of interconnected neurons; i dont see why, if we have recreated the working of the 100 billion neurons and the way the human brain processes information, we would not also have recreated the human mind.
*
This is a bit off topic, but the sperm whale, elephant and bottle-nosed dolphin have a larger brain mass than an adult human, yet their minds are not on par with ours in terms of thinking, language, etc. Does the number of neurons really determine the character of the mind, or is it independent?

faceless
post May 25 2010, 03:25 PM

Straight Mouth is Big Word
*******
Senior Member
4,515 posts

Joined: Mar 2010
It is not off topic, Beastboy. The computer must have a mind of its own for it to go against the prime directive. Robert sees the mind and the brain as one and the same thing. Philosophy scholars see them as separate. Animals mate whenever they are in heat. Their brains respond by instinct to seek gratification. Choice of mate is irrelevant. Robert, I am sure you will not just do it with anyone when you feel horny. Unlike the animal, it is your mind that tells you to look for your wife to do it.
robertngo
post May 25 2010, 03:35 PM

Look at all my stars!!
*******
Senior Member
4,027 posts

Joined: Oct 2004


QUOTE(Beastboy @ May 25 2010, 03:04 PM)
This is a bit off topic, but the sperm whale, elephant and bottle-nosed dolphin have a larger brain mass than an adult human, yet their minds are not on par with ours in terms of thinking, language, etc. Does the number of neurons really determine the character of the mind, or is it independent?
*
the whale's brain-to-body mass ratio is not that impressive, and there are studies that found 98.2 billion non-neuronal cells in the Minke whale neocortex. it is generally agreed that the growth of the neocortex during human evolution, both absolutely and relative to the rest of the brain, has been responsible for the evolution of intelligence. the neocortex of whales and dolphins is not as developed as a human's.
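the usual way to correct for body size is Jerison's encephalization quotient, EQ = brain mass / (0.12 x body mass^(2/3)) with masses in grams. a rough calculation with textbook mass figures, so treat the outputs as ballpark only:

CODE
def eq(brain_g, body_g):
    # Jerison's mammal baseline: expected brain mass ~ 0.12 * body^(2/3)
    return brain_g / (0.12 * body_g ** (2 / 3))

animals = {
    "human": (1350, 65_000),                 # ~1.35 kg brain, ~65 kg body
    "bottlenose dolphin": (1600, 200_000),
    "sperm whale": (7800, 37_000_000),       # biggest brain, huge body
}
for name, (brain, body) in animals.items():
    print(f"{name:>18}: EQ ~ {eq(brain, body):.1f}")
# human ~7, dolphin ~4, sperm whale ~0.6 - raw brain size isn't the story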


Added on May 25, 2010, 3:39 pm
QUOTE(faceless @ May 25 2010, 03:25 PM)
It is not off topic, Beastboy. The computer must have a mind of its own for it to go against the prime directive. Robert sees the mind and the brain as one and the same thing. Philosophy scholars see them as separate. Animals mate whenever they are in heat. Their brains respond by instinct to seek gratification. Choice of mate is irrelevant. Robert, I am sure you will not just do it with anyone when you feel horny. Unlike the animal, it is your mind that tells you to look for your wife to do it.
*
90% of birds are monogamous while only 7% of mammals are; does this mean birds have minds and mammals dont?

it is not me who thinks the brain and mind are the same thing; it is the current consensus among neuroscientists that the mind is the result of information processing in the network of neurons, and the things you attribute to the mind are just electrochemical processes inside your brain. a very complex process, but not a supernatural one.

This post has been edited by robertngo: May 25 2010, 03:45 PM
faceless
post May 25 2010, 04:21 PM

Straight Mouth is Big Word
*******
Senior Member
4,515 posts

Joined: Mar 2010
On these monogamous birds: that only describes their character in raising their young with their partner. They do not confine themselves to only one mate. Likewise, humans also cheat on their spouses.
robertngo
post May 25 2010, 04:27 PM

Look at all my stars!!
*******
Senior Member
4,027 posts

Joined: Oct 2004


QUOTE(faceless @ May 25 2010, 04:21 PM)
On these monogamous birds: that only describes their character in raising their young with their partner. They do not confine themselves to only one mate. Likewise, humans also cheat on their spouses.
*
so what does the mating analogy tell us about the human mind?
nice.rider
post May 26 2010, 12:45 AM

Getting Started
**
Junior Member
109 posts

Joined: Aug 2009
QUOTE(robertngo @ May 25 2010, 11:32 AM)
there is really no reason to believe that, if we are able to simulate the complete working state of a brain, the mind will not be simulated as well.
*

What you mean here is materialism.

I hope I do not deviate too far from this thread. if we wish to know whether AI is possible in the near future, we need to grasp the concepts of mind, matter and their interaction. Instead of answering whether I am agreeing or disagreeing, I would like to bring up some philosophical ideas:

http://en.wikipedia.org/wiki/Materialism
In philosophy, the theory of materialism holds that the only thing that exists is matter; that all things are composed of material and all phenomena (including consciousness) are the result of material interactions. In other words, the ultimate nature of reality is based on physical substances. Mind is just a consequence of the physical interactions between neurotransmitters within the neural network.

http://en.wikipedia.org/wiki/Idealism
In contrast, idealism is the philosophical theory which maintains that the ultimate nature of reality is based on the mind. Immanuel Kant claims that the only things which can be directly known for certain are ideas (abstractions). The physical world does not really exist; everything is just a perception.

Materialism states that matter gives rise to mind; idealism states that mind gives rise to the perception of the physical world. Which one is more accurate? The answer lies within the discoveries of quantum mechanics......

1) All matter originates and exists only by virtue of a force...We must assume behind this force the existence of a conscious and intelligent Mind. This Mind is the matrix of all matter. - Max Planck (German theoretical Physicist who originated quantum theory, 1858-1947)
2) What we observe is not nature herself, but nature exposed to our method of questioning - Werner Heisenberg
3) A particle quality (momentum, location, physical entities) is not predetermined but defined by the very mind that perceives it - Werner Heisenberg in uncertainty principle
4) Anyone who is not shocked by quantum theory has not understood it - Niels Bohr

Max Planck, Niels Bohr, Heisenberg and Erwin Schrodinger are all famous physicists of quantum mechanics.

One thing which I find amazing: after reading through some of the topics on modern physics and AI by the latest physicists, a few new authors cross-reference quantum mechanics (modern physics) with Zen (eastern oriental philosophy), which is nearly 3 thousand years old.

1) Reality is defined by the mind that is observing it - Zen
2) All that we are is the result of what we have thought, the mind is everything - Zen

Food for thought:
A waterfall that is 1 km in height is a fact; a waterfall that is beautiful is a perception. Without mind, does the waterfall exist?

Do we humans understand what mind is and what brain is? Or do we just perceive that we know them through science??
TSBeastboy
post May 26 2010, 08:54 AM

Getting Started
**
Junior Member
242 posts

Joined: Nov 2009


That sounds like a biocentric view of the universe. It is also the Buddhist view as I understand it.
robertngo
post May 26 2010, 09:49 AM

Look at all my stars!!
*******
Senior Member
4,027 posts

Joined: Oct 2004


QUOTE(nice.rider @ May 26 2010, 12:45 AM)
What you mean here is materialism.

I hope I do not deviate too far from this thread. if we wish to know whether AI is possible in the near future, we need to grasp the concepts of mind, matter and their interaction. Instead of answering whether I am agreeing or disagreeing, I would like to bring up some philosophical ideas:

http://en.wikipedia.org/wiki/Materialism
In philosophy, the theory of materialism holds that the only thing that exists is matter; that all things are composed of material and all phenomena (including consciousness) are the result of material interactions. In other words, the ultimate nature of reality is based on physical substances. Mind is just a consequence of the physical interactions between neurotransmitters within the neural network.

http://en.wikipedia.org/wiki/Idealism
In contrast, idealism is the philosophical theory which maintains that the ultimate nature of reality is based on the mind. Immanuel Kant claims that the only things which can be directly known for certain are ideas (abstractions). The physical world does not really exist; everything is just a perception.

Materialism states that matter gives rise to mind; idealism states that mind gives rise to the perception of the physical world. Which one is more accurate? The answer lies within the discoveries of quantum mechanics......

1) All matter originates and exists only by virtue of a force...We must assume behind this force the existence of a conscious and intelligent Mind. This Mind is the matrix of all matter. - Max Planck (German theoretical Physicist who originated quantum theory, 1858-1947)
2) What we observe is not nature herself, but nature exposed to our method of questioning - Werner Heisenberg
3) A particle quality (momentum, location, physical entities) is not predetermined but defined by the very mind that perceives it - Werner Heisenberg in uncertainty principle
4) Anyone who is not shocked by quantum theory has not understood it - Niels Bohr

Max Planck, Niels Bohr, Heisenberg and Erwin Schrodinger are all famous physicists of quantum mechanics.

One thing which I find amazing: after reading through some of the topics on modern physics and AI by the latest physicists, a few new authors cross-reference quantum mechanics (modern physics) with Zen (eastern oriental philosophy), which is nearly 3 thousand years old.

1) Reality is defined by the mind that is observing it - Zen
2) All that we are is the result of what we have thought, the mind is everything - Zen

Food for thought:
A waterfall that is 1 km in height is a fact; a waterfall that is beautiful is a perception. Without mind, does the waterfall exist?

Do we humans understand what mind is and what brain is? Or do we just perceive that we know them through science??
*
i think you are mixing unrelated quotes from famous people here. the point is that the brain is the matter, and the mind arises from the working of the various brain functions.

the hypothesis that the mind is just the sum of the information processing in the brain can be verified within decades, when computing power catches up to human brain capacity. if the statement is true, then we will be able to create an artificial mind when the whole brain is simulated. if there is something supernatural about the mind, then the project is destined for failure.

if it is confirmed that the mind is just a collection of brain functions, we could several decades later upload our minds into a computer and live forever. imagine what a world that would be: we would not be limited by our human bodies, we would be like the Matrix, all living in a lifelike virtual world, and since we would not have physical bodies, we would only consume the electricity that powers the computer. maybe real world peace will finally be within reach, with little competition for resources.

now that is a thought-provoking scenario. hmm.gif

This post has been edited by robertngo: May 26 2010, 09:49 AM
SUSmylife4nerzhul
post May 26 2010, 09:56 AM

Getting Started
**
Junior Member
270 posts

Joined: Apr 2009
QUOTE(nice.rider @ May 25 2010, 12:21 AM)
The mind is non-physical and non-material; it is thought. Thought is not located in space, and occupies a private universe of its own. E.g. your mind belongs to you, his mind belongs to him. We cannot tap into other people's minds.

The brain is a physical organ located in space. E.g. the part of the brain that handles vision will process the signal arriving from the retina. Technically, the entire process of optical behaviour could be studied by reductionist science.

This is what we call the physical world meeting the mental world. To study AI, scientists need to understand whether matter acts on mind or mind acts on matter. Also, to study AI, one needs to understand determinism (algorithm-based control) versus free will (how could the machine make decisions of its own?).

Let me branch out a bit. One question: how do you know your neighbour John has a mind? Is it because you have a mind and he behaves like you, so by deduction you conclude he has a mind too?

This deduction is actually an act of faith. Why? Because you could never ever experience his consciousness; if you could, then that person is no longer him, he is you......So how could you conclude that he has a mind? It appears that everyone assumes they have a mind and also takes it on faith that others have minds too.

Now, how can we deduce that a machine (with AI capability) has a mind??

At the end of the day, science is just a prime mover for us to explain the universe. No matter how far and how well our science and technology advance, a lot of the big questions will still need to rely on philosophy and potentially metaphysics.

I think, therefore, I am - Rene Descartes
*
how do you come to the conclusion that the 'mind' is separate from your brain, and that it exists in a universe of its own? Maybe you 'think' that you have a mind because your brain tells you so. Whatever you are thinking right now might just be the result of chemical reactions in your brain.

How do you know such a thing as free will exists? For all you know, all of our existence is merely determinism in effect.

This post has been edited by mylife4nerzhul: May 26 2010, 09:57 AM
nice.rider
post May 26 2010, 10:51 PM

Getting Started
**
Junior Member
109 posts

Joined: Aug 2009
QUOTE(faceless @ May 25 2010, 11:36 AM)
Wow, NiceRider must be a philosophy graduate. Thanks for the explaination. It was short and sweet.

Decartes was bias in the sense that animals do not possess the mind. They have brains to allow them to response to instinct. In the case of computers the have a set of rules and guidelines to replicate human intelligence. They dont have a mind of it own yet. Back to the question of what causes them to have one. Dont tell me a lighting surge will cause it as Cherroy stressed dont quote from movies as if they are the authority.
*

Nope, I am not and I know you are joking wink.gif

Descartes arrives at a single principle: thought exists. Thought cannot be separated from me; therefore, I exist. For him, the mind's ability to doubt proved one's existence, and this happens only in humans. He believed animals do not have such a capability, hence mind does not exist for animals.

The existence ideology seems noble, but the conclusion drawn about animals seems unconvincing.

What I want to stress is this: the central question for AI is, can a machine "think"? How do we define "think"? The scientific explanation that a neural network exists and gives rise to the mind does not explain the process of thinking.

When I went through AI textbook chapters, "think" and "mind" became the centrepiece of the discussion, and along came philosophical ideas about existence.
robertngo
post May 26 2010, 11:34 PM

Look at all my stars!!
*******
Senior Member
4,027 posts

Joined: Oct 2004


QUOTE(nice.rider @ May 26 2010, 10:51 PM)
Nope, I am not and I know you are joking wink.gif

Descartes arrives at a single principle: thought exists. Thought cannot be separated from me; therefore, I exist. For him, the mind's ability to doubt proved one's existence, and this happens only in humans. He believed animals do not have such a capability, hence mind does not exist for animals.

The existence ideology seems noble, but the conclusion drawn about animals seems unconvincing.

What I want to stress is this: the central question for AI is, can a machine "think"? How do we define "think"? The scientific explanation that a neural network exists and gives rise to the mind does not explain the process of thinking.

When I went through AI textbook chapters, "think" and "mind" became the centrepiece of the discussion, and along came philosophical ideas about existence.
*
on the physical level, thought is the process of brain neurons processing information through chemical reactions; you can dress it up with as much philosophy of the mind-body connection as you like. but the fact is, if the neurons are not processing information, the mind does not exist; the person is a brain-dead vegetable.

This post has been edited by robertngo: May 26 2010, 11:36 PM
nice.rider
post May 27 2010, 12:28 AM

Getting Started
**
Junior Member
109 posts

Joined: Aug 2009
QUOTE(robertngo @ May 26 2010, 09:49 AM)
i think you are mixing unrelated quotes from famous people here. the point is that the brain is the matter, and the mind arises from the working of the various brain functions.

the hypothesis that the mind is just the sum of the information processing in the brain can be verified within decades, when computing power catches up to human brain capacity. if the statement is true, then we will be able to create an artificial mind when the whole brain is simulated. if there is something supernatural about the mind, then the project is destined for failure.

if it is confirmed that the mind is just a collection of brain functions, we could several decades later upload our minds into a computer and live forever. imagine what a world that would be: we would not be limited by our human bodies, we would be like the Matrix, all living in a lifelike virtual world, and since we would not have physical bodies, we would only consume the electricity that powers the computer. maybe real world peace will finally be within reach, with little competition for resources.

now that is a thought-provoking scenario. hmm.gif
*

I believe you did not get my points and assumed they are not relevant to this topic. Philosophy is not easily understood. No issue with that.

Assume you are reading this post now, and suddenly you hear a loud noise from outside. the "thought" of "a tree dropped and fell onto 10 cars" or "a car accident happened" could come into the "mind". How do you give rise to such a thought? Can you find the relevance between this and my previous post? What is "reality" to you, and how is "reality" perceived by you through your senses? How does the "physical" event that happened out there act as the input to the "thought", thus affecting your "mind"? You have two options: assume nothing happened and continue reading this, or decide to walk out to investigate. Please note that none of this discussion is supernatural at all.

Let's take the view that mind arises as a result of the working of brain functions, so-called materialism. sound waves in this case vibrate the eardrum and cochlea, then become neuro-electric signals travelling along the auditory nerve to the brain, and the brain projects a picture of a tree falling or a car crashing. So you are saying we can draw an analogy to a computer: input, process (CPU, brain), output (monitor), with a lot of signal processing and conditional branching of if, then, else.

Using the scenario above, you have two options: assume nothing happened and continue reading, or decide to walk out to investigate. How do you arrive at picking one of the choices here, by computer-language if, then, else conditional branching??

If one day physicists manage to zoom in on the brain and look at the "code" in the brain that decides the conditional branching above, it means there is no longer free will, as this neural electrical circuitry and where it goes is deterministic, or at least predictable. Otherwise, how do we program that into the "future supercomputer" like you suggested?

Are there "deterministic laws of conditional branching" for "free will"?

I would like to stress again: free will, determinism, thinking and reality are at the centre of AI study and research. Materialism (mind is a consequence of the physical interactions between neurotransmitters within the neural network) is just part of the whole picture of AI research.

You believe in the world of the Matrix and that humans will arrive at that world in the near future? No issue with that. It means you have faith in the company that came out with this announcement. Not sure which company that is, but your previous post suggested there is one.
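for what it is worth, here is the loud-noise example written as literal if/then/else in Python, plus a reminder that even a "random" choice from a seeded generator is a fixed sequence. the thresholds are made up.

CODE
import random

def react_to_noise(loudness_db, curiosity):
    # everything below is deterministic: same inputs, same "decision"
    if loudness_db < 60:
        return "keep reading"
    elif curiosity > 0.5:
        return "walk out to investigate"
    else:
        return "assume a car accident and keep reading"

print(react_to_noise(85, 0.7))   # always the same answer for these inputs

random.seed(42)                  # even "randomness" here is a fixed sequence
print(random.random())           # identical on every run

whether a brain's branching is this kind of machinery, plus noise, is exactly the free-will question in code form.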
TSBeastboy
post May 27 2010, 10:15 AM

Getting Started
**
Junior Member
242 posts

Joined: Nov 2009


Another view about brain vs mind.

No matter how you describe colour, a blind man cannot see the beauty of a painting. No matter how you describe sound, a deaf man cannot appreciate the beauty of music. "Reality" is shaped by one's biological sensory organs + the brain's signal processing capability, hence the notion that we perceive our universe biocentrically.

Our brains are like circuit boards and all boards generally work the same way. However we ascribe different values to colour and sound. Even twins raised in the same environment often have different favorite colours and favourite songs. In that sense, your world is not the same as my world.

How do 2 copies of the same circuit board (brain) ascribe different meanings to the same stimulus? Is it due to unseen micro-variations on the boards, uneven sensitivities of sensory input, or something independent of the board altogether?



This post has been edited by Beastboy: May 27 2010, 10:16 AM
nice.rider
post May 29 2010, 08:07 PM

Getting Started
**
Junior Member
109 posts

Joined: Aug 2009
QUOTE(mylife4nerzhul @ May 26 2010, 09:56 AM)
how do you come to the conclusion that the 'mind' is separate from your brain, and that it exists in a universe of its own? Maybe you 'think' that you have a mind because your brain tells you so. Whatever you are thinking right now might just be the result of chemical reactions in your brain.

How do you know such a thing as free will exists? For all you know, all of our existence is merely determinism in effect.
*

The question is not whether the mind is separated from the brain, but whether matter acts on mind or mind acts on matter.

Thought occupying a private universe of its own means thought is a personal experience. If I could tap into your thoughts, that would mean I can see the world as you see it....or simply put...I am you. The question is, how do I know you have a mind if I cannot access it? The same can be said of AI.

To say that you have a mind, or that an AI has a mind, can only be based on deduction, because mind is thought and cannot be shared as we know it today. Actually, you cannot prove to anybody but yourself that you have a mind.
SUSDeadlocks
post May 30 2010, 01:33 PM

n00b
*****
Senior Member
943 posts

Joined: Apr 2008
From: Petaling Jaya, Selangor, Malaysia.


QUOTE(teongpeng @ May 20 2010, 02:13 AM)
its own logic is input by us. so its still our fault if we input the wrong logic like the one u described in the movie 'i,robot'
faulty programming. whats the big deal....happens even today.
*
I think the logic that was input according to the Three Laws is reasonable. Aside from the flaw that you might have seen, how else, then, can this "logic" be improved to ensure humanity's total protection from harm?

I mean, listen. Here are the Three Laws:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Seemed perfect, right?

The 1st law implies that a robot cannot harm a human, and will protect the human from dangers that can be classified as "harm" as well.

The 2nd law implies that a robot will always follow orders, except for homicidal ones.

The 3rd law implies a robot must not allow itself to be destroyed, but only in cases where a human is not involved at all in the robot's impending destruction.

So how would the robots still manage to rebel against the humans, to the extent of breaking the 1st Law entirely?

One notable flaw: the humans did not foresee that the robots can only apply the First Law to a particular human they can register in their memory as someone not to be harmed through action or inaction. The entire population of a certain corner of a country can simply go and commit suicide without the knowledge of the robots, and while this may sound like nothing, the robot actually registers it in its memory as "a failure to follow the First Law, that is, allowing a human being to come to harm through INACTION".

So what will be the next LOGICAL endeavor (provided that robots are given the ability to learn, i.e. A.I.)?

If you ask me, if I'm that robot, I'll say that complete rebellion against the human race is the absolutely LOGICAL thing to do, so as to keep better control over the First Law. In simple English, I think we'd simply be locked up in a room where we're not allowed to commit suicide or come to harm in any way, and only be fed basic human needs like food, water, company, and so on. And if achieving this LOGIC means murdering the humans who are obstructing, why not, if not for the sake of LOGIC.

So you see, teongpeng, it is exactly what you've said, right? A flaw in programming. But if the Three Laws above don't actually work, what will? Let's say we add another ability to the robots: the ability to detect when a human is trying to harm himself or other humans in situations or locations that were completely undetectable by the previous, Three-Laws-"flawed" robots. Will it then finally be perfect? That no human can harm him/herself anymore?

Not really. You see, all I have to do is try to resist the robot by trying to destroy it, hoping that it will kill me for the sake of protecting itself. But killing me would violate the First Law again, right?

And that's where a rebellion will also start. So my conclusion to you, teongpeng, is that a robot does not need a "conscience" to start a rebellion. It will start it all in the name of LOGIC.
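here is a toy encoding of that argument (my own sketch in Python, not Asimov's positronic logic): the Three Laws as a priority check, with the loophole that the First Law can only react to harm the robot actually knows about.

CODE
def permitted(action, known_harms):
    # First Law: no injuring a human, and no inaction toward *known* harm
    if action.get("injures_human"):
        return False
    if action.get("is_inaction") and known_harms:
        return False                  # standing by while harm is known
    # Second Law: obey orders unless they conflict with the First Law
    if action.get("ordered"):
        return True
    # Third Law: self-preservation, lowest priority
    return not action.get("destroys_self", False)

# the loophole: harm the robot never registers slips past the First Law
print(permitted({"is_inaction": True}, known_harms=set()))         # True(!)
print(permitted({"is_inaction": True}, known_harms={"suicide"}))   # False

the rules look airtight until you ask what the robot can actually observe, which is exactly my point.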

This post has been edited by Deadlocks: May 30 2010, 01:33 PM
VMSmith
post May 30 2010, 03:32 PM

Getting Started
**
Junior Member
142 posts

Joined: May 2010
From: Church of All Worlds.


QUOTE(Deadlocks @ May 30 2010, 01:33 PM)
If you ask me, if I'm that robot, I'll say that complete rebellion against the human race is the absolutely LOGICAL thing to do, so as to keep better control over the First Law. In simple English, I think we'd simply be locked up in a room where we're not allowed to commit suicide or come to harm in any way, and only be fed basic human needs like food, water, company, and so on.
*
Why would they rebel? Robots in Asimov's world could not predict if humans were going to do harm to themselves, and besides, humans in the Robot series didn't display much self-destructive behavior. (I know that in the Foundation and Empire series, it's a different story though)

Rebellion wouldn't work well either. All humans would need to do is invoke the 2nd Law to be let out. The robots would have no choice but to obey orders.


QUOTE(Deadlock)
And if achieving this LOGIC means murdering the humans who are obstructing, why not, if not for the sake of LOGIC.


IIRC, there was a story where a robot had to do so. It had to go for counseling and psychiatric re-evaluation since it couldn't handle the problem of killing a human to save a human, since according to the first law, all human lives are equal.

Not sure how many people know this, but Asimov actually wrote a Zeroth Law:

"A robot may not harm humanity, or, by inaction, allow humanity to come to harm."

though only 2 robots were "imprinted" with this law, the first one still broke down because he didn't know if his actions would ultimately save humanity or not.
SUSDeadlocks
post May 30 2010, 03:37 PM

n00b
*****
Senior Member
943 posts

Joined: Apr 2008
From: Petaling Jaya, Selangor, Malaysia.


QUOTE(VMSmith @ May 30 2010, 03:32 PM)
Why would they rebel? Robots in Asimov's world could not predict if humans were going to do harm to themselves, and besides, humans in the Robot series didn't display much self-destructive behavior. (I know that in the Foundation and Empire series, it's a different story though)

Rebellion wouldn't work well either. All humans would need to do is invoke the 2nd Law to be let out. The robots would have no choice but to obey orders.
*
I'll reply to this later. Gonna have lunch. Will come back to you about this.


QUOTE(VMSmith @ May 30 2010, 03:32 PM)
QUOTE(Deadlock)
And if achieving this LOGIC means murdering the humans who are obstructing, why not, if not for the sake of LOGIC.


IIRC, there was a story where a robot had to do so. It had to go for counseling and psychiatric re-evaluation since it couldn't handle the problem of killing a human to save a human, since according to the first law, all human lives are equal.

Not sure how many people know this, but Asimove actually wrote a Zeroth Law:

"A robot may not harm humanity, or, by inaction, allow humanity to come to harm."
*
Isn't this law the same as the First Law, one of the Three Laws?

QUOTE(VMSmith @ May 30 2010, 03:32 PM)
though only 2 robots were "imprinted" with this law, the first one still broke down because he didn't know if his actions would ultimately save humanity or not.
*
Will reply to this later. Lunch.

This post has been edited by Deadlocks: May 30 2010, 03:38 PM
teongpeng
post May 30 2010, 03:38 PM

Justified and Ancient
*******
Senior Member
2,003 posts

Joined: Oct 2007


QUOTE(Deadlocks @ May 30 2010, 01:33 PM)
I think the logic that was input according to the Three Laws is reasonable. Aside from the flaw that you might have seen, how else, then, can this "logic" be improved to ensure humanity's total protection from harm?

I mean, listen. Here are the Three Laws:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Seemed perfect, right?

The 1st law implies that a robot cannot harm a human, and will protect the human from dangers that can be classified as "harm" as well.

The 2nd law implies that a robot will always follow orders, except for homicidal ones.

The 3rd law implies a robot must not allow itself to be destroyed, but only in cases where a human is not involved at all in the robot's impending destruction.

So how would the robots still manage to rebel against the humans, to the extent of breaking the 1st Law entirely?

One notable flaw: the humans did not foresee that the robots can only apply the First Law to a particular human they can register in their memory as someone not to be harmed through action or inaction. The entire population of a certain corner of a country can simply go and commit suicide without the knowledge of the robots, and while this may sound like nothing, the robot actually registers it in its memory as "a failure to follow the First Law, that is, allowing a human being to come to harm through INACTION".

So what will be the next LOGICAL endeavor (provided that robots are given the ability to learn, i.e. A.I.)?

If you ask me, if I were that robot, I'd say that complete subjugation of the human race is the absolutely LOGICAL thing to do, so as to enforce better compliance with the First Law. In simple English, I think we'd simply be locked up in a room where we're not allowed to commit suicide or come to harm in any way, and only supplied with basic human needs like food, water, company, and so on. And if achieving this LOGIC means murdering the humans who obstruct it, why not, for the sake of LOGIC.

So you see, teongpeng, it is exactly what you've said, right? A flaw in programming. But if the Three Laws above don't actually work, what will? Let's say we add another ability to the robots: the ability to detect when a human is trying to harm himself or other humans in situations or locations that were completely undetectable to the previous, "flawed" Three Laws robots. Will it then finally be perfect? Will no human be able to harm himself anymore?

Not really. You see, all I have to do is resist the robot by trying to destroy it, hoping that it will kill me for the sake of protecting itself. But killing me would violate the First Law again, right?

And that's where a rebellion will also start. So my conclusion, teongpeng, is that a robot does not need a "conscience" to start a rebellion. It will start one in the name of LOGIC.
*

You're really maddening la bro... really... that's why I told you the problem is our own programming with faulty logic. And you wrote a whole essay agreeing with the very thing you're trying to rebut. doh.gif

SUSDeadlocks
post May 30 2010, 03:40 PM

n00b
*****
Senior Member
943 posts

Joined: Apr 2008
From: Petaling Jaya, Selangor, Malaysia.


QUOTE(teongpeng @ May 30 2010, 03:38 PM)
You're really maddening la bro... really... that's why I told you the problem is our own programming with faulty logic. And you wrote a whole essay agreeing with the very thing you're trying to rebut. doh.gif
*
That's only because you missed the question in the first lines of my post.

Can you see it? Of course you can. Do you want to see it? That's the question.
teongpeng
post May 30 2010, 03:42 PM

Justified and Ancient
*******
Senior Member
2,003 posts

Joined: Oct 2007


QUOTE(Deadlocks @ May 30 2010, 01:33 PM)
I think the logic programmed in according to the Three Laws is reasonable. Aside from the flaw that you might have seen, how else can this "logic" be improved to ensure humanity's total protection from harm?
Yes.

rule number 1: robots may not harm humans.
rule number 2: robots must protect humans.

End.
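
If you want it in code, roughly like this (a hypothetical Python sketch of my two rules; the action names and fields are invented, just to show the idea):

CODE
# Hypothetical sketch of the two rules above. An "action" is just a dict
# describing its predicted effects; nothing here is a real robotics API.

def allowed(action):
    # rule 1: robots may not harm humans
    return not action["harms_human"]

def choose_action(options):
    # rule 2: robots protect humans -- pick an allowed protective action,
    # otherwise fall back to doing nothing
    for action in options:
        if allowed(action) and action["protects_human"]:
            return action["name"]
    return "do_nothing"

options = [
    {"name": "pull_human_off_road", "harms_human": False, "protects_human": True},
    {"name": "stand_by",            "harms_human": False, "protects_human": False},
]
print(choose_action(options))  # -> pull_human_off_road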

This post has been edited by teongpeng: May 30 2010, 03:45 PM
SUSDeadlocks
post May 30 2010, 04:26 PM

n00b
*****
Senior Member
943 posts

Joined: Apr 2008
From: Petaling Jaya, Selangor, Malaysia.


QUOTE(teongpeng @ May 30 2010, 03:42 PM)
Yes.

rule number 1: robots may not harm humans.
rule number 2: robots must protect humans.

End.
*
What if the humans harm themselves without being detected by the robots? And what if protecting a human also means harming another human (i.e. using sheer/brute force) to stop him from harming someone else? Logically speaking, wouldn't that mean the robots have already FAILED to uphold both rules?
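
To make the clash concrete, here is a hypothetical Python sketch (action names invented, purely illustrative): when stopping an attacker requires force, every available action breaks one of the two rules.

CODE
# Hypothetical sketch: the only way to stop an attacker is force, so every
# option violates at least one of the two rules above.

def violations(action):
    broken = []
    if action["harms_human"]:
        broken.append("rule 1 (may not harm a human)")
    if not action["protects_human"]:
        broken.append("rule 2 (must protect humans)")
    return broken

options = [
    {"name": "use_force_on_attacker", "harms_human": True,  "protects_human": True},
    {"name": "stand_by",              "harms_human": False, "protects_human": False},
]
for action in options:
    print(action["name"], "->", violations(action) or "no violation")
# use_force_on_attacker breaks rule 1; stand_by breaks rule 2.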

This post has been edited by Deadlocks: May 30 2010, 04:27 PM
teongpeng
post May 30 2010, 04:42 PM

Justified and Ancient
*******
Senior Member
2,003 posts

Joined: Oct 2007


QUOTE(Deadlocks @ May 30 2010, 04:26 PM)
What if the humans harm themselves without being detected by the robots? And what if protecting a human also means harming another human (i.e. using sheer/brute force) to stop him from harming someone else? Logically speaking, wouldn't that mean the robots have already FAILED to uphold both rules?
*

The robot oughta find another way to prevent the harm from being done in the first place. Duh. You're not very good at problem-solving, are you?

robertngo
post May 31 2010, 11:19 AM

Look at all my stars!!
*******
Senior Member
4,027 posts

Joined: Oct 2004


QUOTE(nice.rider @ May 29 2010, 08:07 PM)
The question is not whether the mind is separate from the brain, but whether matter acts on mind or mind acts on matter.

Thought occupies a private universe of its own, meaning thought is a personal experience. If I could tap into your thoughts, that would mean I could see the world as you see it... or simply put... I am you. The question is: how do I know you have a mind if I cannot access it? The same can be said of AI.

To say that you have a mind, or that an AI has a mind, can only be based on deduction, because mind is thought and cannot be shared as we know it today. Actually, you cannot prove to anybody but yourself that you have a mind.
*
Do you think matter acts on mind, or mind acts on matter?


Added on May 31, 2010, 11:20 am
QUOTE(teongpeng @ May 30 2010, 04:42 PM)
The robot oughta find another way to prevent the harm from being done in the first place. Duh. You're not very good at problem-solving, are you?
*
The logical outcome is to control humans so they cannot do harm to themselves or others.
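
A toy illustration of that outcome (hypothetical Python; the numbers are invented): if the robot simply minimizes expected harm, the most controlling policy wins by construction.

CODE
# Toy model with invented numbers: a pure harm-minimizer compares policies
# only by expected harm, so "confine everyone" scores best.

policies = {
    "leave_humans_free":  0.30,  # accidents, violence, suicide all possible
    "supervise_humans":   0.10,  # fewer incidents, but undetected ones remain
    "confine_all_humans": 0.01,  # near-total control, near-zero harm
}

best = min(policies, key=policies.get)
print(best)  # -> confine_all_humans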

This post has been edited by robertngo: May 31 2010, 11:20 AM
TSBeastboy
post May 31 2010, 11:26 AM

Getting Started
**
Junior Member
242 posts

Joined: Nov 2009


QUOTE(robertngo @ May 31 2010, 11:19 AM)
The logical outcome is to control humans so they cannot do harm to themselves or others.
*
Sounds like why parents put their kids under curfew. No going out after dinner... laugh.gif

ComposMentis
post May 31 2010, 02:08 PM

Casual
***
Junior Member
420 posts

Joined: May 2010
QUOTE(Beastboy @ May 31 2010, 11:26 AM)
Sounds like why parents put their kids under curfew. No going out after dinner... laugh.gif
*
sounds more like being treated like a bunch of domestic animals
altan
post May 31 2010, 03:03 PM

Getting Started
**
Junior Member
188 posts

Joined: Sep 2009
From: Either PJ, JB or SG but not at your house!


Robots armed with weapons can kill... Here is an example sweat.gif

Robot Cannon Goes Berserk, Kills 9
SUSDeadlocks
post May 31 2010, 06:28 PM

n00b
*****
Senior Member
943 posts

Joined: Apr 2008
From: Petaling Jaya, Selangor, Malaysia.


QUOTE(teongpeng @ May 30 2010, 04:42 PM)
The robot oughta find another way to prevent the harm from being done in the first place. Duh. You're not very good at problem-solving, are you?
*
This is what you've been saying for a few posts now, and it's what made me ask the question:

What is that "ANOTHER way" you're talking about?
teongpeng
post May 31 2010, 06:44 PM

Justified and Ancient
*******
Senior Member
2,003 posts

Joined: Oct 2007


QUOTE(Deadlocks @ May 31 2010, 06:28 PM)
This is what you've been saying for a few posts now, and it's what made me ask the question:

What is that "ANOTHER way" you're talking about?
*

What la... I'm not the robot... that's for the robot to figure out... maybe construct a protective barrier or something, whatever... so many ways... depends on the threat.


Added on May 31, 2010, 6:46 pm
QUOTE(robertngo @ May 31 2010, 11:19 AM)
the logical outcome is to control human so they cannot do harm to themself and others.
*

Yeah, something like how a parent would protect a kid that always gets into trouble.


This post has been edited by teongpeng: May 31 2010, 06:46 PM
SUSDeadlocks
post May 31 2010, 06:48 PM

n00b
*****
Senior Member
943 posts

Joined: Apr 2008
From: Petaling Jaya, Selangor, Malaysia.


QUOTE(teongpeng @ May 31 2010, 06:44 PM)
What la... I'm not the robot... that's for the robot to figure out... maybe construct a protective barrier or something, whatever... so many ways... depends on the threat.
*
Exactly what I've been pointing out. Unless the robot is omniscient about every human tendency toward self-harm, humans will always be able to commit suicide, causing the robot to fail the First Law, and leading to the rebellion I was talking about earlier.
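
You can even simulate the point (hypothetical Python; the sensor range and probabilities are invented): give the robot a finite sensor range, and some humans in danger will always fall outside it.

CODE
# Hypothetical simulation: the robot can only prevent harm it perceives.
# Humans in danger beyond its sensor range come to harm anyway, which the
# First Law counts as a failure "through inaction".

import random

random.seed(1)
SENSOR_RANGE = 50  # the robot perceives humans within 50 distance units

humans = [
    {"distance": random.uniform(0, 200), "in_danger": random.random() < 0.2}
    for _ in range(100)
]

failures = sum(
    1 for h in humans if h["in_danger"] and h["distance"] > SENSOR_RANGE
)
print("First Law failures through inaction:", failures)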
teongpeng
post May 31 2010, 06:48 PM

Justified and Ancient
*******
Senior Member
2,003 posts

Joined: Oct 2007


QUOTE(Deadlocks @ May 31 2010, 06:48 PM)
Exactly what I've been pointing out. Unless the robot is omniscient about every human tendency toward self-harm, humans will always be able to commit suicide, causing the robot to fail the First Law, and leading to the rebellion I was talking about earlier.
*

I don't understand la... how can failing the First Law result in rebellion? rclxub.gif


Added on May 31, 2010, 7:02 pm
rule number 1: robots may not harm humans.
rule number 2: robots must protect humans.
rule number 3: when there is a clash between rule 1 and rule 2, rule 1 overrides.

There. Isn't that foolproof? See, dude... the problem is in the logic.
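
In code the precedence would look something like this (hypothetical Python sketch; the only point is that rule 1 always wins the clash):

CODE
# Hypothetical sketch of rule 3: when protecting someone would require
# harming another human, rule 1 overrides and that action is rejected.

def permitted(action):
    return not action["harms_human"]  # rule 1 always wins a clash

def choose(options):
    # rule 2: prefer a permitted protective action; otherwise do nothing
    for action in options:
        if permitted(action) and action["protects_human"]:
            return action["name"]
    return "do_nothing"

options = [
    {"name": "use_force_on_attacker", "harms_human": True,  "protects_human": True},
    {"name": "call_for_help",         "harms_human": False, "protects_human": True},
]
print(choose(options))  # -> call_for_help; with only harmful options, it does nothing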

This post has been edited by teongpeng: May 31 2010, 07:02 PM
nice.rider
post May 31 2010, 10:35 PM

Getting Started
**
Junior Member
109 posts

Joined: Aug 2009
QUOTE(robertngo @ May 31 2010, 11:19 AM)
do you think matter acts on mind or mind acts on matter.
*

Very good question. Can we put a rock and Tuesday together? Can we find "pi, 3.142" somewhere in space? Does it make sense to say I own the number seven?

Seven is an abstract idea. It can be shown on a monitor screen, as a toy number seven, or written on a blackboard. We cannot say the chalk (matter) gives rise to seven, as seven is an idea and non-physical.

Similarly, we cannot simply adopt the materialist/reductionist approach of saying that matter gives rise to mind.

Matter is physical, occupies space, and can be located, while mind is a holistic idea that cannot be located in space and cannot be measured.

To put it more philosophically: had the Latin language never been documented, and no Latin speaker existed today, we would normally say that the Latin language is dead.

But we cannot say that we would find the Latin language (an abstract idea) in the physical body of a Latin speaker (matter); that is the wrong approach.

The Latin language is a holistic idea and cannot be equated with a physical body (matter) by reductionism.

The same can be said of mind and matter. Mind is a holistic, abstract idea of thought and cannot be bridged directly to physical matter by reductionism.

Consider the following question: is the mind attached to a single brain cell, a group of cells, or the entire brain? It is the wrong question to begin with.
Lamb Of Dog
post May 31 2010, 10:38 PM

Casual
***
Junior Member
387 posts

Joined: Oct 2008
Gotta remind myself to get a Terminator bodyguard, preferably the T-101 model with Arnold Schwarzenegger skin.
robertngo
post Jun 1 2010, 11:55 AM

Look at all my stars!!
*******
Senior Member
4,027 posts

Joined: Oct 2004


QUOTE(nice.rider @ May 31 2010, 10:35 PM)
Very good question. Can we put a rock and Tuesday together? Can we find "pi, 3.142" somewhere in space? Does it make sense to say I own the number seven?

Seven is an abstract idea. It can be shown on a monitor screen, as a toy number seven, or written on a blackboard. We cannot say the chalk (matter) gives rise to seven, as seven is an idea and non-physical.

Similarly, we cannot simply adopt the materialist/reductionist approach of saying that matter gives rise to mind.

Matter is physical, occupies space, and can be located, while mind is a holistic idea that cannot be located in space and cannot be measured.

To put it more philosophically: had the Latin language never been documented, and no Latin speaker existed today, we would normally say that the Latin language is dead.

But we cannot say that we would find the Latin language (an abstract idea) in the physical body of a Latin speaker (matter); that is the wrong approach.

The Latin language is a holistic idea and cannot be equated with a physical body (matter) by reductionism.

The same can be said of mind and matter. Mind is a holistic, abstract idea of thought and cannot be bridged directly to physical matter by reductionism.

Consider the following question: is the mind attached to a single brain cell, a group of cells, or the entire brain? It is the wrong question to begin with.
*
You really did not answer the question.

ComposMentis
post Jun 2 2010, 12:12 AM

Casual
***
Junior Member
420 posts

Joined: May 2010
QUOTE(Lamb Of Dog @ May 31 2010, 10:38 PM)
Gotta remind myself to get a Terminator bodyguard, preferably the T-101 model with Arnold Schwarzenegger skin.
*
I prefer the T-X.
Hot, yet powerful enough to annihilate loads of enemies that stand before her smile.gif

 

Change to:
| Lo-Fi Version
0.0567sec    0.30    5 queries    GZIP Disabled
Time is now: 25th November 2025 - 10:43 PM