
Science: Will the Terminator-style doomsday ever happen? A question about AI & robotics

TSBeastboy
post May 18 2010, 10:56 AM, updated 16y ago

Getting Started
**
Junior Member
242 posts

Joined: Nov 2009


Do you think it's possible for machines to control human life one day?

We are already surrendering control of our lives to machines bit by bit, from the ECU in your car to autopilot software to the life-support machines in the hospital.

Isaac Asimov wrote the 3 laws of robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
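Just for fun, the priority ordering in the three laws can be sketched as a toy lexicographic rule check. This is purely illustrative; the flags and function name are made up, and real AI safety is nothing this simple:

```python
# Toy sketch of Asimov's priority ordering: pick the candidate action
# that best satisfies the laws, with the First Law dominating the Second
# and the Second dominating the Third. All names and flags are hypothetical.

def choose_action(actions):
    """Each action is a dict of boolean flags; missing flags default to False."""
    def score(a):
        return (
            not a.get("harms_human", False),   # First Law outranks everything
            a.get("obeys_order", False),       # Second Law, unless it conflicts
            a.get("self_preserving", False),   # Third Law, lowest priority
        )
    return max(actions, key=score)

# An order that would harm a human loses to refusing the order:
obey = {"name": "obey", "obeys_order": True, "harms_human": True}
refuse = {"name": "refuse", "self_preserving": True}
print(choose_action([obey, refuse])["name"])  # refuse
```

Python compares the score tuples element by element, which is exactly the "except where such orders would conflict" structure of the laws.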

But that's sci-fi. In real life, there are no such rules. Unlike cloning, you can develop AI, robotics and internet computing into anything you fancy. Nothing stops you from developing a monster robot or software that takes down everything connected to the internet, and probably a few countries along with it.

So should developers be subject to strict ethical rules? Who's going to police it and stop rogue developers from unleashing bots that create havoc in other people's lives?

More importantly, do you think the tipping point will happen? As in the day when machines and software lock out humans and start doing their own thing, out of control, and even impose control over us for our own good - the Terminator scenario? (actually self-replicating viruses and worms are already going out of control, causing economic damage...)


robertngo
post May 18 2010, 11:16 AM

Look at all my stars!!
*******
Senior Member
4,027 posts

Joined: Oct 2004


QUOTE(Beastboy @ May 18 2010, 10:56 AM)
Do you think its possible for machines to control human life one day? [snipped]
*
even if it is to happen, it would be decades before AI is smarter than humans. we are still one or two decades away from building a supercomputer that can completely simulate the human brain. even if robots become self-aware, it is still not likely that they will all band together and be able to gain control of all the computer systems.

i think a more likely robot doomsday is self-replicating nanobots getting out of control and consuming all matter on earth. the grey goo scenario.
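The grey goo scenario is scary mainly because of exponential doubling. A back-of-the-envelope sketch (the bot mass and replication rate are pure assumptions, not measured figures):

```python
import math

EARTH_MASS_KG = 5.97e24   # approximate mass of the Earth
BOT_MASS_KG = 1e-15       # hypothetical 1-femtogram nanobot
HOURS_PER_DOUBLING = 1.0  # assumed replication time

# Doublings needed for one bot's descendants to equal Earth's mass.
doublings = math.ceil(math.log2(EARTH_MASS_KG / BOT_MASS_KG))
print(doublings)                            # 133 doublings
print(doublings * HOURS_PER_DOUBLING / 24)  # under 6 days at one doubling per hour
```

Of course, real replication would be limited by energy and raw materials long before that, which is why grey goo is usually treated as a thought experiment rather than a forecast.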

SUSslimey
post May 18 2010, 11:20 AM


*******
Senior Member
6,914 posts

Joined: Apr 2007
eventually there will be a point where AI is roughly equal to humans....
by that time what would humans do?
humans can still increase their mental power....
or humans can join them and be cyborgs yeah...
or humans can try to make sure that it does not happen....
noobfc
post May 18 2010, 11:24 AM

Peanuts
*****
Senior Member
753 posts

Joined: Jan 2008



it could happen, depending on how fast we can develop the tech for advanced AI

but i think ppl will implement fail-safes to prevent this from occurring
devil_x
post May 18 2010, 11:29 AM

Casual
***
Junior Member
483 posts

Joined: Jan 2003
From: some where..some place

i think, before we get a robotic doomsday, we might get an environmental doomsday. worst of all, we might get a bioweapon doomsday before we even need to worry about a robotic/AI doomsday. compared to robots controlling humans, im more concerned about bioweapons that are capable of killing targeted humans with specific DNA or specific biological attributes, or a biological weapon that can completely disintegrate a human body, leaving no trace or evidence of its existence.

as for a terminator/matrix style doomsday, it's due to human arrogance and ignorance. humans are the architects of their own demise ("The Second Renaissance Part 1", Animatrix). in every doomsday scenario, MAN is always the reason behind it. sadly speaking, im not too optimistic that MAN will avoid any of the doomsdays they can come up with.
robertngo
post May 18 2010, 11:31 AM

Look at all my stars!!
*******
Senior Member
4,027 posts

Joined: Oct 2004


QUOTE(slimey @ May 18 2010, 11:20 AM)
eventually there will be a point where AI is closely equal to human's.... [snipped]
*
some futurists believe that the next step of human evolution is to merge our brains with computer chips that enhance our mental capability.
noobfc
post May 18 2010, 11:33 AM

Peanuts
*****
Senior Member
753 posts

Joined: Jan 2008



QUOTE(robertngo @ May 18 2010, 11:31 AM)
some futurist believe that the next step of human evolution is to merge our brain with computer chip that enhance our mental capability.
*
then there's a lot of problems to overcome.... reminds me of Ghost in the Shell XD
dreamer101
post May 18 2010, 11:36 AM

10k Club
Group Icon
Elite
15,855 posts

Joined: Jan 2003
QUOTE(Beastboy @ May 18 2010, 10:56 AM)
Do you think its possible for machines to control human life one day? [snipped]
*
Beastboy,

WHY does it matter??

Most human beings CHOOSE to live like robots anyhow. They CHOOSE not to control their lives. Hence, why does it matter whether they are controlled by a ROBOT or whatever??

Most human beings do not live. They ONLY exist.

Dreamer
communist892003
post May 18 2010, 12:11 PM

On my way
****
Senior Member
550 posts

Joined: Dec 2008


QUOTE(dreamer101 @ May 18 2010, 12:36 PM)
Most human beings CHOOSE to live like robot anyhow.. They CHOOSE not to control their lives. [snipped]
*
Guess what, I agree.
azerroes
post May 18 2010, 01:44 PM

No sorcery lies beyond my grasp
******
Senior Member
1,105 posts

Joined: Sep 2009


i wonder if our world will vanish before we can achieve our hi-tech imaginations
TSBeastboy
post May 18 2010, 02:44 PM

Getting Started
**
Junior Member
242 posts

Joined: Nov 2009


QUOTE(robertngo @ May 18 2010, 11:16 AM)
even it is to happen it would be decades away from AI to be smarter that human. we are still one or two decades away from building a supercomputer that to completely simulate the human brain.
*
IMO, a machine does not need to simulate the human brain or be smarter than it to control humans. It just needs to be smart enough to take over missile launching systems, trip the power stations and shut off water supply.

It's very easy to deny humans control over such facilities. Just embed some code that changes people's passwords. When they're locked out, the machine will be self-operating until it runs out of juice... which could be decades if it draws power from nuclear sources.

A few months ago the US issued an internal security alert after discovering how easily their infrastructure can be crippled because of bad computer code... whether done intentionally or not. The scenario is easy to imagine from a programmer's point of view.

robertngo
post May 18 2010, 03:53 PM

Look at all my stars!!
*******
Senior Member
4,027 posts

Joined: Oct 2004


QUOTE(Beastboy @ May 18 2010, 02:44 PM)
IMO, a machine does not need to simulate the human brain or be smarter than it to control humans. [snipped]
*
critical sites like missile silos, power stations and water supplies are not networked together and not connected to the internet. the sites use hard-to-use industrial control software that is not compatible between vendors. it is possible for a hacker, or a really advanced AI in the future, to gain access to one facility at a time, but to gain access to all of them at the same time like the scenario in Die Hard 4, you would need to be physically onsite to take control of these control systems. maybe the hacker can get lucky with a site that doesn't have proper network security, where a PC connected to the internal network is also connected to the internet, and use that PC to gain access. but if there is no access, then no matter how advanced the AI, it will not be able to hack the system.

faceless
post May 18 2010, 04:18 PM

Straight Mouth is Big Word
*******
Senior Member
4,515 posts

Joined: Mar 2010
QUOTE(azerroes @ May 18 2010, 01:44 PM)
i wonder if our world will vanished before we can achieve our hi-tech imagination
*
Some people just love to use imagination to take science further. It's called progress through dreams. Boys playing alien or astronaut, you know. It's fun. We are still not done thinking about teleportation or the little hand-held laser gun. Oops, thanks for bringing us back to reality, Azerroes. If mother earth can sustain us that long.
TSBeastboy
post May 18 2010, 04:38 PM

Getting Started
**
Junior Member
242 posts

Joined: Nov 2009


QUOTE(robertngo @ May 18 2010, 03:53 PM)
the critical site like missile silo, power station, water supply, are not networked together and not connected to the internet. [snipped]
*
Actually public utilities do use the internet... using VPNs to send telemetry, billing information and so on, especially when their plants are distributed. They use a public network like the internet for cost reasons (the cost of laying your own fiber station to station, branch to branch is crazy) & the belief is that VPN technology is secure enough to stay private. But security and encryption is a never-ending battle.

Weapon systems... most mobile launch systems, ships & submarines use encrypted RF or microwave. Again, the moment you broadcast a signal in the open, you open a window to intrusion. The security strength is not much different from a VPN... sometimes it's only as good, or as bad, as the password the operator uses. Give password hammer software enough time and it can break in.
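On the "give it enough time" point, the time to exhaust a password keyspace is easy to estimate. The attacker speed and password policy below are assumptions for illustration, not figures for any real system:

```python
ALPHABET_SIZE = 62        # a-z, A-Z, 0-9
PASSWORD_LENGTH = 8
GUESSES_PER_SECOND = 1e9  # assumed "password hammer" speed

# Total candidates, and worst-case time to try every one of them.
keyspace = ALPHABET_SIZE ** PASSWORD_LENGTH
worst_case_days = keyspace / GUESSES_PER_SECOND / 86400
print(keyspace)          # 218340105584896 candidates
print(worst_case_days)   # about 2.5 days to try them all
```

Each extra character multiplies the work by another factor of 62, which is why password length matters more than complexity rules.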

Proprietary systems as a wall, yes this can work, but we must remember these systems are rarely islands. To shut down a proprietary bank system, you don't need to shut down their computer. You shut down the power station that supplies power to the computer. Unless the station is operated by the bank, it falls under the public domain and is vulnerable to the usual public risks. You don't even need AI to break in.

Power stations are the most vulnerable because one small failure can lead to a nationwide cascading failure, like what Malaysia suffered in 1996.

This hidden interconnectedness between public and private domains is probably what caused the US DOD to issue their warning.


robertngo
post May 18 2010, 05:05 PM

Look at all my stars!!
*******
Senior Member
4,027 posts

Joined: Oct 2004


QUOTE(Beastboy @ May 18 2010, 04:38 PM)
Actually public utilties do use the internet... using VPN to send telemetry, billing information and so on. [snipped]
*
the billing information is not connected to the control system that runs the plant, and telemetry to the outside world should be just a one-way transfer.

as for missile launches, there will be a network to the silo or remote launch unit, but the launch system is much more securely built than other systems. security studies believe a hacker would need to trick the personnel in charge into launching the nuke, by sending false info to the monitoring system so the person in charge launches in panic. it is very unlikely they can take control of the launch system itself.

financial institutions all have regulations that require them to have a disaster recovery site. Bank Negara requires that systems be able to switch to DR within a few minutes, and every year they run drills to confirm the DR procedure is working.

the biggest risk to any organization is people, even when the person does not mean to do harm to the system. he could just be a bored operator who connects his pc to the internet, and that gives the hacker the weak link to break in.

This post has been edited by robertngo: May 18 2010, 05:08 PM
TSBeastboy
post May 18 2010, 05:40 PM

Getting Started
**
Junior Member
242 posts

Joined: Nov 2009


QUOTE(robertngo @ May 18 2010, 05:05 PM)
the biggest risk to any organization are people, even if the person does not mean to do harm to the system. he could just be an bored operator who connect his pc to the internet, this will get hacker the weak link to break in.
*
Yes I totally agree with you on that one, because there are still people who keep their PIN together with the ATM card in their wallet.

Ok, let's say I agree that power stations, banks, and military applications today are all hacker-proof. Let me get back to the main point of my question, which is: should the developers of AI, robotics and software be subject to strict ethical rules about what they can and shouldn't develop, to prevent a terminator-style scenario from ever happening?

If it sounds far-fetched, think of the restrictions they're already putting on cloning. Maybe people are afraid it might lead to Frankenstein, so some countries actually impose restrictions on that science. If they can do that, wouldn't they eventually do the same to AI and robotics development too? And most importantly, do you think such restrictions would be justifiable?


robertngo
post May 18 2010, 06:56 PM

Look at all my stars!!
*******
Senior Member
4,027 posts

Joined: Oct 2004


QUOTE(Beastboy @ May 18 2010, 05:40 PM)
Ok lets say I agree that power stations, banks, and military applications today are all hacker proof. [snipped]
*
it is not that the systems are hacker-proof, it is just impossible for someone or a machine to have access to all of them, or to a large enough number of systems to destroy the world. it is hard enough to hack into just one.

i think if computers in the future are advanced enough to reach self-awareness, there will need to be a bill of rights for the machines; if not, they might get really pissed off and start a war with humans

there are no current machines in service that can decide to fire weapons on their own; there is always a person remotely controlling it. giving autonomy to robots in battle is still a subject of debate, and the technology is also still decades away from being battle-ready. the worst thing you can have is the robot identifying your own troops as targets and wiping them out

http://news.bbc.co.uk/2/hi/technology/8182003.stm
TSBeastboy
post May 18 2010, 08:38 PM

Getting Started
**
Junior Member
242 posts

Joined: Nov 2009


The doomsday scenario can happen without guns or missiles. A war can be triggered by sabotaging the economy and public infrastructure. The intruder doesn't need simultaneous access to every computer to do this either. It just needs access to one machine, one weak point that can trip other systems and force exception routines to cascade the attack. Human chaos will do the rest.
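That one-weak-point idea can be shown with a toy cascade model (illustrative only; real grids are vastly more complex). Each station carries a load within a capacity, and when one trips, its load spills onto the survivors:

```python
def cascade(loads, capacities, first_failure):
    """Return the set of stations that end up tripped, starting from one failure.
    Shed load from failed stations is shared equally among the survivors."""
    failed = {first_failure}
    while True:
        alive = [i for i in range(len(loads)) if i not in failed]
        if not alive:
            return failed  # total blackout
        shed = sum(loads[i] for i in failed) / len(alive)
        newly = {i for i in alive if loads[i] + shed > capacities[i]}
        if not newly:
            return failed  # cascade contained
        failed |= newly

# Four stations running near their limits: one trip takes down the whole grid...
print(cascade([4, 4, 4, 4], [5, 5, 5, 5], 0))      # {0, 1, 2, 3}
# ...but with headroom, the same trip is absorbed.
print(cascade([4, 4, 4, 4], [10, 10, 10, 10], 0))  # {0}
```

The point of the sketch is that the attacker only needs to trip station 0; the overload does the rest when margins are thin.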

On whether machines will eventually reach self-awareness, I'd be interested to see how they define "self-aware", and whether a home alarm using motion sensors could be classified as self-aware. The question still stands though... should developers be allowed to go all the way and build intelligent systems without any ethical controls?

Thanks for posting the BBC link. Interesting article that reads like the beginnings of Skynet.


robertngo
post May 18 2010, 11:28 PM

Look at all my stars!!
*******
Senior Member
4,027 posts

Joined: Oct 2004


QUOTE(Beastboy @ May 18 2010, 08:38 PM)
The doomsday scenario can happen without guns or missiles. [snipped]
*
if and when machines reach self-awareness, it will be like another person capable of individual thought; talking to it will convince you that you are talking to a real person, and to all intents and purposes it is a real person. it will learn ethics, not have them coded into memory.

of course, for now the machines we make to be semi-autonomous will need fail-safe routines and manual overrides programmed in. i don't think they will put semi-autonomous robots with weapons into service any time soon, unless semi-autonomy has been proven to work in other support roles like logistics. the success of robots like BigDog will pave the way to the future. i for one welcome our robot overlords.

http://www.bostondynamics.com/robot_bigdog.html


This post has been edited by robertngo: May 18 2010, 11:29 PM
cherroy
post May 19 2010, 12:53 AM

20k VIP Club
Group Icon
Staff
25,802 posts

Joined: Jan 2003
From: Penang


So far, no matter how high the self-awareness of an AI, it cannot beat the human brain.

That is because the self-awareness of an AI is built upon receiving information, processing it, and reacting to it according to whatever presets or programs the AI has built in. In other words, no matter how flexible the AI and its self-awareness, it cannot beat the human factors of creativity and flexibility. After all, it is the human brain that created the AI.

Whatever an AI does is rigid, based on the programs and algorithms set, while a human is not. The human factor has creativity, and can always take in new input for constant self-improvement etc.

