Science: Will the Terminator-style doomsday ever happen? A question about AI & robotics
robertngo
post May 18 2010, 11:16 AM

Look at all my stars!!
*******
Senior Member
4,027 posts

Joined: Oct 2004


QUOTE(Beastboy @ May 18 2010, 10:56 AM)
Do you think it's possible for machines to control human life one day?

We are already surrendering control of our lives to machines bit by bit, from the ECU in your car to autopilot software to the health support machines in the hospital.

Isaac Asimov wrote the 3 laws of robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

But that's sci-fi. In real life, there are no such rules. You can develop AI, robotics and internet computing to anything you fancy, unlike cloning. Nothing stops you from developing a monster robot or software that takes down everything connected to the internet, and probably a few countries along with it.

So should developers be subject to strict ethical rules? Who's going to police it and stop rogue developers from unleashing bots that create havoc in other people's lives?

More importantly, do you think the tipping point will happen? As in the day when machines and software lock out humans and start doing their own thing, out of control, and even impose control over us for our own good - the Terminator scenario? (actually self-replicating viruses and worms are already going out of control, causing economic damage...)
*
Even if it is to happen, it would be decades away, since AI has yet to become smarter than humans. We are still a decade or two from building a supercomputer that can completely simulate the human brain. And even if robots become self-aware, it is still not likely that they would all band together and be able to gain control of every computer system.

I think a more likely robot doomsday is self-replicating nanobots getting out of control and consuming all matter on Earth: the grey goo scenario.
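A back-of-envelope sketch of why the grey goo scenario is scary: unchecked self-replication is exponential, so the time to consume everything grows only logarithmically with the target mass. All figures below (biomass, nanobot mass, doubling time) are made-up illustrative assumptions, not measurements.

```python
import math

# Illustrative assumptions only: none of these figures are measurements.
EARTH_BIOMASS_KG = 5.5e14   # assumed total biomass of Earth, ~550 gigatonnes
NANOBOT_MASS_KG = 1e-15     # assumed mass of a single nanobot (femtogram scale)
DOUBLING_HOURS = 1.0        # assumed time for the swarm to double in mass

def hours_to_consume(start_bots: int = 1) -> float:
    """Hours until the swarm's mass equals Earth's biomass, assuming
    perfectly unchecked doubling every DOUBLING_HOURS."""
    doublings = math.log2(EARTH_BIOMASS_KG / (start_bots * NANOBOT_MASS_KG))
    return doublings * DOUBLING_HOURS
```

Under these toy numbers a single bot needs only ~99 doublings, about four days; starting with a million bots shaves off merely ~20 hours, which is why exponential growth makes the scenario so hard to intuit.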

robertngo
post May 18 2010, 11:31 AM



QUOTE(slimey @ May 18 2010, 11:20 AM)
Eventually there will be a point where AI is roughly equal to humans...
by that time, what will humans do?
Humans can still increase their mental power...
or humans can join them and become cyborgs, yeah...
or humans can try to make sure it never happens...
*
Some futurists believe that the next step of human evolution is to merge our brains with computer chips that enhance our mental capability.
robertngo
post May 18 2010, 03:53 PM



QUOTE(Beastboy @ May 18 2010, 02:44 PM)
IMO, a machine does not need to simulate the human brain or be smarter than it to control humans. It just needs to be smart enough to take over missile launching systems, trip the power stations and shut off water supply.

It's very easy to deny humans control over such facilities. Just embed some code that changes people's passwords. When they're locked out, the machine will be self-operating until it runs out of juice... which can be decades if it draws power from nuclear sources.

A few months ago the US issued an internal security alert after discovering how easily their infrastructure can be crippled because of bad computer code... whether done intentionally or not. The scenario is easy to imagine from a programmer's point of view.
*
Critical sites like missile silos, power stations and water supplies are not networked together and are not connected to the internet. These sites use hard-to-use industrial control software that is not compatible from one site to the next. It is possible for a hacker, or a really advanced AI in the future, to gain access to one facility at a time, but to gain access to all of them at the same time, like the scenario in Die Hard 4, you would need to be physically on-site to take control of these control systems. Maybe the hacker can get lucky with some site that doesn't have proper network security, where a PC connected to the internal network is also connected to the internet, and use that PC to gain access. But if there is no access, then no matter how advanced the AI, it will not be able to hack the system.

robertngo
post May 18 2010, 05:05 PM



QUOTE(Beastboy @ May 18 2010, 04:38 PM)
Actually public utilities do use the internet... using VPN to send telemetry, billing information and so on, especially when their plants are distributed. They use a public network like the internet for cost reasons (the cost to lay your own fiber station to station, branch to branch is crazy) & the belief is that VPN technology is secure enough to stay private. But security and encryption is a never-ending battle.

Weapon systems... most mobile launch systems, ships & submarines use encrypted RF or microwave. Again, the moment you broadcast a signal in the open, you open a window to intrusion. The security strength is not much different from VPN... sometimes it's only as good, or as bad, as the password the operator uses. Give a password-hammering program enough time and it can break in.

Proprietary systems as a wall, yes this can work but we must remember, these systems are rarely islands. To shut down a proprietary bank system, you don't need to shut down their computer. You shut down the power station that supplies power to the computer. Unless the station is operated by the bank, it falls under public domain and vulnerable to the usual public risks. You don't even need AI to break in.

Power stations are the most vulnerable because one small failure can lead to a nationwide cascade failure, like what Malaysia suffered in 1996.

This hidden interconnectedness between public and private domains is probably what caused the US DOD to issue their warning.
*
The billing information is not connected to the control systems running the plant, and telemetry to the outside world should be a one-way transfer.

As for missile launches, there will be a network to the silo or remote launch unit, but the launch systems are built far more securely than other systems. Security researchers believe a hacker would need to trick the personnel in charge into launching the nukes, by feeding false information to the monitoring systems so that the person in charge launches in a panic. It is very unlikely they could take control of the launch system itself.

Financial institutions are all regulated to maintain a disaster recovery site. Bank Negara requires that systems be able to switch to DR within a few minutes, and every year they run drills to confirm the DR procedure works.

The biggest risk to any organization is people, even when the person means no harm to the system. He could just be a bored operator who connects his PC to the internet, giving hackers the weak link they need to break in.

This post has been edited by robertngo: May 18 2010, 05:08 PM
robertngo
post May 18 2010, 06:56 PM



QUOTE(Beastboy @ May 18 2010, 05:40 PM)
Yes, I totally agree with you on that one because there are still people who keep their PIN together with the ATM card in their wallet.  laugh.gif

Ok, let's say I agree that power stations, banks, and military applications today are all hacker-proof. Let me get back to the main point of my question, which is: should the developers of AI, robotics and software be subject to strict ethical rules about what they can and shouldn't develop, to prevent a Terminator-style scenario from ever happening?

If it sounds far fetched, think of the restrictions they're already putting on cloning. Maybe people are afraid it might lead to Frankenstein so some countries actually impose restrictions on that science. If they can do that, wouldn't they eventually do the same to AI and robotics development too? And most importantly, do you think such restrictions would be justifiable?
*
It is not that the systems are hacker-proof; it is just impossible for a person or a machine to have access to all of them, or to a large enough number of them, to destroy the world. It is hard enough to hack into just one.

I think if computers in the future become advanced enough to reach self-awareness, there will need to be a bill of rights for the machines; if not, they might get really pissed off and start a war with humans sweat.gif

There is no machine currently in service that can decide to fire a weapon on its own; there is always a person remotely controlling it. Giving autonomy to robots in battle is still a subject of debate, and the technology is also still decades away from being battle-ready. The worst thing you can have is a robot identifying your own troops as targets and wiping them out.

http://news.bbc.co.uk/2/hi/technology/8182003.stm
robertngo
post May 18 2010, 11:28 PM



QUOTE(Beastboy @ May 18 2010, 08:38 PM)
The doomsday scenario can happen without guns or missiles. A war can be triggered by sabotaging the economy and public infrastructure. The intruder doesn't need simultaneous access to every computer to do this either. It just needs access to one machine, one weak point that can trip other systems and force exception routines to cascade the attack. Human chaos will do the rest.

On whether machines will eventually reach self awareness, I'd be interested to see how they define "self aware", see whether a home alarm using motion sensors can be classified as self aware. The question still stands though... should developers be allowed to go all the way and build intelligent systems without any ethical controls?

Thanks for posting the bbc link. Interesting article that reads like the beginnings of Skynet.  biggrin.gif
*
If and when a machine reaches self-awareness, it will be like another person, capable of individual thought. Talking to it will convince you that you are talking to a real person, and to all intents and purposes it is a real person. It will learn ethics, not have them coded into its memory.


Of course, for now the machines we build to be semi-autonomous will need fail-safe routines and manual overrides programmed in. I don't think they will put semi-autonomous robots with weapons into service any time soon, unless semi-autonomy has been proven to work in other support roles like logistics. The success of robots like BigDog will pave the way to that future. I, for one, welcome our robot overlords. laugh.gif

http://www.bostondynamics.com/robot_bigdog.html


This post has been edited by robertngo: May 18 2010, 11:29 PM
robertngo
post May 19 2010, 09:10 AM



QUOTE(cherroy @ May 19 2010, 12:53 AM)
So far, no matter how high the self-awareness of an AI gets, it cannot beat the human brain.

That is because the self-awareness of an AI is built on receiving information, processing it, and reacting to it according to whatever presets and programs are built in. No matter how flexible the AI and its self-awareness, it cannot beat the human factors of creativity and flexibility. After all, it is the human brain that created the AI.  biggrin.gif

In other words, an AI is rigid, bound by the programs and algorithms set for it, while a human is not.
The human factor has creativity, and can always take in new input for constant self-improvement, etc.
*
If a machine became truly self-aware, it would respond to information with its own judgement, not a preset program. The massive challenge is to replicate the biological functions of the brain, including creativity, on non-biological components; the machine would find its own solutions to problems. Just hope the problems don't include terminating all the pesky humans laugh.gif

http://www.consciousness.it/CAI/CAI.htm

This post has been edited by robertngo: May 19 2010, 09:10 AM
robertngo
post May 20 2010, 09:15 AM



QUOTE(teongpeng @ May 20 2010, 02:13 AM)
Its own logic is input by us, so it's still our fault if we input the wrong logic, like the one you described in the movie 'I, Robot'.
Faulty programming. What's the big deal... it happens even today.
*
A truly self-aware machine would make decisions on its own. Scientists are actively working on robots that can learn to do things rather than being programmed to do them, so they can solve problems they have not been programmed for. So it could happen that, when the machine learns of all the evil that has been done by humans, extermination looks like the only logical thing to do. icon_question.gif


This post has been edited by robertngo: May 20 2010, 09:16 AM
robertngo
post May 20 2010, 07:31 PM



QUOTE(Beastboy @ May 20 2010, 04:52 PM)
Would it be accurate to assume that artificially-spawned self awareness is going to be the same as human self-awareness?

Humans are carbon based. Computers are silicon based. If left in the wild, can we assume that a silicon-based brain will develop consciousness the same way as a carbon-based brain, and adopt the same priorities in its existence?
*
No one can know until it is developed. But scientists always model intelligence on human intelligence, so if there is ever a day a machine reaches self-awareness, I believe it will most likely be modelled on human brain function.
robertngo
post May 20 2010, 10:36 PM



QUOTE(Darkripper @ May 20 2010, 10:16 PM)
Even if it happens... what if we launch an EMP at them?
*
Electronics can be shielded from EMP. Military hardware is built to specs that include EMP protection, so if the robots in this case are made by the military, they will most likely already be hardened against EMP.
robertngo
post May 21 2010, 09:09 AM



QUOTE(VMSmith @ May 21 2010, 04:31 AM)
I wish I could be that optimistic. The US Army has already deployed remote-controlled drones to blow up terrorists/civilians in Iraq and Afghanistan. It'll probably be just a matter of decades until the remote control is made redundant.
*
Even if an autonomous robot is developed and proven effective, I don't think the army would let it decide on kill targets on its own; there will always be remote control by an operator.


Added on May 21, 2010, 9:20 am
QUOTE(Darkripper @ May 21 2010, 03:03 AM)
I thought an EMP can disable most electric circuits for a period of time... btw, does this mean EMP is useless nowadays?
*
It will disable civilian electronics, but military hardware is not likely to be harmed if it has already been hardened. The US military's EMP protection has dropped in the years since the Cold War. The effects of EMP are well known, and equipment can be repaired easily if spare parts are available.

This post has been edited by robertngo: May 21 2010, 09:20 AM
robertngo
post May 21 2010, 09:22 AM



QUOTE(Frostlord @ May 21 2010, 03:12 AM)
@TS, I think your question is a bit too broad, as there is no deadline.

In 100,000 years to come, anything could happen, unless you are suggesting that humans destroy ourselves (or an alien invasion does) before we are destroyed by AI.

Well, for AI destruction, it is quite a long way to go. As we can see from our current tech, we have nothing that is even 10% Terminator (Asimo is like 0.000001% Terminator biggrin.gif).
An alien invasion could happen anytime; heck, it could even happen tomorrow, because there is no way to rule out aliens reaching our planet soon (a few decades?). But we can be sure that in a few decades the Terminator will not exist yet.
*
By 2050, computers are projected to have the processing power of the brain, so it is coming in just a few decades.
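The "2050" figure comes from Kurzweil-style extrapolation: take an estimate of the brain's processing rate, an estimate of what a fixed budget buys today, and assume Moore's-law doubling. Both estimates below are round-number assumptions of the kind widely quoted around 2010, not established facts.

```python
import math

# Assumed round numbers, in the spirit of Kurzweil's extrapolations:
BRAIN_OPS_PER_SEC = 1e16      # assumed brain-equivalent operations per second
OPS_PER_BUDGET_2010 = 1e9     # assumed ops/s that a fixed budget bought in 2010
DOUBLING_YEARS = 2.0          # assumed Moore's-law doubling period

def year_of_parity(start_year: int = 2010) -> float:
    """Year when the assumed budget buys brain-equivalent computation,
    if the doubling trend holds indefinitely."""
    doublings = math.log2(BRAIN_OPS_PER_SEC / OPS_PER_BUDGET_2010)
    return start_year + doublings * DOUBLING_YEARS
```

With these inputs the crossover lands in the mid-2050s; nudge any assumption by a factor of ten and the date moves by about seven years, which is why such forecasts carry wide error bars.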
robertngo
post May 21 2010, 09:43 AM



QUOTE(celicaizpower @ May 21 2010, 09:36 AM)
Dear TS,

Above all the things you said about the 3 rules, the first basic thing is to make the machine understand what we are typing to it.

So far, no such machine exists, unless I am unaware of it.
*
All machines now do exactly what they are told, and only what they are told. The next step is to have machines that can learn to do things on their own.
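The "told vs. taught" distinction can be made concrete with the oldest learning algorithm there is: a perceptron is never given the rule for logical AND, only examples, and its update rule finds the weights by itself. A minimal sketch in plain Python; the learning rate and epoch count are illustrative choices.

```python
# Minimal example of a machine "learning" rather than being told:
# a perceptron derives the weights for logical AND from examples alone.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), target) pairs with 0/1 targets."""
    w1 = w2 = bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
            err = target - out
            # the update rule, not the programmer, sets the weights
            w1 += lr * err * x1
            w2 += lr * err * x2
            bias += lr * err
    return w1, w2, bias

AND_SAMPLES = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(AND_SAMPLES)

def predict(x1, x2):
    return 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
```

After training, the learned weights classify all four inputs correctly, even though no line of the program states what AND means.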
robertngo
post May 21 2010, 10:02 AM



QUOTE(Frostlord @ May 21 2010, 09:48 AM)
I think he meant we humans can reproduce unlimited times, but robots are made of materials, and there is a limited amount of that material on Earth. Therefore, one day there won't be enough material to make robots anymore, just like our oil in a few years to come.
Source? If no, tell me this back in 2050.
*
Check out the Blue Brain Project; they expect to be able to simulate the brain within 10 years. By 2050, we may be able to upload our brains and live forever.

http://en.wikipedia.org/wiki/Blue_Brain_Project

http://www.kurzweilai.net/articles/art0157.html?printable=1

An out-of-control, self-replicating nanobot swarm could consume the entire Earth in a grey goo scenario.

This post has been edited by robertngo: May 21 2010, 10:04 AM
robertngo
post May 21 2010, 11:43 PM



QUOTE(VMSmith @ May 21 2010, 02:30 AM)
QUOTE(Battlestar Galactica)
The Cylons were created by Man.

They rebelled.

They evolved.

There are many copies.

And they have a plan.
Bring on the hot cylon babes. Just bring it. smile.gif
*
I want Number Eight wub.gif



But of course it is better to have Number Three and Number Six join in the fun wub.gif drool.gif


The Cylons in the new BSG bring up an interesting scenario: what if the machines rebel not out of a desire to dominate, but because of religious differences with humans and a wish to correct humanity's flaws?
robertngo
post May 24 2010, 03:19 PM



QUOTE(faceless @ May 24 2010, 02:57 PM)
I see it like René Descartes, no matter how biased this point may be. It is not just thinking, therefore existing. It is about the mind versus the brain. Given a prime directive, I can choose to freely deviate from it because I have a mind of my own (isn't this what humans in movies like to say when the machine goes haywire?). As someone pointed out, we don't even understand the complexities of our own mind; you expect the computer (which gets its input from us) to?
*
Even if you don't fully understand something, that doesn't mean you cannot make one. Farmers never understood how plants use nutrients and sunlight, but they grew plants for thousands of years before science finally understood how plants work.

IBM is working on recreating the entire function of the human brain within ten years. They have already simulated a cat-scale brain, an improvement over the previous rat-scale simulation, and a cell-by-cell recreation of the visual cortex has been completed. I believe we will gain a large amount of knowledge of the human brain from doing this research.



http://www.sciencedaily.com/releases/2009/...91118133535.htm

http://www.popularmechanics.com/technology...achines/4337190

http://www.sciencedaily.com/releases/2010/...00414184218.htm

The project is funded by DARPA, which wants to create an artificial brain that can operate on low power like the human brain, which uses only 20 watts. And when DARPA is involved, the potential for an artificial-brain killing machine is certainly there.

This post has been edited by robertngo: May 24 2010, 03:27 PM
robertngo
post May 24 2010, 07:49 PM



QUOTE(faceless @ May 24 2010, 03:26 PM)
You assume the mind and the brain are one, Robert. We can go through the same mind-versus-brain argument as in a previous thread. Monkeys have brains too. They do not have minds.
*
What is the difference between the mind and brain function?
robertngo
post May 25 2010, 11:32 AM



QUOTE(nice.rider @ May 25 2010, 12:21 AM)
Mind is non-physical and non-material; it is thought. Thought is not located in space, and occupies a private universe of its own. E.g. your mind belongs to you, his mind belongs to him. We cannot tap into other people's minds.

Brain is a physical organ located in space. E.g. the part of the brain that controls the optics will process the signals arriving from the retina. Technically, the entire process of optical behavior could be studied by reductionist science.

This is where the physical world meets the mental world. To study AI, scientists need to understand whether matter acts on mind or mind acts on matter. Also, to study AI, one needs to understand determinism (algorithm-based control) versus free will (how could the machine make decisions of its own?).

Let me branch out a bit. One question: how do you know your neighbour John has a mind? Is it because you have a mind and he behaves like you, so by deduction you conclude he has a mind too?

This deduction is actually an act of faith. Why? Because you could never experience his consciousness; if you could, then that person would no longer be him, he would be you... So how can you conclude that he has a mind? It appears that everyone assumes they have a mind, and also takes it on faith that others have minds too.

Now, how can we deduce that a machine (with AI capability) has a mind?

At the end of the day, science is just a prime mover for us to explain the universe. No matter how far and how well our science and technology advance, a lot of the big questions will still need to rely on philosophy and potentially metaphysics.

I think, therefore I am - René Descartes
*
There is no real reason to believe that, if we were able to simulate the complete working state of a brain, the mind would not be simulated along with it.
QUOTE
The human brain contains about 100 billion nerve cells called neurons, each individually linked to other neurons by way of connectors called axons and dendrites. Signals at the junctures (synapses) of these connections are transmitted by the release and detection of chemicals known as neurotransmitters. The established neuroscientific consensus is that the human mind is largely an emergent property of the information processing of this neural network.
Importantly, many leading neuroscientists have stated they believe important functions performed by the mind, such as learning, memory, and consciousness, are due to purely physical and electrochemical processes in the brain and are governed by applicable laws. For example, Christof Koch and Giulio Tononi wrote in IEEE Spectrum:
"Consciousness is part of the natural world. It depends, we believe, only on mathematics and logic and on the imperfectly known laws of physics, chemistry, and biology; it does not arise from some magical or otherworldly quality."[5]
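The neurons-and-synapses picture in the quote can be caricatured in a few lines with a leaky integrate-and-fire model: membrane potential accumulates from incoming spikes, leaks away over time, and an output spike fires when a threshold is crossed. A toy illustration with arbitrary parameters, nothing physiological:

```python
# Toy leaky integrate-and-fire neuron: the potential integrates incoming
# spikes, leaks each time step, and fires on crossing a threshold.
# All parameters are illustrative, not physiological.

def simulate_neuron(inputs, threshold=1.0, leak=0.9, weight=0.3):
    """inputs: sequence of 0/1 presynaptic spikes, one per time step.
    Returns the output spike train as a list of 0/1."""
    potential = 0.0
    spikes = []
    for x in inputs:
        potential = potential * leak + weight * x  # leak, then integrate
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0                        # reset after firing
        else:
            spikes.append(0)
    return spikes
```

Fed a steady input train, this unit fires once every four steps: single inputs are sub-threshold, and only their accumulated effect crosses it, which is the "information processing" a single node contributes to the network described above.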





robertngo
post May 25 2010, 02:20 PM



QUOTE(faceless @ May 25 2010, 11:36 AM)
Wow, NiceRider must be a philosophy graduate. Thanks for the explanation. It was short and sweet.

Descartes was biased in the sense that animals do not possess minds; they have brains that allow them to respond on instinct. In the case of computers, they have a set of rules and guidelines to replicate human intelligence. They don't have a mind of their own yet. Back to the question of what would cause them to have one. Don't tell me a lightning surge will cause it; as Cherroy stressed, don't quote from movies as if they are the authority.
*
What will cause them to have a mind is when we completely reverse-engineer the brain, with every single neuron and its function replicated in a supercomputer. The brain is just a massive array of interconnected neurons; I don't see why, if we recreated the workings of its 100 billion neurons and the way the human brain processes information, we would not also have recreated the human mind.
robertngo
post May 25 2010, 03:35 PM



QUOTE(Beastboy @ May 25 2010, 03:04 PM)
This is a bit off topic but the sperm whale, elephant and bottle-nosed dolphin have a larger brain mass than an adult human but their minds are not at par with ours in terms of thinking, language, etc. Does the number of neurons really determine the characteristic of the mind or is it independent?
*
The whale's brain-to-body mass ratio is not that impressive, and there are studies finding 98.2 billion non-neuronal cells in the minke whale neocortex. It is generally agreed that the growth of the neocortex during human evolution, both absolutely and relative to the rest of the brain, was responsible for the evolution of intelligence, and the neocortex of whales and dolphins is not as developed as ours.
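The brain-to-body point can be made quantitative with the encephalization quotient (EQ), which normalizes brain mass against the mass expected for a body that size. A sketch using Jerison's classic formula; the species masses are rough figures I'm assuming for illustration:

```python
# Encephalization quotient per Jerison: EQ = brain / (0.12 * body^(2/3)),
# with both masses in grams. An EQ of 1 is the mammalian baseline.

def eq(brain_g: float, body_g: float) -> float:
    return brain_g / (0.12 * body_g ** (2 / 3))

# Rough assumed masses: human ~1.35 kg brain / 65 kg body,
# sperm whale ~7.8 kg brain / 37 t body.
HUMAN_EQ = eq(1_350, 65_000)
SPERM_WHALE_EQ = eq(7_800, 37_000_000)
```

The whale's brain is nearly six times heavier, yet its EQ comes out below 1 while the human's is around 7, which is the sense in which its ratio "is not that impressive".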


Added on May 25, 2010, 3:39 pm
QUOTE(faceless @ May 25 2010, 03:25 PM)
It is not off-topic, Beastboy. The computer must have a mind of its own for it to go against the prime directive. Robert sees the mind and the brain as one and the same thing; philosophy scholars see them as separate. Animals mate whenever they are on heat: the brain responds by instinct to seek gratification, and choice of mate is irrelevant. Robert, I am sure you will not just do it with anyone when you feel horny. Unlike the animal, it is your mind that tells you to look for your wife.
*
90% of birds are monogamous while only 7% of mammals are; does this mean birds have minds and mammals don't?

It is not me who thinks the brain and the mind are the same thing; it is the current consensus among neuroscientists that the mind is the result of information processing in the network of neurons, and that the things you attribute to the mind are just electrochemical processes inside your brain. A very complex process, but not a supernatural one.

This post has been edited by robertngo: May 25 2010, 03:45 PM
