
Will the Terminator-style doomsday ever happen? (A question about AI & robotics)

TSBeastboy
post May 18 2010, 10:56 AM

Do you think it's possible for machines to control human life one day?

We are already surrendering control of our lives to machines bit by bit, from the ECU in your car, to autopilot software, to the life-support machines in hospitals.

Isaac Asimov wrote the Three Laws of Robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
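
Just to make the priority ordering concrete, here's a toy sketch in Python. The predicates like harms_human are hypothetical; no real system can actually evaluate "harm to a human":

CODE
# Toy sketch of Asimov's Three Laws as a strict priority filter.
# Every field below is a hypothetical predicate, purely for illustration.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool        # First Law: direct injury
    allows_human_harm: bool  # First Law: harm through inaction
    obeys_order: bool        # Second Law: follows a human order
    self_destructive: bool   # Third Law: endangers the robot itself

def permitted(a: Action) -> bool:
    if a.harms_human or a.allows_human_harm:
        return False                  # First Law vetoes everything
    if a.obeys_order:
        return True                   # Second Law outranks self-preservation
    return not a.self_destructive     # Third Law governs unordered acts

print(permitted(Action("shield a human from a falling beam", False, False, True, True)))  # True
print(permitted(Action("ignore a drowning swimmer", False, True, True, False)))           # False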

But that's sci-fi. In real life, there are no such rules. Unlike cloning, you can develop AI, robotics and internet computing into anything you fancy. Nothing stops you from developing a monster robot, or software that takes down everything connected to the internet, and probably a few countries along with it.

So should developers be subject to strict ethical rules? Who's going to police them and stop rogue developers from unleashing bots that wreak havoc on other people's lives?

More importantly, do you think the tipping point will come? As in, the day when machines and software lock humans out and start doing their own thing, out of control, and even impose control over us for our own good: the Terminator scenario? (Actually, self-replicating viruses and worms are already going out of control and causing economic damage...)


TSBeastboy
post May 18 2010, 02:44 PM

QUOTE(robertngo @ May 18 2010, 11:16 AM)
Even if it is to happen, it would be decades before AI becomes smarter than humans. We are still one or two decades away from building a supercomputer that can completely simulate the human brain.
*
IMO, a machine does not need to simulate the human brain or be smarter than us in order to control humans. It just needs to be smart enough to take over missile launch systems, trip the power stations and shut off the water supply.

It's very easy to deny humans control over such facilities. Just embed some code that changes people's passwords. Once they're locked out, the machine will be self-operating until it runs out of juice... which could be decades if it draws power from nuclear sources.

A few months ago, the US issued an internal security alert after discovering how easily their infrastructure could be crippled by bad computer code... whether written intentionally or not. The scenario is easy to imagine from a programmer's point of view.

TSBeastboy
post May 18 2010, 04:38 PM

QUOTE(robertngo @ May 18 2010, 03:53 PM)
Critical sites like missile silos, power stations and water supplies are not networked together and are not connected to the internet. These sites all use unwieldy industrial control software that is not compatible from one vendor to another. It is possible for a hacker, or a really advanced AI in the future, to gain access to one facility at a time, but to gain access to all of them at the same time, like the scenario in Die Hard 4, you would need to be physically on-site to take control of these systems. Maybe the hacker can get lucky with a site that doesn't have proper network security, where a PC connected to the internal network is also connected to the internet, and use that PC to gain access. But if there is no access, then no matter how advanced the AI, it will not be able to hack the system.
*
Actually, public utilities do use the internet... they use VPNs to send telemetry, billing information and so on, especially when their plants are distributed. They use a public network like the internet for cost reasons (the cost of laying your own fiber station to station, branch to branch, is crazy), and the belief is that VPN technology is secure enough to stay private. But security and encryption is a never-ending battle.

Weapon systems... most mobile launch systems, ships and submarines use encrypted RF or microwave. Again, the moment you broadcast a signal in the open, you open a window to intrusion. The security strength is not much different from a VPN's... sometimes it's only as good, or as bad, as the password the operator uses. Give password-hammering software enough time and it can break in.
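
A quick back-of-envelope in Python to show what "enough time" means. The worst case is just keyspace divided by guess rate, and the rate below is an assumed figure for illustration, not a measurement:

CODE
# Worst-case time for password-hammering software to exhaust a keyspace.
def worst_case_days(alphabet: int, length: int, guesses_per_sec: int = 1_000_000) -> float:
    keyspace = alphabet ** length
    return keyspace / guesses_per_sec / 86_400   # 86,400 seconds per day

print(round(worst_case_days(26, 8), 1))   # 8 lowercase letters: ~2.4 days
print(round(worst_case_days(62, 8)))      # 8 mixed-case letters + digits: ~2527 days (~7 years)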

Proprietary systems as a wall: yes, this can work, but we must remember that these systems are rarely islands. To shut down a proprietary bank system, you don't need to shut down the bank's computers. You shut down the power station that supplies them. Unless that station is operated by the bank itself, it sits in the public domain and is exposed to the usual public risks. You don't even need AI to break in.

Power stations are the most vulnerable because one small failure can lead to a nationwide cascading failure, like the one Malaysia suffered in 1996.

This hidden interconnectedness between public and private domains is probably what prompted the US DOD to issue their warning.


TSBeastboy
post May 18 2010, 05:40 PM

QUOTE(robertngo @ May 18 2010, 05:05 PM)
The biggest risk to any organization is people, even when a person means no harm to the system. He could just be a bored operator who connects his PC to the internet, and that gives the hacker the weak link needed to break in.
*
Yes, I totally agree with you on that one, because there are still people who keep their PIN together with the ATM card in their wallet. :lol:

OK, let's say I agree that power stations, banks and military applications today are all hacker-proof. Let me get back to the main point of my question, which is: should the developers of AI, robotics and software be subject to strict ethical rules about what they can and cannot develop, to prevent a Terminator-style scenario from ever happening?

If it sounds far-fetched, think of the restrictions they're already putting on cloning. Maybe people are afraid it might lead to a Frankenstein, so some countries actually impose restrictions on that science. If they can do that, wouldn't they eventually do the same to AI and robotics development too? And most importantly, do you think such restrictions would be justifiable?


TSBeastboy
post May 18 2010, 08:38 PM

The doomsday scenario can happen without guns or missiles. A war can be triggered by sabotaging the economy and public infrastructure. The intruder doesn't need simultaneous access to every computer to do this, either. It just needs access to one machine, one weak point that can trip other systems and force exception routines to cascade the attack. Human chaos will do the rest.
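
A minimal sketch of that cascade idea, using a dependency graph I made up for illustration:

CODE
# One tripped node propagating through a dependency graph, breadth-first.
from collections import deque

# node -> downstream systems that trip when it goes down (invented topology)
DEPENDENTS = {
    "power_substation": ["water_pumps", "bank_datacenter"],
    "bank_datacenter": ["atm_network", "payroll"],
}

def cascade(start):
    down, queue = {start}, deque([start])
    while queue:
        for dep in DEPENDENTS.get(queue.popleft(), []):
            if dep not in down:
                down.add(dep)
                queue.append(dep)
    return sorted(down)

print(cascade("power_substation"))
# ['atm_network', 'bank_datacenter', 'payroll', 'power_substation', 'water_pumps']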

On whether machines will eventually reach self-awareness, I'd be interested to see how they define "self-aware", and whether a home alarm using motion sensors can be classified as self-aware. The question still stands though... should developers be allowed to go all the way and build intelligent systems without any ethical controls?

Thanks for posting the BBC link. Interesting article that reads like the beginnings of Skynet. :D


TSBeastboy
post May 20 2010, 04:52 PM

QUOTE(robertngo @ May 20 2010, 09:15 AM)
A truly self-aware machine will make decisions on its own. Scientists are actively working on robots that can learn to do things, rather than being programmed to do them, so they can solve problems they have not been programmed for. So it could happen that when the machine learns of all the evil that has been done by humans, extermination is the only logical thing to do.
*
Would it be accurate to assume that artificially spawned self-awareness is going to be the same as human self-awareness?

Humans are carbon-based. Computers are silicon-based. If left in the wild, can we assume that a silicon-based brain will develop consciousness the same way a carbon-based brain does, and adopt the same priorities in its existence?


TSBeastboy
post May 20 2010, 05:33 PM

QUOTE(teongpeng @ May 20 2010, 04:57 PM)
But unless they have desires, there is very little to fear about AI world domination. Well, unless it's programmed to do just that.
*
In programming, I can define desire as a measurable gap that needs to be filled. For example, if I'm a battery-powered robot and my power level drops to critical, I can be programmed to look for a power source to recharge my batteries. That is no different from the human desire for food. Both eating and recharging batteries serve the same function: to replenish energy.

Now, if I am a self-learning, self-aware machine, I still have these gaps that need to be filled, these "desires." If survival is my prime goal and humans block all my sources of power, I may force through those blockades even if it means harming a human in the process. The big problem will come when energy sources dwindle and man and AI machines have to compete for the same resource.
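
Here's that "desire as a gap" idea as a minimal Python sketch. The setpoints and readings are made up for illustration; the machine simply ranks its "desires" by the size of each gap:

CODE
# "Desire" as a measurable gap between a setpoint and the current state.
SETPOINTS = {"power": 1.0, "temperature": 0.5}   # made-up target values

def desires(state):
    """Return (need, gap) pairs, most urgent first."""
    gaps = {k: SETPOINTS[k] - v for k, v in state.items()}
    return sorted(gaps.items(), key=lambda kv: kv[1], reverse=True)

state = {"power": 0.15, "temperature": 0.25}     # power level is critical
print(desires(state))
# [('power', 0.85), ('temperature', 0.25)] -> seek a charger first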

This post has been edited by Beastboy: May 20 2010, 05:36 PM
TSBeastboy
post May 21 2010, 09:21 AM

QUOTE(VMSmith @ May 21 2010, 04:45 AM)
Actually, I always thought we got to the top because we clubbed, speared, chainsaw-ed, and drilled our way up there.

Man's ability to abuse and mistreat Nature and Himself makes Him what He is today.
*
Himself rather than himself? Ooh, it makes humans sound almost divine, lol. :D

A bit off track here, but I'll go with Agent Smith in The Matrix when he said humans are like viruses. We land on a spot, sapu everything until there's nothing left, then we move on to the next spot. We are like a disease. So if the runaway AI develops into the tree-hugging sort, we'll start looking like a disease fit to be wiped out.

But again, I go back to my original question, which is still unanswered: while AI and robotics development is in its infancy, would you support ethical limitations on the science, the way they did for cloning?

QUOTE(teongpeng @ May 20 2010, 10:01 PM)
What you described is a need. And a need differs from a desire. A need is actually a weakness, for it is something you cannot do without.

A desire, on the other hand, is something like "Oohh OoooooH... I would like one of those kickass graphics processors fastened into my belly".
Desire therefore is more dynamic and changes with the situation. With desire also comes ambition... an ambitious machine... now that's something to fear.
*
OK, I get you: the difference between wants and needs. I think wants will appear when organisms are pushed to compete for limited resources, the way men flash their toys when trying to attract a mate. The question is, will self-aware machines behave the same way when put under the same circumstances?


TSBeastboy
post May 21 2010, 10:52 AM

Um... who is this TS guy I keep seeing people refer to in their posts?


Added on May 21, 2010, 11:10 am
QUOTE(VMSmith @ May 21 2010, 09:46 AM)
Of course, is there not a spark of divinity in each of us? :)
QUOTE(Beastboy)
But again, I go back to my original question that's still unanswered. While AI and robotics development is at its infancy, would you support the ethical limitations on the science the way they did for cloning?
Yes. Though I say it with a heavy heart.
*
Ah... the first answer, thanks.

Personally, I'll vote no. Since we've proven time and again that we cannot be trusted with our own future, it wouldn't matter who's in charge, a benevolent AI or some third-world dictator. Yes, the smart android may not be that benevolent, but at this point, if I had to choose between a rational violent machine and an irrational violent human, I'll take my chances with the machine. :)


This post has been edited by Beastboy: May 21 2010, 12:21 PM
TSBeastboy
post May 21 2010, 06:24 PM

QUOTE(VMSmith @ May 21 2010, 02:23 PM)
TS = Thread starter. Which I believe is you. :)
*
No way! Hahaha :lol:


Added on May 21, 2010, 6:28 pm
QUOTE(cherroy @ May 21 2010, 04:50 PM)
I would say, please stop using Terminator, a movie, as a source of discussion. This is not happening in reality.
*
Eh? Since nobody has ever seen an alien except in the Alien movies, would you also tell NASA to stop spending billions on the SETI program?

This post has been edited by Beastboy: May 21 2010, 06:30 PM
TSBeastboy
post May 22 2010, 06:48 PM

QUOTE(cherroy @ May 22 2010, 02:45 PM)
No, that is not the message.

It's just that movies and reality are two different stories.

In a movie, you can have an indestructible body like the Terminator's; in reality, you cannot.
In a movie, someone fast enough can catch a bullet; in reality, nobody can. Even MythBusters has tested this myth.
In a movie, a person who gets shot can still run a 100 m sprint and fight; in reality, the pain is severe enough that even standing up is impossible.

Also, in theory many things look simple, want this, want that; in reality, when it comes to hands-on electro-mechanical work, or a real application, it can be a hell of a problem to solve.

Take designing a robot that can walk on its own two feet over uneven ground or up a staircase.
In theory it is simple: just use a gyroscope and sensors as input, so the robot can compensate its force and movement based on a scan of the uneven ground.
But in practice it is a hell of a problem to solve when put to work.
It could be far harder than working out an airplane that can fly.
*
Yes, I know, and thanks for stating your stand. If you read the original post title carefully, could you kindly point out to me where it says "Shall we discuss how the Terminator will kill us all, just like the movie?" It says, "Will the Terminator-style doomsday ever happen?" Do you know the difference? In English, "style" means "in the manner of", not to be confused with "this is the real thing."

Now, I notice you are a moderator, and I assume you have issued this prohibition officially as a moderator. If so, then please delete this thread in its entirety and modify your TOS to include "You are not allowed to make reference to any movie that the moderators feel is not 'real' in any thread on Lowyat.net." If you decide to let this thread continue, then please withdraw your prohibition. Thanks.

TSBeastboy
post May 23 2010, 10:09 AM

Thanks for understanding. :)
TSBeastboy
post May 24 2010, 02:42 PM

QUOTE(faceless @ May 24 2010, 02:13 PM)
I still need people to convince me how AI can grow a self-consciousness.
*
First we have to define what consciousness is, because apparently the psychologists, neurologists, philosophers and computer scientists don't agree on a definition.

The Latin root of "consciousness" means "1. having joint or common knowledge with another, privy to, cognizant of; 2. conscious to oneself; esp., conscious of guilt". Let's try to apply that to a machine.

Cognizance: by being equipped with sound and motion sensors, a home alarm can be said to be aware of sound and movement. It fits this criterion.

Conscious to oneself: I can interrogate the home alarm system over the internet and ask, "What's your status?" The system does a self-check and reports back that everything is normal. No different from A asking B "Are you all right?" and B looking at his arms and legs and saying "I'm fine."

Conscious of guilt: if a human is taught to distinguish between right and wrong, he develops a basis for guilt. There is a prerequisite conditioning. Similarly, I can program do's and don'ts into a computer, and with that prerequisite it too will be able to distinguish between a do and a don't in its actions.

So, by the linguist's definition, a computer can be conscious.
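
To make that concrete, here's a toy sketch mapping the three criteria onto a hypothetical alarm class. The method names and rules are invented for illustration:

CODE
# Toy mapping of the three Latin criteria onto a home alarm.
class HomeAlarm:
    def __init__(self):
        self.motion = False
        self.sound = False
        self.rules = {"disarm_on_intrusion": "don't"}   # taught do's and don'ts

    def sense(self, motion, sound):
        """Cognizance: awareness of events around it."""
        self.motion, self.sound = motion, sound

    def self_check(self):
        """Conscious to oneself: report its own status on request."""
        return "all sensors nominal"

    def allowed(self, action):
        """Conscious of guilt: judge an action against the taught rules."""
        return self.rules.get(action) != "don't"

alarm = HomeAlarm()
alarm.sense(motion=True, sound=False)
print(alarm.self_check())                    # all sensors nominal
print(alarm.allowed("disarm_on_intrusion"))  # False -- a programmed "don't"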

Now how do you define self-conscious?

TSBeastboy
post May 25 2010, 03:04 PM

QUOTE(robertngo @ May 25 2010, 02:20 PM)
What will cause them to have a mind is when we completely reverse-engineer the brain, with every single neuron and its function replicated in a supercomputer. The brain is just a massive array of interconnected neurons. I don't see why, if we recreated the workings of its 100 billion neurons and the way they process information, we would not also have recreated the human mind.
*
This is a bit off topic, but the sperm whale, the elephant and the bottlenose dolphin all have a larger brain mass than an adult human, yet their minds are not on par with ours in terms of thinking, language, etc. Does the number of neurons really determine the character of the mind, or is the mind independent of it?

TSBeastboy
post May 26 2010, 08:54 AM

That sounds like a biocentric view of the universe. It is also the Buddhist view as I understand it.
TSBeastboy
post May 27 2010, 10:15 AM

Another view about brain vs mind.

No matter how you describe colour, a blind man cannot see the beauty of a painting. No matter how you describe sound, a deaf man cannot appreciate the beauty of music. "Reality" is shaped by one's biological sensory organs plus the brain's signal-processing capability, hence the notion that we perceive our universe biocentrically.

Our brains are like circuit boards, and all boards generally work the same way. Yet we ascribe different values to colour and sound. Even twins raised in the same environment often have different favourite colours and favourite songs. In that sense, your world is not the same as my world.

How do two copies of the same circuit board (the brain) ascribe different meanings to the same stimulus? Is it due to unseen micro-variations on the boards, uneven sensitivities in the sensory inputs, or something independent of the board altogether?



This post has been edited by Beastboy: May 27 2010, 10:16 AM
TSBeastboy
post May 31 2010, 11:26 AM

QUOTE(robertngo @ May 31 2010, 11:19 AM)
The logical outcome is to control humans so they cannot do harm to themselves and others.
*
Sounds like why parents put a curfew on their kids. Cannot go out after dinner ler... :lol:


 
