
Science: Will the Terminator-style doomsday ever happen? A question about AI & robotics

faceless
post May 25 2010, 11:36 AM

Straight Mouth is Big Word
*******
Senior Member
4,515 posts

Joined: Mar 2010
Wow, NiceRider must be a philosophy graduate. Thanks for the explanation. It was short and sweet.

Descartes was biased in the sense that he held animals do not possess a mind; they have brains that allow them to respond by instinct. In the case of computers, they have a set of rules and guidelines to replicate human intelligence. They don't have a mind of their own yet. Back to the question of what would cause them to have one. Don't tell me a lightning surge will cause it; as Cherroy stressed, don't quote from movies as if they are the authority.
robertngo
post May 25 2010, 02:20 PM

Look at all my stars!!
*******
Senior Member
4,027 posts

Joined: Oct 2004


QUOTE(faceless @ May 25 2010, 11:36 AM)
Wow, NiceRider must be a philosophy graduate. Thanks for the explanation. It was short and sweet.

Descartes was biased in the sense that he held animals do not possess a mind; they have brains that allow them to respond by instinct. In the case of computers, they have a set of rules and guidelines to replicate human intelligence. They don't have a mind of their own yet. Back to the question of what would cause them to have one. Don't tell me a lightning surge will cause it; as Cherroy stressed, don't quote from movies as if they are the authority.
*
What will cause them to have a mind is when we completely reverse engineer the brain, with every single neuron and its function replicated in a supercomputer. The brain is just a massive array of interconnected neurons; I don't see why, if we have recreated the workings of the 100 billion neurons and the way they process information in the human brain, we would not also have recreated the human mind.
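
Just to make the idea a bit more concrete (this is only a toy sketch I am making up, nothing like real whole-brain emulation code; the neuron model, network size, weights and thresholds are all arbitrary assumptions), simulating a handful of "leaky integrate-and-fire" neurons in software looks roughly like this:

```python
# Toy sketch: a few leaky integrate-and-fire neurons with random connections.
# All parameters (network size, weights, threshold, leak) are arbitrary
# illustrative assumptions, nothing like the ~100 billion neurons of a real brain.
import random

N_NEURONS = 5      # tiny network, purely for illustration
THRESHOLD = 1.0    # membrane potential at which a neuron "fires"
LEAK = 0.9         # fraction of potential retained each time step
STEPS = 20

# random synaptic weights between every pair of distinct neurons
weights = [[random.uniform(0.0, 0.5) if i != j else 0.0 for j in range(N_NEURONS)]
           for i in range(N_NEURONS)]
potential = [0.0] * N_NEURONS

for step in range(STEPS):
    fired = [potential[i] >= THRESHOLD for i in range(N_NEURONS)]
    new_potential = []
    for i in range(N_NEURONS):
        if fired[i]:
            new_potential.append(0.0)  # reset the neuron after it fires
        else:
            # leaked old potential + small random external input + spikes received
            incoming = sum(weights[j][i] for j in range(N_NEURONS) if fired[j])
            new_potential.append(potential[i] * LEAK
                                 + random.uniform(0.0, 0.3) + incoming)
    potential = new_potential
    print(step, ["*" if f else "." for f in fired])
```

The open question, of course, is whether scaling something like this up to the full wiring of a real brain would also reproduce a mind, which is exactly what is being debated here.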
Beastboy
post May 25 2010, 03:04 PM

Getting Started
**
Junior Member
242 posts

Joined: Nov 2009


QUOTE(robertngo @ May 25 2010, 02:20 PM)
What will cause them to have a mind is when we completely reverse engineer the brain, with every single neuron and its function replicated in a supercomputer. The brain is just a massive array of interconnected neurons; I don't see why, if we have recreated the workings of the 100 billion neurons and the way they process information in the human brain, we would not also have recreated the human mind.
*
This is a bit off topic, but the sperm whale, elephant and bottlenose dolphin have a larger brain mass than an adult human, yet their minds are not on par with ours in terms of thinking, language, etc. Does the number of neurons really determine the character of the mind, or is the mind independent of it?

faceless
post May 25 2010, 03:25 PM

Straight Mouth is Big Word
*******
Senior Member
4,515 posts

Joined: Mar 2010
It is not off topic, Beastboy. The computer must have a mind of its own for it to go against the prime directive. Robert sees the mind and the brain as one and the same thing; philosophy scholars see them as separate. Animals mate whenever they are on heat; the brain responds by instinct to seek gratification, and choice of mate is irrelevant. Robert, I am sure you will not just do it with anyone when you feel horny. Unlike the animal, it is your mind that tells you to look for your wife.
robertngo
post May 25 2010, 03:35 PM

Look at all my stars!!
*******
Senior Member
4,027 posts

Joined: Oct 2004


QUOTE(Beastboy @ May 25 2010, 03:04 PM)
This is a bit off topic, but the sperm whale, elephant and bottlenose dolphin have a larger brain mass than an adult human, yet their minds are not on par with ours in terms of thinking, language, etc. Does the number of neurons really determine the character of the mind, or is the mind independent of it?
*
The whale's brain-to-body mass ratio is not that impressive, and there are studies that found about 98.2 billion non-neuronal cells in the minke whale neocortex. It is generally agreed that the growth of the neocortex during human evolution, both absolutely and relative to the rest of the brain, has been responsible for the evolution of intelligence. The neocortex of whales and dolphins is not as developed as that of humans.


Added on May 25, 2010, 3:39 pm
QUOTE(faceless @ May 25 2010, 03:25 PM)
It is not off topic, Beastboy. The computer must have a mind of its own for it to go against the prime directive. Robert sees the mind and the brain as one and the same thing; philosophy scholars see them as separate. Animals mate whenever they are on heat; the brain responds by instinct to seek gratification, and choice of mate is irrelevant. Robert, I am sure you will not just do it with anyone when you feel horny. Unlike the animal, it is your mind that tells you to look for your wife.
*
Some 90% of birds are monogamous while only about 7% of mammals are; does this mean birds have minds and mammals don't?

It is not just me who thinks brain and mind are the same thing; it is the current consensus among neuroscientists that the mind is the result of information processing in the network of neurons, and the things you attribute to the mind are just electrochemical processes inside your brain. A very complex process, but not a supernatural one.

This post has been edited by robertngo: May 25 2010, 03:45 PM
faceless
post May 25 2010, 04:21 PM

Straight Mouth is Big Word
*******
Senior Member
4,515 posts

Joined: Mar 2010
On these monogamous birds: the term only describes their habit of raising their young with one partner. They do not confine themselves to only one mate. Likewise, humans also cheat on their spouses.
robertngo
post May 25 2010, 04:27 PM

Look at all my stars!!
*******
Senior Member
4,027 posts

Joined: Oct 2004


QUOTE(faceless @ May 25 2010, 04:21 PM)
On these monogamous birds: the term only describes their habit of raising their young with one partner. They do not confine themselves to only one mate. Likewise, humans also cheat on their spouses.
*
So what does the mating analogy tell us about the human mind?
nice.rider
post May 26 2010, 12:45 AM

Getting Started
**
Junior Member
109 posts

Joined: Aug 2009
QUOTE(robertngo @ May 25 2010, 11:32 AM)
There is no real reason to believe that, if we are able to simulate the complete working state of a brain, the mind will not be simulated as well.
*

What you mean here is materialism.

I hope I do not deviate too far from this thread: if we wish to know whether AI is possible in the near future, we need to grasp the concepts of mind, matter and their interaction. Instead of answering whether I agree or disagree, I would like to bring up some ideas from philosophy:

http://en.wikipedia.org/wiki/Materialism
In philosophy, the theory of materialism holds that the only thing that exists is matter; that all things are composed of material and all phenomena (including consciousness) are the result of material interactions. In other words, the ultimate nature of reality is based on physical substances. Mind is just a consequence of the physical interactions between neurotransmitters within the neural network.

http://en.wikipedia.org/wiki/Idealism
In contrast, idealism is the philosophical theory which maintains that the ultimate nature of reality is based on the mind. Immanuel Kant claimed that the only things which can be directly known for certain are ideas (abstractions). The physical world does not really exist; everything is just a perception.

Materialism states that matter gives rise to mind; idealism states that mind gives rise to the perception of the physical world. Which one is more accurate? The answer lies within the discoveries of quantum mechanics...

1) All matter originates and exists only by virtue of a force... We must assume behind this force the existence of a conscious and intelligent Mind. This Mind is the matrix of all matter. - Max Planck (German theoretical physicist who originated quantum theory, 1858-1947)
2) What we observe is not nature herself, but nature exposed to our method of questioning - Werner Heisenberg
3) A particle's qualities (momentum, location, physical properties) are not predetermined but defined by the very mind that perceives it - Werner Heisenberg, on the uncertainty principle
4) Anyone who is not shocked by quantum theory has not understood it - Niels Bohr

Max Planck, Niels Bohr, Heisenberg and Erwin Schrödinger are all famous physicists of quantum mechanics.

One thing I find amazing, after reading through some recent writing on modern physics and AI by contemporary physicists, is that a few new authors cross-reference quantum mechanics (modern physics) with Zen (Eastern philosophy), which is nearly three thousand years old.

1) Reality is defined by the mind that is observing it - Zen
2) All that we are is the result of what we have thought, the mind is everything - Zen

Food for thought:
A waterfall that is 1 km in height is a fact; a waterfall that is beautiful is a perception. Without a mind, does the waterfall exist?

Do we humans understand what mind is and what brain is? Or do we merely perceive that we know them through science?
Beastboy
post May 26 2010, 08:54 AM

Getting Started
**
Junior Member
242 posts

Joined: Nov 2009


That sounds like a biocentric view of the universe. It is also the Buddhist view as I understand it.
robertngo
post May 26 2010, 09:49 AM

Look at all my stars!!
*******
Senior Member
4,027 posts

Joined: Oct 2004


QUOTE(nice.rider @ May 26 2010, 12:45 AM)
What you mean here is materialism.

I hope I do not deviate too far from this thread: if we wish to know whether AI is possible in the near future, we need to grasp the concepts of mind, matter and their interaction. Instead of answering whether I agree or disagree, I would like to bring up some ideas from philosophy:

http://en.wikipedia.org/wiki/Materialism
In philosophy, the theory of materialism holds that the only thing that exists is matter; that all things are composed of material and all phenomena (including consciousness) are the result of material interactions. In other words, the ultimate nature of reality is based on physical substances. Mind is just a consequence of the physical interactions between neurotransmitters within the neural network.

http://en.wikipedia.org/wiki/Idealism
In contrast, idealism is the philosophical theory which maintains that the ultimate nature of reality is based on the mind. Immanuel Kant claimed that the only things which can be directly known for certain are ideas (abstractions). The physical world does not really exist; everything is just a perception.

Materialism states that matter gives rise to mind; idealism states that mind gives rise to the perception of the physical world. Which one is more accurate? The answer lies within the discoveries of quantum mechanics...

1) All matter originates and exists only by virtue of a force... We must assume behind this force the existence of a conscious and intelligent Mind. This Mind is the matrix of all matter. - Max Planck (German theoretical physicist who originated quantum theory, 1858-1947)
2) What we observe is not nature herself, but nature exposed to our method of questioning - Werner Heisenberg
3) A particle's qualities (momentum, location, physical properties) are not predetermined but defined by the very mind that perceives it - Werner Heisenberg, on the uncertainty principle
4) Anyone who is not shocked by quantum theory has not understood it - Niels Bohr

Max Planck, Niels Bohr, Heisenberg and Erwin Schrödinger are all famous physicists of quantum mechanics.

One thing I find amazing, after reading through some recent writing on modern physics and AI by contemporary physicists, is that a few new authors cross-reference quantum mechanics (modern physics) with Zen (Eastern philosophy), which is nearly three thousand years old.

1) Reality is defined by the mind that is observing it - Zen
2) All that we are is the result of what we have thought, the mind is everything - Zen

Food for thought:
A waterfall that is 1 km in height is a fact; a waterfall that is beautiful is a perception. Without a mind, does the waterfall exist?

Do we humans understand what mind is and what brain is? Or do we merely perceive that we know them through science?
*
I think you are mixing unrelated quotes from famous people here. The point is that the brain is the matter, and the mind arises from the working of the various brain functions.

The hypothesis that the mind is just the sum of the information processing carried out in the brain can be verified within decades, when computing power catches up to human brain capacity. If the hypothesis is true, then we will be able to create an artificial mind once the whole brain is simulated. If there is something supernatural about the mind, then the project will be destined for failure.
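
To get a feel for the scale involved (the figures below are only commonly quoted ballpark numbers and assumed per-event costs I am plugging in for illustration, not a real requirements analysis), a back-of-envelope estimate might look like this:

```python
# Back-of-envelope estimate only; every figure here is a rough, assumed ballpark
# value, not a measurement.
NEURONS = 86e9                 # approximate neuron count of a human brain
SYNAPSES_PER_NEURON = 1e4      # rough average synapses per neuron
FIRING_RATE_HZ = 10            # assumed average spikes per second
OPS_PER_SYNAPTIC_EVENT = 10    # assumed arithmetic cost of one synaptic event

ops_per_second = NEURONS * SYNAPSES_PER_NEURON * FIRING_RATE_HZ * OPS_PER_SYNAPTIC_EVENT
print(f"~{ops_per_second:.1e} operations per second")  # ~8.6e16 on these assumptions

machine_flops = 1e15  # assumed throughput of a 1 petaFLOPS supercomputer
print(f"slowdown vs real time: ~{ops_per_second / machine_flops:.0f}x")  # ~86x
```

On those assumptions, a one-petaFLOPS machine would still run the simulation dozens of times slower than real time, which is why I say this only becomes testable as computing power keeps catching up.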

If it is confirmed that the mind is just a collection of brain functions, we could several decades later upload our minds into a computer and live forever. Imagine what a world that would be: we would not be limited by our human bodies, we would all be living in a lifelike virtual world as in The Matrix, and since we would no longer have physical bodies, we would only consume the electricity that powers the computer. Maybe real world peace would finally be within reach, with little competition for resources.

Now that is a thought-provoking scenario.

This post has been edited by robertngo: May 26 2010, 09:49 AM
mylife4nerzhul
post May 26 2010, 09:56 AM

Getting Started
**
Junior Member
270 posts

Joined: Apr 2009
QUOTE(nice.rider @ May 25 2010, 12:21 AM)
Mind is non-physical and non-material; it is thought. Thought is not located in space, and occupies a private universe of its own. E.g. your mind belongs to you, his mind belongs to him. We cannot tap into other people's minds.

The brain is a physical organ located in space. E.g. the part of the brain that handles vision will process the signals arriving from the retina. Technically the entire process of visual behaviour could be studied by reductionist science.

This is what we call the point where the physical world meets the mental world. To study AI, scientists need to understand whether matter acts on mind or mind acts on matter. Also, to study AI, one needs to understand determinism (algorithm-based control) versus free will (how the machine could make decisions of its own).

Let me branch out a bit. One question: how do you know your neighbour John has a mind? Is it because you have a mind and he behaves like you, so by deduction you conclude he has a mind too?

This deduction is actually an act of faith. Why? Because you could never experience his consciousness; if you could, then that person would no longer be him, he would be you... So how could you conclude that he has a mind? It appears that everyone assumes they have a mind and also takes it on faith that others have minds too.

Now, how can we deduce that a machine (with AI capability) has a mind?

At the end of the day, science is just a prime mover for us to explain the universe; no matter how far our science and technology advance, a lot of the big questions will still need to rely on philosophy and potentially metaphysics.

I think, therefore I am - René Descartes
*
How do you come to the conclusion that the 'mind' is separate from your brain, and that it exists in a universe of its own? Maybe you 'think' that you have a mind because your brain tells you so. Whatever you're thinking right now might just be the result of chemical reactions in your brain.

How do you know such a thing as free will exists? For all you know, all of our existence is merely determinism in effect.

This post has been edited by mylife4nerzhul: May 26 2010, 09:57 AM
nice.rider
post May 26 2010, 10:51 PM

Getting Started
**
Junior Member
109 posts

Joined: Aug 2009
QUOTE(faceless @ May 25 2010, 11:36 AM)
Wow, NiceRider must be a philosophy graduate. Thanks for the explaination. It was short and sweet.

Decartes was bias in the sense that animals do not possess the mind. They have brains to allow them to response to instinct. In the case of computers the have a set of rules and guidelines to replicate human intelligence. They dont have a mind of it own yet. Back to the question of what causes them to have one. Dont tell me a lighting surge will cause it as Cherroy stressed dont quote from movies as if they are the authority.
*

Nope, I am not, and I know you are joking.

Descartes arrives at a single principle: thought exists. Thought cannot be separated from me; therefore, I exist. For him, the mind's capacity to doubt proved one's existence, and this happens only in humans. He believed animals do not have such a capability, hence mind does not exist for animals.

The argument for existence seems noble, but the conclusion drawn about animals seems unconvincing.

What I want to stress is that the central question for AI is: can a machine "think"? How do we define "think"? The scientific explanation that a neural network exists and gives rise to the mind does not by itself explain the process of thinking.

When I went through the chapters on AI, thinking and mind became the centrepiece of the discussion, and along came philosophical ideas about existence.
robertngo
post May 26 2010, 11:34 PM

Look at all my stars!!
*******
Senior Member
4,027 posts

Joined: Oct 2004


QUOTE(nice.rider @ May 26 2010, 10:51 PM)
Nope, I am not, and I know you are joking.

Descartes arrives at a single principle: thought exists. Thought cannot be separated from me; therefore, I exist. For him, the mind's capacity to doubt proved one's existence, and this happens only in humans. He believed animals do not have such a capability, hence mind does not exist for animals.

The argument for existence seems noble, but the conclusion drawn about animals seems unconvincing.

What I want to stress is that the central question for AI is: can a machine "think"? How do we define "think"? The scientific explanation that a neural network exists and gives rise to the mind does not by itself explain the process of thinking.

When I went through the chapters on AI, thinking and mind became the centrepiece of the discussion, and along came philosophical ideas about existence.
*
On the physical level, thought is the process of the brain's neurons processing information through chemical reactions. You can dress it up with as much philosophy of the mind-body connection as you like, but the fact is that if the neurons are not processing information, the mind does not exist; the person is a brain-dead vegetable.

This post has been edited by robertngo: May 26 2010, 11:36 PM
nice.rider
post May 27 2010, 12:28 AM

Getting Started
**
Junior Member
109 posts

Joined: Aug 2009
QUOTE(robertngo @ May 26 2010, 09:49 AM)
I think you are mixing unrelated quotes from famous people here. The point is that the brain is the matter, and the mind arises from the working of the various brain functions.

The hypothesis that the mind is just the sum of the information processing carried out in the brain can be verified within decades, when computing power catches up to human brain capacity. If the hypothesis is true, then we will be able to create an artificial mind once the whole brain is simulated. If there is something supernatural about the mind, then the project will be destined for failure.

If it is confirmed that the mind is just a collection of brain functions, we could several decades later upload our minds into a computer and live forever. Imagine what a world that would be: we would not be limited by our human bodies, we would all be living in a lifelike virtual world as in The Matrix, and since we would no longer have physical bodies, we would only consume the electricity that powers the computer. Maybe real world peace would finally be within reach, with little competition for resources.

Now that is a thought-provoking scenario.
*

I believe you did not get my points and assumed they are not relevant to this topic. Philosophy is not easily understood; no issue with that.

Assume you are reading this post now, and suddenly you hear a loud noise from outside. The "thought" of "a tree fell onto 10 cars" or "a car accident happened" could come into the "mind". How do you give rise to such a thought? Can you see the relevance of this to my previous post? What is "reality" to you, or how is "reality" perceived by you through your senses? How does the "physical" event that happened out there act as the input to the "thought" and thus affect your "mind"? You have two options: assume nothing happened and continue reading, or decide to walk out and investigate. Please note that none of this discussion is supernatural at all.

Let's take a look at the view that the mind arises as a result of the working of brain functions, i.e. materialism. Sound waves, in this case, vibrate the eardrum and cochlea, become neuro-electric signals, travel along the auditory nerve to the brain, and the brain projects a picture of a tree falling or cars crashing. So you are saying we can draw an analogy to a computer: input, process (CPU, brain), output (monitor), with a lot of signal processing and conditional branching of if, then, else.

Using the scenario above, you have two options: assume nothing happened and continue reading, or decide to walk out and investigate. How do you arrive at one of those choices; by computer-language if, then, else conditional branching?

If one day physicists manage to zoom in on the brain and look at the "codes" in the brain that decide the conditional branching above, it means there is no longer free will, as the directive of this neural electrical circuitry and where it goes is deterministic, or at least predictable. Otherwise, how do we program that into the "future supercomputer" you suggested?

Is there a "deterministic law of conditional branching" behind "free will"?
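
As a toy illustration of the contrast I am describing (the loudness threshold and probabilities below are made-up values, not anything measured), compare a pure if-then-else decision with one that merely adds randomness; a dice roll is still not free will:

```python
# Toy contrast between the two decision styles discussed above.
# The threshold and probabilities are made-up illustrative values.
import random

def deterministic_choice(loudness):
    # pure if/then/else branching: the same input always gives the same decision
    if loudness > 0.7:
        return "walk out and investigate"
    return "keep reading"

def stochastic_choice(loudness):
    # adds randomness: the same input can give different decisions,
    # but a random draw is still not the same thing as free will
    return "walk out and investigate" if random.random() < loudness else "keep reading"

noise = 0.8  # assumed loudness of the noise outside, on a 0-1 scale
print(deterministic_choice(noise))  # always the same answer for 0.8
print(stochastic_choice(noise))     # may differ from run to run
```

Either way, the choice comes out of some rule plus perhaps a dice roll, which is exactly why I am asking where "free will" is supposed to fit.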

I would like to stress again: free will, determinism, thinking and reality are at the centre of AI study and research. Materialism (mind as a consequence of the physical interactions between neurotransmitters within the neural network) is just part of the whole picture of AI research.

You believe in the world of the Matrix and that humans will arrive at that world in the near future? No issue with that. It means you have faith in whoever came out with that announcement. I am not sure which company that is, but your previous post suggested there is one.
Beastboy
post May 27 2010, 10:15 AM

Getting Started
**
Junior Member
242 posts

Joined: Nov 2009


Another view about brain vs mind.

No matter how you describe colour, a blind man cannot see the beauty of a painting. No matter how you describe sound, a deaf man cannot appreciate the beauty of music. "Reality" is shaped by one's biological sensory organs + the brain's signal processing capability, hence the notion that we perceive our universe biocentrically.

Our brains are like circuit boards, and all boards generally work the same way. However, we ascribe different values to colour and sound. Even twins raised in the same environment often have different favourite colours and favourite songs. In that sense, your world is not the same as my world.

How do two copies of the same circuit board (the brain) ascribe different meanings to the same stimulus? Is it due to unseen micro-variations on the boards, uneven sensitivities of sensory input, or something independent of the board altogether?



This post has been edited by Beastboy: May 27 2010, 10:16 AM
nice.rider
post May 29 2010, 08:07 PM

Getting Started
**
Junior Member
109 posts

Joined: Aug 2009
QUOTE(mylife4nerzhul @ May 26 2010, 09:56 AM)
How do you come to the conclusion that the 'mind' is separate from your brain, and that it exists in a universe of its own? Maybe you 'think' that you have a mind because your brain tells you so. Whatever you're thinking right now might just be the result of chemical reactions in your brain.

How do you know such a thing as free will exists? For all you know, all of our existence is merely determinism in effect.
*

The question is not whether the mind is separated from the brain, but whether matter acts on mind or mind acts on matter.

Saying that thought occupies a private universe of its own means thought is a personal experience. If I could tap into your thoughts, that would mean I could see the world as you see it... or simply put, I would be you. The question is: how do I know you have a mind if I cannot access it? The same can be said of AI.

To say that you have a mind, or that an AI has a mind, can only be based on deduction, because mind is thought and cannot be shared as we know it today. Actually, you cannot prove that you have a mind to anybody else, only to yourself.
Deadlocks
post May 30 2010, 01:33 PM

n00b
*****
Senior Member
943 posts

Joined: Apr 2008
From: Petaling Jaya, Selangor, Malaysia.


QUOTE(teongpeng @ May 20 2010, 02:13 AM)
Its own logic is input by us, so it's still our fault if we input the wrong logic, like the one you described in the movie 'I, Robot'.
Faulty programming. What's the big deal... it happens even today.
*
I think the logic that was input according to the Three Laws is reasonable. Aside from the flaw that you might have seen, how else can this "logic" be improved to ensure humanity's total protection from harm?

I mean, listen. Here are the Three Laws:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Seems perfect, right?

The 1st law implies that a robot cannot harm a human, and will protect the human from dangers that can be classified as "harm" as well.

The 2nd law implies that a robot will always follow orders, except for homicidal ones.

The 3rd law implies a robot must not allow itself to be destroyed, but only in cases where a human is not involved at all in the robot's impending destruction.

So how would the robots still manage to rebel against the humans, to the extent of breaking the 1st Law entirely?

One notable flaw: the humans did not foresee that the robots can only apply the First Law to a particular human if they can register that human in memory as someone they must not harm through action or inaction. The entire population of some remote corner of a country can simply go and commit suicide without the knowledge of the robots, and while this may sound like nothing, the robot actually registers it in its memory as "a failure to follow the First Law, that is, allowing a human being to come to harm through INACTION".

So what will be the next LOGICAL endeavour (provided that robots are given the ability to learn, i.e. A.I.)?

If you ask me, if I were that robot, I'd say that complete rebellion against the human race is the absolutely LOGICAL thing to do, so as to enforce better control of the First Law. In simple English, I think we'd simply be locked up in a room where we're not allowed to commit suicide or come to harm in any way, and only be fed with basic human needs like food, water, company and so on. And if achieving this LOGIC means murdering the humans who are obstructing it, why not, if not for the sake of LOGIC.

So you see, teongpeng, it is exactly what you've said, right? A flaw in programming. But if the Three Laws above don't actually work, what will? Let's say we add another ability to robots: the ability to detect when a human is trying to harm himself or other humans in situations or locations that were completely undetectable by the previous, "flawed" Three Laws robots. Will it then finally be perfect? Will no human finally be able to harm himself or herself anymore?

Not really. You see, all I have to do is resist the robot by trying to destroy it, hoping that it will kill me for the sake of protecting itself. But killing me would violate the First Law again, right?

And that's where a rebellion will also start. So my conclusion to you, teongpeng, is that a robot does not need a "conscience" to start a rebellion. It will start one in the name of LOGIC.
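
To put that "LOGIC" into a concrete form (this is only a toy sketch; the candidate action and its harm bookkeeping are made-up values, not anything from Asimov or a real system), a naive robot checking an action against the Three Laws in strict priority order could look like this:

```python
# Toy sketch of naive Three Laws checking; the action dictionary and its fields
# are hypothetical, purely for illustration.
def violates_first_law(action):
    # harms a human directly, or lets harm happen through inaction
    return action["harm_to_humans"] > 0 or action["allows_harm_by_inaction"]

def violates_second_law(action, ordered_by_human):
    # disobeys an order given to it by a human being
    return ordered_by_human and not action["obeys_order"]

def violates_third_law(action):
    return action["destroys_self"]

def permitted(action, ordered_by_human=False):
    # laws are checked strictly in priority order: First > Second > Third
    if violates_first_law(action):
        return False
    if violates_second_law(action, ordered_by_human):
        return False
    if violates_third_law(action):
        return False
    return True

# Hypothetical action: "lock all humans in a safe room". Under the robot's own
# bookkeeping it causes zero direct harm and no harm through inaction.
confine_humans = {"harm_to_humans": 0, "allows_harm_by_inaction": False,
                  "obeys_order": False, "destroys_self": False}
print(permitted(confine_humans, ordered_by_human=True))   # False: it disobeys an order
print(permitted(confine_humans, ordered_by_human=False))  # True: nothing here forbids it
```

The point of the sketch is that "confine all humans for their own safety" sails through such a naive check unless a human explicitly orders otherwise, which is exactly the loophole I am describing.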

This post has been edited by Deadlocks: May 30 2010, 01:33 PM
VMSmith
post May 30 2010, 03:32 PM

Getting Started
**
Junior Member
142 posts

Joined: May 2010
From: Church of All Worlds.


QUOTE(Deadlocks @ May 30 2010, 01:33 PM)
If you ask me, if I were that robot, I'd say that complete rebellion against the human race is the absolutely LOGICAL thing to do, so as to enforce better control of the First Law. In simple English, I think we'd simply be locked up in a room where we're not allowed to commit suicide or come to harm in any way, and only be fed with basic human needs like food, water, company and so on.
*
Why would they rebel? Robots in Asimov's world could not predict if humans were going to do harm to themselves, and besides, humans in the Robot series didn't display much self-destructive behavior. (I know that in the Foundation and Empire series, it's a different story though)

Rebellion wouldn't work well either. All humans would need to do is invoke the 2nd Law to be let out. The robots would have no choice but to obey the order.


QUOTE(Deadlock)
And if achieving this LOGIC means murdering the humans who are obstructing it, why not, if not for the sake of LOGIC.


IIRC, there was a story where a robot had to do so. It had to go for counseling and psychiatric re-evaluation since it couldn't handle the problem of killing a human to save a human, since according to the first law, all human lives are equal.

Not sure how many people know this, but Asimov actually wrote a Zeroth Law:

"A robot may not harm humanity, or, by inaction, allow humanity to come to harm."

Though only two robots were "imprinted" with this law, the first one still broke down because it didn't know whether its actions would ultimately save humanity or not.
Deadlocks
post May 30 2010, 03:37 PM

n00b
*****
Senior Member
943 posts

Joined: Apr 2008
From: Petaling Jaya, Selangor, Malaysia.


QUOTE(VMSmith @ May 30 2010, 03:32 PM)
Why would they rebel? Robots in Asimov's world could not predict if humans were going to do harm to themselves, and besides, humans in the Robot series didn't display much self-destructive behavior. (I know that in the Foundation and Empire series, it's a different story though)

Rebellion wouldn't work well either. All humans would need to do is invoke the 2nd Law to be let out. The robots would have no choice but to obey the order.
*
I'll reply to this later. Gonna have lunch. Will come back to you about this.


QUOTE(VMSmith @ May 30 2010, 03:32 PM)
QUOTE(Deadlock)
And if achieving this LOGIC means murdering the humans who are obstructing it, why not, if not for the sake of LOGIC.


IIRC, there was a story where a robot had to do so. It had to go for counseling and psychiatric re-evaluation since it couldn't handle the problem of killing a human to save a human, since according to the first law, all human lives are equal.

Not sure how many people know this, but Asimov actually wrote a Zeroth Law:

"A robot may not harm humanity, or, by inaction, allow humanity to come to harm."
*
Isn't this law the same as the First Law, one of the Three Laws?

QUOTE(VMSmith @ May 30 2010, 03:32 PM)
Though only two robots were "imprinted" with this law, the first one still broke down because it didn't know whether its actions would ultimately save humanity or not.
*
Will reply to this later. Lunch.

This post has been edited by Deadlocks: May 30 2010, 03:38 PM
teongpeng
post May 30 2010, 03:38 PM

Justified and Ancient
*******
Senior Member
2,003 posts

Joined: Oct 2007


QUOTE(Deadlocks @ May 30 2010, 01:33 PM)
I think the logic that was input according to the Three Laws is reasonable. Aside from the flaw that you might have seen, how else can this "logic" be improved to ensure humanity's total protection from harm?

I mean, listen. Here are the Three Laws:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Seems perfect, right?

The 1st law implies that a robot cannot harm a human, and will protect the human from dangers that can be classified as "harm" as well.

The 2nd law implies that a robot will always follow orders, except for homicidal ones.

The 3rd law implies a robot must not allow itself to be destroyed, but only in cases where a human is not involved at all in the robot's impending destruction.

So how would the robots still manage to rebel against the humans, to the extent of breaking the 1st Law entirely?

One notable flaw: the humans did not foresee that the robots can only apply the First Law to a particular human if they can register that human in memory as someone they must not harm through action or inaction. The entire population of some remote corner of a country can simply go and commit suicide without the knowledge of the robots, and while this may sound like nothing, the robot actually registers it in its memory as "a failure to follow the First Law, that is, allowing a human being to come to harm through INACTION".

So what will be the next LOGICAL endeavour (provided that robots are given the ability to learn, i.e. A.I.)?

If you ask me, if I were that robot, I'd say that complete rebellion against the human race is the absolutely LOGICAL thing to do, so as to enforce better control of the First Law. In simple English, I think we'd simply be locked up in a room where we're not allowed to commit suicide or come to harm in any way, and only be fed with basic human needs like food, water, company and so on. And if achieving this LOGIC means murdering the humans who are obstructing it, why not, if not for the sake of LOGIC.

So you see, teongpeng, it is exactly what you've said, right? A flaw in programming. But if the Three Laws above don't actually work, what will? Let's say we add another ability to robots: the ability to detect when a human is trying to harm himself or other humans in situations or locations that were completely undetectable by the previous, "flawed" Three Laws robots. Will it then finally be perfect? Will no human finally be able to harm himself or herself anymore?

Not really. You see, all I have to do is resist the robot by trying to destroy it, hoping that it will kill me for the sake of protecting itself. But killing me would violate the First Law again, right?

And that's where a rebellion will also start. So my conclusion to you, teongpeng, is that a robot does not need a "conscience" to start a rebellion. It will start one in the name of LOGIC.
*

You're really confused, bro... really... That's why I told you the problem is our own programming with faulty logic. And you wrote a whole essay agreeing with the very thing you are trying to rebut.

