Systems Sciences > Robots & AI, Now vs The Future

tgrrr
post Jul 7 2009, 05:59 PM

What's with all the post bashing in a science lab? Dreamer merely stated his view from the engineering point of view, which is valid.


Anyway, I don't think robots and AI are the same thing, and this topic is somewhat ambiguous.
Robots can include a variety of things, from the robotic arms that put pieces together on an assembly line to the explorer bots sent into narrow or dangerous-to-human places to search for survivors. In essence, a robot carries out a series of pre-programmed tasks.

What AI is, on the other hand, is still a highly debatable subject. The arguments about souls, feelings, sentience and minds are all part of the big debate about AI, if anyone is interested in talking about them.
The Turing Test, essentially proposed as a test for machine intelligence, was conceived back in 1950, and to this day nothing has managed to pass it. That just shows how little we understand about AI and how far away we really are from those sci-fi books and movies. Btw, I don't agree with the Turing Test concept; I'm a strong proponent of the Chinese Room Argument.

So what is Artificial Intelligence?
Are we just commenting on those seemingly intelligent robotic pets that can respond to their owners or show their moods on LCD screens, or those AI bots on the internet that you can apparently chat with?
tgrrr
post Jul 8 2009, 01:43 PM

QUOTE(vivienne85 @ Jul 8 2009, 09:44 AM)
+1..
We may be able to copy the brain design in the future yet we may not be able to understand the intricate design of the brain completely.
*
Assuming we manage to do that, doesn't it mean we have just cloned the human brain? Would you call that artificial intelligence, or just an artificially created human brain?


QUOTE(lin00b @ Jul 8 2009, 01:31 PM)
for low level job, yes, simple programming is adequate. what about a robot to take care of a baby? or to maintain a factory? explore uncharted area?
*
IMHO, the fundamental advantage of AI is the ability to learn and adapt to new tasks and environments.
Application-wise, this means the same floor-washing robot could also be taught to mow the lawn, or to do a variety of other tasks, with a few simple instructions or examples, similar to how you'd perhaps teach a human child. Of course, from the engineering POV Dreamer's right: AI hasn't reached that level of technology yet.
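
To make the "taught by a few simple examples" idea concrete, here's a minimal sketch. Everything in it is invented for illustration (the sensor readings, the task names, the robot itself): it just matches a new situation against the closest remembered example, nearest-neighbour style, which is one simple way a machine could pick up a new task without being reprogrammed.
CODE
import math

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def pick_task(examples, reading):
    """Return the task label of the stored example closest to the new reading."""
    best_label, best_dist = None, float("inf")
    for features, label in examples:
        d = distance(features, reading)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label

# A handful of (sensor reading, task) examples given by the human "teacher".
# Reading = (surface hardness, moisture level); all values made up.
examples = [
    ((0.9, 0.1), "wash_floor"),
    ((0.8, 0.2), "wash_floor"),
    ((0.2, 0.6), "mow_lawn"),
    ((0.3, 0.7), "mow_lawn"),
]

print(pick_task(examples, (0.25, 0.65)))  # -> mow_lawn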


Added on July 8, 2009, 3:55 pm
QUOTE(transhumanist92)
A frequently mentioned reason for the likelihood of human-equivalent AI being created within decades rather than longer is the fact that affordable computing power is approaching most estimates of human brain processing power.

100 billion neurons firing at 200 Hz — this is a basic neurological fact. Yes, there are many additional shades of complexity, including dendritic spines, neurotransmitter concentrations, and so on. Still, all of these put together seem to change the estimated computational requirements by no more than 2-3 orders of magnitude.
A certain amount of computing power can be thought of as being equivalent to the human brain's processing power if we assume that processing power is a roughly finite number; nothing wrong with that.
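
Just to put the quoted figures into actual numbers, here's a back-of-envelope sketch that uses nothing beyond the quote above (100 billion neurons, 200 Hz, and 2 to 3 extra orders of magnitude for the finer details):
CODE
# Back-of-envelope estimate using only the figures quoted above.
neurons = 100e9                         # 100 billion neurons
firing_rate = 200                       # Hz, as per the quote
baseline = neurons * firing_rate        # ~2e13 firing events per second
low = baseline * 10**2                  # plus 2 orders of magnitude of extra complexity
high = baseline * 10**3                 # plus 3 orders of magnitude
print(f"baseline: {baseline:.1e} events/s")
print(f"with the extra complexity: {low:.1e} to {high:.1e} ops/s")

So "human brain processing power", taken at face value from those figures, is a finite number somewhere around 10^15 to 10^16 operations per second.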


QUOTE(transhumanist92)
I can tell that I am speaking with an ideologue when they are unaware of the facts mentioned above, are informed of them, but that information then has no impact whatsoever on their subjective probability estimates of human-equivalent AI being created in the next few decades. Many people seem to act as if computing power has no influence whatsoever.

In contrast, Ray Kurzweil, Hans Moravec, and some other advocates of strong AI have seemingly acted as if computing power is everything — that when we have human-equivalent computing power, we’ll immediately have human-equivalent AI. That is wrong too.

It is easy to take the middle path. Particularly when the notion of human-equivalent computing power being available is combined with neural data from extremely high-resolution brain scans (a brute force argument for the eventual plausibility of human-equivalent AI if there ever was one), critics begin to sound incredulous when they do not revise their probability estimates for AI whatsoever.
So are you taking the middle path?
A lot of computing power sounds great, but is it the essential ingredient for intelligence?
As a programmer, I do not see how merely having much more computing power counts as intelligence, since the computer only executes what it is programmed to do.
Some have suggested it's not the number of teraflops of computing power but the amount of information that can be stored, retrieved and reconstructed that gives rise to intelligence, i.e. the memory-prediction hypothesis.
If that is really the case, then the amount of computing power becomes irrelevant to when we'll be able to develop human-equivalent AI. We may just need the right neural structure, or perhaps the right language in which to write an AI program.
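
For anyone curious what I mean by intelligence coming from stored, retrieved and reconstructed information rather than raw teraflops, here's a toy cartoon of my own (not an implementation of Hawkins' framework, just an illustration of the flavour): a structure that memorises the sequences it has seen and predicts what should come next.
CODE
from collections import defaultdict, Counter

class SequenceMemory:
    """Remembers which item tends to follow which, then predicts from memory."""
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def observe(self, sequence):
        for current, nxt in zip(sequence, sequence[1:]):
            self.transitions[current][nxt] += 1

    def predict(self, current):
        # Predict the most frequently remembered successor, if any.
        if not self.transitions[current]:
            return None
        return self.transitions[current].most_common(1)[0][0]

memory = SequenceMemory()
memory.observe(["wake", "wash", "eat", "work"])
memory.observe(["wake", "wash", "eat", "sleep"])
print(memory.predict("wash"))  # -> eat, reconstructed from stored experience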


QUOTE(transhumanist92)
If I had a computer faster than most expert estimates of human brain computing power and an extremely high resolution scan of the human brain, the burden of proof would be on the critics to say why I couldn’t create a human-equivalent AI immediately. The objections here tend to circulate around dualism, mysticism, biology-worship, quantum mumbo-jumbo, etc.
Yet, if we had sufficiently high-resolution scanners, we could just copy the brain’s design without understanding it.
*
No, the burden of proof would be on the proponent of the idea to prove that it has intelligence equivalent to a human's, rather than being just another faster computer or piece of fancy machinery. That's what the Turing Test is for: to test for machine intelligence.
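
For the programmers here, the Turing Test is really just a blind protocol: a judge converses with two hidden parties and has to say which one is the machine. A minimal sketch, with completely made-up stand-ins for the contestants and the judge's rule, looks something like this:
CODE
# Minimal sketch of the Turing Test protocol. The two respondents and the
# judging rule are trivial stand-ins; only the blind-judging structure matters.
import random

def human(question):
    return "Hmm, let me think about that for a moment..."

def machine(question):
    return "QUERY NOT UNDERSTOOD"  # a hopeless contestant, for illustration

def judge_guess(transcript):
    # Stand-in judging rule: name whichever respondent sounds more robotic.
    scores = {"A": 0, "B": 0}
    for _question, answers in transcript:
        for label, answer in answers.items():
            if answer.isupper():
                scores[label] += 1
    return max(scores, key=scores.get)

def run_turing_test(questions):
    hidden = {"A": human, "B": machine}
    if random.random() < 0.5:                    # hide who is behind which label
        hidden = {"A": machine, "B": human}
    transcript = [(q, {label: r(q) for label, r in hidden.items()}) for q in questions]
    return hidden[judge_guess(transcript)] is machine   # True = machine caught

print(run_turing_test(["What is love?", "Tell me a joke."]))  # -> True here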

This post has been edited by tgrrr: Jul 8 2009, 03:55 PM
tgrrr
post Jul 8 2009, 10:12 PM

Somehow, I feel that this thread doesn't have a clear direction.
And the TS not providing any further input or guidance beyond the first post ain't helping either.
We'll just end up wasting our time and energy continuing this headless debate.
tgrrr
post Jul 9 2009, 11:17 AM

Yes. In the past, a lot of focus has been on processing power, getting more and more of it. But in the end, does that really answer how more processing power gives rise to intelligence?
One can say it's brute force, but the Chinese Room Argument highlights that proof of intelligence has to include proof that the machine is capable of comprehension, or perhaps even self-awareness, before it can be said to have any kind of intelligence.
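
A toy version of the Chinese Room, for anyone who hasn't come across it: the program below produces fluent-looking Chinese replies purely by looking things up in a rule book, and clearly comprehends nothing, which is exactly the point about brute-force symbol manipulation. The rules themselves are invented for illustration.
CODE
# The "room" maps incoming symbols to outgoing symbols by rote lookup.
RULE_BOOK = {
    "你好吗?": "我很好, 谢谢.",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样?": "今天天气很好.",  # "How's the weather?" -> "The weather is fine."
}

def chinese_room(symbols):
    # The operator just follows the book; no understanding is involved anywhere.
    return RULE_BOOK.get(symbols, "对不起, 我不明白.")  # default: "Sorry, I don't understand."

print(chinese_room("你好吗?"))  # looks fluent from outside, yet nothing is "understood"
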
tgrrr
post Jul 10 2009, 09:39 AM

No, I think insect intelligence or swarm intelligence is a bunch of independent simple units that interact with each other and with the environment, producing self-organizing and seemingly intelligent behaviour. It's like the main antagonists in "Prey" by the late Michael Crichton. For example, some ant species can build monumental and architecturally challenging structures without having anything like human intelligence.
Perhaps the simplest example of self-organizing behaviour is prey flocking, where simple-minded organisms flock together in the presence of a predator, apparently making it harder for the predator to single out and attack an individual.
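
Flocking is also nice because it's trivially easy to simulate. Here's a minimal sketch along the lines of Reynolds' boids, with parameter values I've picked arbitrarily: each agent follows only three local rules (cohesion, separation, alignment), yet the group holds together with no leader and no global plan.
CODE
import random

class Boid:
    def __init__(self):
        self.x, self.y = random.uniform(0, 100), random.uniform(0, 100)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

def step(boids, radius=20.0):
    for b in boids:
        near = [o for o in boids
                if o is not b and abs(o.x - b.x) < radius and abs(o.y - b.y) < radius]
        if not near:
            continue
        n = len(near)
        cx, cy = sum(o.x for o in near) / n, sum(o.y for o in near) / n    # cohesion
        ax, ay = sum(o.vx for o in near) / n, sum(o.vy for o in near) / n  # alignment
        sx = sum(b.x - o.x for o in near if abs(o.x - b.x) < 5)            # separation
        sy = sum(b.y - o.y for o in near if abs(o.y - b.y) < 5)
        b.vx += 0.01 * (cx - b.x) + 0.05 * (ax - b.vx) + 0.05 * sx
        b.vy += 0.01 * (cy - b.y) + 0.05 * (ay - b.vy) + 0.05 * sy
    for b in boids:
        b.x += b.vx
        b.y += b.vy

flock = [Boid() for _ in range(30)]
for _ in range(100):
    step(flock)
print("flock centre:", sum(b.x for b in flock) / 30, sum(b.y for b in flock) / 30)
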
tgrrr
post Jul 10 2009, 05:43 PM

But if it were pure trial and error, there would be many failures before they got one that works, and nature is seldom that inefficient. Take, for example, those 8-metre-tall termite monoliths built by termites about 1 cm in size.
The interesting thing is that even if they had the whole blueprint in their DNA, they would still need to coordinate their building effort, or a lop-sided structure could easily come tumbling down.
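
The mechanism usually proposed is stigmergy: a termite is more likely to add material where material has already been added, so the half-built structure itself does the coordinating. A toy sketch (all numbers invented) shows how early deposits snowball into one dominant column with no blueprint and no termite talking to another:
CODE
import random

WIDTH = 30
heights = [1] * WIDTH   # start from a thin, even layer of material

def choose_site(heights):
    # A termite is drawn to spots where material already exists: the chance of
    # picking a site grows with the square of what is already built there.
    weights = [h * h for h in heights]
    return random.choices(range(WIDTH), weights=weights)[0]

random.seed(1)
for _ in range(5000):
    heights[choose_site(heights)] += 1

print("tallest column:", max(heights), " average:", sum(heights) / WIDTH)
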
tgrrr
post Jul 12 2009, 01:24 PM

QUOTE(Thinkingfox @ Jul 11 2009, 07:30 PM)
I'm sure it's not pure trial and error, but also governed by instincts. But I'm also quite sure not all ants build structures which are identical, right? I'm sure the same species in different areas (with different environments) have slightly different methods of doing things. These differences are probably due to different results from trial and error.
*
We haven't yet clearly defined what intelligence is.
The definition of trial and error itself implies there could be some kind of learning process, whereby the same error is not repeated, and that could very well indicate some kind of intelligence.
But many other animals could have the same kind of intelligence, except that each of them is a single organism rather than a swarm.
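
To illustrate the distinction, here's a toy comparison on an entirely made-up task (finding the one working key out of ten): blind trial and error versus trial and error with a memory of failures, where a known failure is never tried again. The second needs roughly half the attempts on average, which is the sort of "same error is not repeated" behaviour I'd count as rudimentary learning.
CODE
import random

def blind_trials(working_key, keys, rng):
    attempts = 0
    while True:
        attempts += 1
        if rng.choice(keys) == working_key:
            return attempts

def learning_trials(working_key, keys, rng):
    attempts, failed = 0, set()
    while True:
        attempts += 1
        key = rng.choice([k for k in keys if k not in failed])  # never repeat a known failure
        if key == working_key:
            return attempts
        failed.add(key)

rng = random.Random(0)
keys = list(range(10))
blind = sum(blind_trials(7, keys, rng) for _ in range(1000)) / 1000
learned = sum(learning_trials(7, keys, rng) for _ in range(1000)) / 1000
print(f"blind: ~{blind:.1f} tries on average, with memory: ~{learned:.1f}")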

Anyway, those termite monoliths are regarded as one of the seven natural architectural wonders of the world.
The rough comparison given is that humans would have to build a 1 km-tall skyscraper, except humans can't scale walls the way termites can. I'm no architect, so I can't verify that.
tgrrr
post Jul 14 2009, 06:04 PM

QUOTE(dreamer101 @ Jul 13 2009, 08:31 PM)
Thinkingfox,

I thought they use MRI to scan the brain.  But, the point is STILL VALID.  We DO NOT KNOW how much of our brain is used.  And, a lot of our so called "KNOWLEGDE" could be something that is pre-existing in our brain.

Dreamer
*
And we also do not know how our brain is being used.
I'm disappointed nobody is interested in the memory-prediction framework hypothesis.

 
