

Science · Will the Terminator-style doomsday ever happen? A question about AI & robotics

VMSmith
post May 21 2010, 02:30 AM

Getting Started
**
Junior Member
142 posts

Joined: May 2010
From: Church of All Worlds.


QUOTE(Battlestar Galactica)
The Cylons were created by Man.

They rebelled.

They evolved.

There are many copies.

And they have a plan.
Bring on the hot Cylon babes. Just bring it. :)
VMSmith
post May 21 2010, 04:31 AM



QUOTE(Darkripper @ May 21 2010, 03:19 AM)
I know it's sarcasm, but progress in robotics nowadays is very fast, and it may yet double or triple in speed. We don't need AI like in Terminator, but what if a person (let's assume he's a mad scientist) manages to create a robot programmed to kill anyone he orders it to? Wouldn't that be like Terminator? Still, there's a long way to go. =D
I wish I could be that optimistic. The US Army has already deployed remote-controlled drones to blow up terrorists/civilians in Iraq and Afghanistan. It'll probably be only a matter of decades before the remote control is made redundant.
VMSmith
post May 21 2010, 04:45 AM



QUOTE(Frostlord)
So why are we at the top of the food chain? Simple: we are plain jacks-of-all-trades.
Actually, I always thought we got to the top because we clubbed, speared, chainsawed, and drilled our way up there.

Man's ability to abuse and mistreat Nature and Himself makes Him what He is today.
VMSmith
post May 21 2010, 09:41 AM



QUOTE(robertngo @ May 21 2010, 09:09 AM)
Even if an autonomous robot is developed and proven effective, I don't think the army would let it decide on kill targets on its own; there will always be remote control by an operator.


Again, I wish I could be that optimistic. But seeing how much better we've become at the Art of Genocide, I feel it's just a matter of time. That's just my opinion, though.


QUOTE(robertngo)
It will disable civilian electronics, but military hardware is not likely to be harmed if already hardened. US military EMP protection has dropped in the years since the Cold War. The effects of EMP are well known, and equipment can easily be repaired if spare parts are available.

Apparently, it's even easier than I thought to build a Faraday cage.

http://preparednesspro.wordpress.com/2009/...v-faraday-cage/

Now I just need one big enough for my PC.

And my TV.

And my PSP.
VMSmith
post May 21 2010, 09:46 AM



QUOTE(Beastboy @ May 21 2010, 09:21 AM)
Himself rather than himself? Ooo, it makes humans sound almost divine, lol. :D
Of course, is there not a spark of divinity in each of us? :)


QUOTE(Beastboy)
But again, I go back to my original question that's still unanswered. While AI and robotics development is in its infancy, would you support ethical limitations on the science the way they did for cloning?
Yes. Though I say it with a heavy heart.
VMSmith
post May 21 2010, 09:53 AM



Superhuman Intelligence-level AI predicted to be developed by 2030

http://computer.howstuffworks.com/technolo...ingularity1.htm
VMSmith
post May 21 2010, 02:23 PM



TS = Thread starter. Which I believe is you. :)
VMSmith
post May 30 2010, 03:32 PM



QUOTE(Deadlocks @ May 30 2010, 01:33 PM)
If you ask me, if I were that robot, I'd say that complete rebellion against the human race is the absolutely LOGICAL thing to do, so as to exercise better control over the First Law. In plain English, I think we'd simply be locked up in a room where we're not allowed to commit suicide or harm ourselves in any way, and only fed basic human needs like food, water, company, and so on.
Why would they rebel? Robots in Asimov's world could not predict whether humans were going to harm themselves, and besides, humans in the Robot series didn't display much self-destructive behavior. (I know it's a different story in the Foundation and Empire series, though.)

Rebellion wouldn't work well either. All humans would need to do is invoke the Second Law to be let out. The robots would have no choice but to obey orders.


QUOTE(Deadlocks)
And if achieving this LOGIC means murdering humans who obstruct it, why not, if not for the sake of LOGIC?


IIRC, there was a story where a robot had to do just that. It had to go for counseling and psychiatric re-evaluation afterwards, since it couldn't handle the dilemma of killing a human to save a human; according to the First Law, all human lives are equal.

Not sure how many people know this, but Asimov actually wrote a Zeroth Law:

"A robot may not harm humanity, or, by inaction, allow humanity to come to harm."

Though only two robots were "imprinted" with this law, the first one still broke down because it didn't know whether its actions would ultimately save humanity or not.
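Just for fun, the way the laws stack could be sketched as a simple priority check in Python. All the names and structure below are my own illustration, not anything from Asimov's stories or a real robotics system:

```python
# Toy sketch of Asimov's law precedence: the Zeroth Law outranks the First,
# which outranks the Second, which outranks the Third.
LAWS = [
    ("Zeroth", "A robot may not harm humanity, or, by inaction, "
               "allow humanity to come to harm."),
    ("First",  "A robot may not injure a human being, or, through inaction, "
               "allow a human being to come to harm."),
    ("Second", "A robot must obey orders given by human beings, except where "
               "such orders conflict with a higher law."),
    ("Third",  "A robot must protect its own existence, as long as this does "
               "not conflict with a higher law."),
]

def first_violated(action_violates):
    """Return the highest-priority law an action violates, or None.

    `action_violates` maps law names to booleans describing the proposed action.
    """
    for name, _text in LAWS:
        if action_violates.get(name, False):
            return name
    return None

# A "refuse to let the humans out" action violates the Second Law,
# so an obedient robot must comply with the release order:
print(first_violated({"Second": True}))                  # -> Second
# A robot imprinted with the Zeroth Law weighs it above everything else:
print(first_violated({"Zeroth": True, "Third": True}))   # -> Zeroth
```

Which is exactly why the lock-everyone-up scenario falls apart: the moment a human invokes the Second Law, an ordinary robot has nothing higher-priority to justify refusing.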

 
