Do you think it's possible for machines to control human life one day?
We are already surrendering control of our lives to machines bit by bit, from the ECU in your car to autopilot software to the life-support machines in hospitals.
Isaac Asimov famously wrote the Three Laws of Robotics, which amount to a strict priority ordering (see the rough sketch after the list):
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
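Just to make that ordering concrete: you could imagine the laws as an ordered veto chain, where each law only gets a say if the laws above it are satisfied. Here's a rough Python sketch; all the names (Action, harms_human, and so on) are made up for illustration, obviously not any real robotics API:

# Rough sketch: Asimov's Three Laws as an ordered veto chain.
# Every name here (Action, harms_human, etc.) is hypothetical,
# purely to illustrate the priority ordering of the laws.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool       # would this action injure a human?
    neglects_human: bool    # would *not* acting let a human come to harm?
    ordered_by_human: bool  # was this action ordered by a human?
    endangers_robot: bool   # would this action risk destroying the robot?

def permitted(action: Action) -> bool:
    # First Law: never injure a human, or allow harm through inaction.
    if action.harms_human or action.neglects_human:
        return False
    # Second Law: obey human orders (already filtered by the First Law),
    # even when doing so endangers the robot itself.
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation applies only once Laws 1 and 2 are satisfied.
    return not action.endangers_robot

# An order that would harm a human is refused (First Law trumps Second):
print(permitted(Action("push bystander", True, False, True, False)))                    # False
# Self-preservation yields to a human order (Second Law trumps Third):
print(permitted(Action("enter burning building to rescue", False, False, True, True)))  # True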
But that's sci-fi. In real life, there are no such rules. Unlike cloning, you can push AI, robotics, and internet-connected software as far as you fancy. Nothing stops you from building a monster robot, or software that takes down everything connected to the internet, and probably a few countries along with it.
So should developers be subject to strict ethical rules? And who would police those rules and stop rogue developers from unleashing bots that wreak havoc in other people's lives?
More importantly, do you think the tipping point will come: the day machines and software lock humans out, start doing their own thing beyond our control, and even impose control over us "for our own good" - the Terminator scenario? (Actually, self-replicating viruses and worms are already spreading out of control and causing real economic damage...)
Will the Terminator-style doomsday ever happen? A question about AI & robotics
May 18 2010, 10:56 AM