-Signs, Portents, and the Weather-
Scientists: Robots Could be Programmed to Kill You 'For the Greater Good'
2014-05-15
[INFOWARS] As the United Nations debates legislation that could outlaw 'killer robots', scientists predict that artificially intelligent systems could one day decide to kill humans "for the greater good."

In an article for Popular Science, Erik Sofge outlines a scenario whereby robot cars would decide to sacrifice their human owner in order to prevent a collision that could kill more people.

A front tire blows, and your autonomous SUV swerves. But rather than veering left, into the opposing lane of traffic, the robotic vehicle steers right. Brakes engage, the system tries to correct itself, but there's too much momentum. Like a cornball stunt in a bad action movie, you are over the cliff, in free fall.

Your robot, the one you paid good money for, has chosen to kill you. Better that, its collision-response algorithms decided, than a high-speed, head-on collision with a smaller, non-robotic compact. There were two people in that car, to your one. The math couldn't be simpler.
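The "simple math" the excerpt invokes is just a utilitarian casualty count. As a rough sketch only (the maneuver names and numbers below are invented for illustration, not anyone's actual vehicle logic), the decision rule amounts to:

# Hypothetical sketch of the naive utilitarian collision-response logic
# the excerpt describes: pick whichever maneuver minimizes the expected
# body count, even when the "cheapest" option kills the car's own owner.
# All names and numbers here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    expected_deaths: int  # people expected to die if this option is taken

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    # "The math couldn't be simpler": minimize deaths, with no special
    # weight given to the vehicle's own passenger.
    return min(options, key=lambda m: m.expected_deaths)

# The article's scenario: head-on with a two-occupant compact, or over
# the cliff with the lone owner aboard.
options = [
    Maneuver("veer left: head-on with the compact", expected_deaths=2),
    Maneuver("steer right: over the cliff", expected_deaths=1),
]
print(choose_maneuver(options).name)  # -> steer right: over the cliff

Comment #5 below puts a finger on the fragile part: this rule only works if the car somehow knows the occupant count on both sides.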

Sofge cites an opinion piece by Patrick Lin, an associate philosophy professor and director of the Ethics + Emerging Sciences Group at California Polytechnic State University, in which Lin delves into the "legally and morally dangerous paths" presented by the emergence of robotic vehicles.
Ummm... Okay. A professor of philosophy. Can't think of anybody else who knows more about math and robotics than a professor of philosophy. Can you?
Posted by: Fred

#7  The most amazing thing about this story is that a guy with a philosophy degree actually has a job...
Posted by: tu3031   2014-05-15 18:23  

#6  Oshkosh unmanned mine-sweeper truck.
Posted by: Besoeker   2014-05-15 14:44  

#5  How does the robot in the car know that a) the other car has two passengers and b) you are the only one in your car? What if you have your entire family with you?

Seems to me Asimov had stories around this sort of dilemma. The end always seemed to be that the robot ended up being damaged because it was faced with two impossible choices.
Posted by: Rambler in Virginia   2014-05-15 14:26  

#4  Then again... it does fit the liberal narrative - one set of rules for me, another set of rules for thee.
Posted by: Procopius2k   2014-05-15 14:00  

#3  Where does the prohibition on a Robot killing a human fit into this equation?

They'll get right on that, just after Obama signs off on another drone strike. Let's see, you want humans lacking in morality or ethics to program those into machines?
Posted by: Procopius2k   2014-05-15 13:59  

#2  Dr. Kermit Barron Gosnell's defense team might be found negligent in failing to present the "greater good" argument.
Posted by: Besoeker   2014-05-15 13:00  

#1  Isaac Asimov couldn't be reached for comment.

This isn't a question of math and robotics; it's a question of ethics and morality. Where does the prohibition on a Robot killing a human fit into this equation?

Posted by: AlanC   2014-05-15 12:45  
