The Ethics of Self-Driving Technology

As the general population braces for the arrival of autonomous vehicles and robot-assisted life in general, the ethical and technical hiccups of the technology have begun to reveal themselves. Programmers have to determine what a vehicle will do in a myriad of situations, and some of those situations raise moral questions that are more common in philosophy classes than in coding think tanks.

For example, what should an autonomous car do if it’s given two options, one of which will endanger its passenger’s life and one of which will endanger the lives of a handful of pedestrians? That’s what one international group of researchers asked 2,000 American residents through six online surveys. The questions varied in how many people would be sacrificed to save the passenger (respondents were put in the position of the endangered passenger), but they always weighed multiple lives against the passenger’s safety.

Picture a real-world version of this: a car driving up a steep mountain pass turns a corner and is confronted by a group of children crossing the street, with no time to brake. Does the car veer off the road, to the passenger’s sure demise, or hit the children, to theirs? What happens if there are multiple passengers in the car? The decision would be made in a split second, and both choices are basically unforgivable.

However, these are choices that must be programmed into autonomous cars, and the cars must be programmed to make the “right” decision, or else their manufacturers could be held liable.

So what would the “right” decision be in that situation? Like most ethical choices, it’s more easily settled by consensus than derived from first principles. The survey of Americans found that most people believed the driverless cars should attempt to minimize the total number of deaths, even at the expense of the occupants of the car.
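To make that “minimize total deaths” preference concrete, here is a minimal, purely illustrative sketch of what such a rule could look like in code. The scenario, class names, and numbers are hypothetical assumptions for the sake of the example, not any manufacturer’s actual logic:

```python
# Purely illustrative sketch: choosing the maneuver with the fewest total
# expected deaths, as the survey respondents favored. All names and numbers
# here are hypothetical, not any real vehicle's decision logic.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    passenger_deaths: int   # expected fatalities inside the car
    pedestrian_deaths: int  # expected fatalities outside the car

    @property
    def total_deaths(self) -> int:
        return self.passenger_deaths + self.pedestrian_deaths

def utilitarian_choice(options: list[Maneuver]) -> Maneuver:
    """Pick the maneuver with the fewest total expected deaths,
    regardless of whether the victims are inside or outside the car."""
    return min(options, key=lambda m: m.total_deaths)

if __name__ == "__main__":
    swerve = Maneuver("swerve off the road", passenger_deaths=1, pedestrian_deaths=0)
    stay = Maneuver("stay the course", passenger_deaths=0, pedestrian_deaths=5)
    print(utilitarian_choice([swerve, stay]).name)  # -> "swerve off the road"
```

Of course, the hard part isn’t the arithmetic; it’s whether anyone would accept a car that weighs its own passenger’s life this way, which is exactly the tension the survey uncovered.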

“It seems that from the responses people gave us, saving their coworkers was not a priority,” explained Jean-Francois Bonnefon of the Toulouse School of Economics. That said, “do the greater good” seemed to be the prevailing philosophy overall, even when there were children in the car.

While that may seem like a helpful consensus, when the same survey takers were asked whether they would buy one of those autonomous cars for their own families, they tended to say no. People want cars that protect them and their passengers at all costs, and that means they would rather be at the wheel themselves than trust an algorithm programmed to make the “right” decision.

The true irony of the entire situation? All this discussion of one philosophical query is distracting from the fact that autonomous technology is, for the most part, life-saving technology that could dramatically reduce the number of vehicle-related deaths. Car crashes kill around 30,000 Americans every single year; if self-driving cars can cut that number substantially, the greater good may already point to a clear decision.
