The moral thicket slowing adoption of driverless cars


Associate Professor Azim Shariff examines what is needed to spur greater acceptance of driverless cars.

The technical hurdles to driverless cars have nearly been cleared, and the vehicles are just around the corner – but prospective passengers have trust issues.

When riding in an autonomous vehicle, passengers hand life-and-death decisions over to algorithms. And when it comes to moral choices – should a car sacrifice its passenger to avoid, say, killing two pedestrians? – passengers in driverless cars are forfeiting their agency.

Hence the trust issues, writes Azim Shariff, associate professor of psychology and social behavior, in a commentary for Nature Human Behaviour co-written with Jean-Francois Bonnefon of the Toulouse School of Economics and Iyad Rahwan of the Massachusetts Institute of Technology. Most Americans – 78 percent – are afraid of riding in an autonomous vehicle, and only 19 percent trust such cars.

And that lack of trust could stall the autonomous vehicle industry, even though driverless cars are, on the whole, safer than cars driven by humans.

Auto manufacturers have to program driving algorithms to make certain moral decisions. The algorithms can either weigh every life equally – and thus, in extreme situations, be willing to sacrifice a passenger to save a group of pedestrians – or place the passenger's life above the lives of others, potentially endangering two or more pedestrians to save a single passenger.

In other words, manufacturers must choose between morally utilitarian cars and preferentially self-protective ones – a contrast sketched in the illustrative example below.
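To make that contrast concrete, here is a minimal, hypothetical sketch – not any manufacturer's actual code – showing how the two policies could be expressed as different weightings in a single harm-minimizing decision rule. The scenario, maneuver names, and casualty estimates are all illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class Maneuver:
        """One possible action and its expected casualties (assumed inputs)."""
        name: str
        passenger_deaths: float   # expected passenger fatalities
        pedestrian_deaths: float  # expected pedestrian fatalities

    def expected_harm(m: Maneuver, passenger_weight: float) -> float:
        """Weighted casualty count. A passenger_weight of 1.0 treats every
        life equally (utilitarian); larger values are self-protective."""
        return passenger_weight * m.passenger_deaths + m.pedestrian_deaths

    def choose(maneuvers: list[Maneuver], passenger_weight: float) -> Maneuver:
        """Pick the maneuver that minimizes weighted expected harm."""
        return min(maneuvers, key=lambda m: expected_harm(m, passenger_weight))

    # The scenario from the article: swerve (sacrificing the passenger)
    # or stay the course (striking two pedestrians).
    options = [
        Maneuver("swerve into barrier", passenger_deaths=1.0, pedestrian_deaths=0.0),
        Maneuver("stay in lane", passenger_deaths=0.0, pedestrian_deaths=2.0),
    ]

    print(choose(options, passenger_weight=1.0).name)   # utilitarian: "swerve into barrier"
    print(choose(options, passenger_weight=10.0).name)  # self-protective: "stay in lane"

In this toy framing, a single parameter separates the two designs: with an equal weighting the car sacrifices its passenger to spare the two pedestrians, while a heavily self-protective weighting keeps the passenger safe at their expense.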

The problem is that while people say they want cars to be utilitarian – and believe that is the most ethical option – as consumers they prefer to buy self-protective cars. So car manufacturers face a dilemma: make their cars self-protective and risk public outrage, or make them utilitarian and risk that no one buys them.

Since regulators are unlikely to allow cars that always protect their passengers, regardless of the harm to others, onto the market, Shariff says people will need to be convinced of the cars' superior overall safety even when they are programmed to prioritize others' lives over those of their passengers. And manufacturers would do well to design cars that conspicuously signal their owner's virtue – a powerful way of convincing people to buy ethical products.

"Consumers are understandably reluctant to invest in a product that could be programmed to purposefully harm them. Nothing on the market has done that before," Shariff says. "But if they can be convinced of the product’s superior overall safety – and if they can use these safer, more efficient cars as signals of their own virtue – then they’d be more inclined to buy it."

Another hurdle is that people, and the media, are likely to fixate on every failure of autonomous cars while paying far less attention to the incremental optimizations and large overall safety gains they offer.

For instance, while roughly 40,200 Americans died in traffic accidents in 2016, it was the first fatality involving Tesla's Autopilot feature that drew coverage from every major news organization. Such outsized media coverage feeds people's fears and could deter customers, prompt stifling regulation, or create heavy liability exposure.

Ultimately, a new social contract must be agreed upon – and fast. The rules and norms that allowed cars to dominate the landscape were refined over the course of decades, starting more than a century ago. The transition to driverless cars will be much quicker.

"Every day that goes by without driverless cars, people die. The truth is that humans are bad drivers, and driverless cars are predicted to be much safer. To ensure that we reach mass adoption as soon as possible, we need to sort out these issues of trust," Shariff says.

September 29, 2017
