Driverless cars pose an ethical and psychological challenge. On the whole, they promise to be safer than human-driven cars, since most crashes result from human error.
But public acceptance and adoption of driverless cars will face hurdles, especially because they delegate complicated moral choices to algorithms. When a difficult moral dilemma arises -- should a car sacrifice its passenger to save three pedestrians? -- humans are no longer in control.
And who decides what choices the algorithms should be programmed to make?
Azim Shariff, assistant professor of psychology and social behavior, examines such moral dilemmas, and how people respond to them, in a new commentary featured in Quartz.