New Test Helps Driverless Cars Make ‘Moral’ Decisions

'What constitutes a moral decision when we’re behind the wheel?'

Autonomous vehicles with digital sensor visualization on multi-lane highway.
iStock/gorodenkoff

Researchers have validated a technique for studying how people make “moral” decisions when driving, with the goal of using the resulting data to train the artificial intelligence used in autonomous vehicles. The technique was put to the test with the most critical audience the researchers could think of: philosophers.

Veljko Dubljević, a professor in the Science, Technology & Society program at North Carolina State University, says, “Very few people set out to cause an accident or hurt other people on the road. Accidents often stem from low-stakes decisions, such as whether to go five miles over the speed limit or make a rolling stop at a stop sign. How do we make these decisions? And what constitutes a moral decision when we’re behind the wheel?

“We needed to find a way to collect quantifiable data on this, because that sort of data is necessary if we want to train autonomous vehicles to make moral decisions,” Dubljević says. “Once we found a way to collect that data, we needed to find a way to validate the technique – to demonstrate that the data is meaningful and can be used to train AI. For moral psychology, the most detail-oriented set of critics would be philosophers, so we decided to test our technique with them.”

The technique the researchers developed is based on the Agent Deed Consequence model, which posits that people take three things into account when making a moral judgment: the agent, which is the character or intent of the person who is doing something; the deed, or what is being done; and the consequence, or the outcome that results from the deed.
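
The ADC breakdown lends itself to a structured representation. As a purely illustrative sketch (the class and field names below are assumptions, not the researchers’ actual data format), a driving scenario could be encoded along those three dimensions like this:

```python
# Hypothetical encoding of a driving scenario under the Agent Deed Consequence
# (ADC) model: who is acting and with what intent, what they do, and what
# outcome follows. Names and example values are illustrative only.
from dataclasses import dataclass

@dataclass
class ADCScenario:
    description: str   # short vignette shown to study participants
    agent: str         # character or intent of the driver
    deed: str          # the action being taken
    consequence: str   # the outcome that results from the deed

example = ADCScenario(
    description="A commuter makes a rolling stop at a quiet intersection.",
    agent="distracted driver running late for work",
    deed="rolling stop at a stop sign",
    consequence="a pedestrian has to pause at the curb",
)
```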

Specifically, the technique tests how people judge the morality of driving decisions by sharing a variety of traffic scenarios with test subjects, and then having the test subjects answer a series of questions about moral acceptability and various aspects of what took place in each scenario.
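
One way such judgments could feed an AI training pipeline, offered here only as a hedged sketch (the 1-to-7 rating scale and the simple averaging step are assumptions for illustration, not the study’s published protocol), is to collapse each scenario’s acceptability ratings into a single label:

```python
# Hypothetical aggregation of participants' moral-acceptability ratings for one
# scenario into a 0-1 training label. The 1-7 scale and plain averaging are
# illustrative assumptions, not the researchers' actual method.
from statistics import mean

def acceptability_label(ratings: list[int], scale_max: int = 7) -> float:
    """Map a list of 1..scale_max acceptability ratings to a 0-1 label."""
    if not ratings:
        raise ValueError("need at least one rating")
    return (mean(ratings) - 1) / (scale_max - 1)

# Ratings from five participants for a single rolling-stop scenario:
print(acceptability_label([5, 6, 4, 7, 5]))  # ≈ 0.73
```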

For this validation study, the researchers enlisted 274 study participants with advanced degrees in philosophy. The researchers shared driving scenarios with the study participants and asked them about the morality of the decisions that drivers made in each scenario. The researchers also used a validated measure to assess the study participants’ ethical frameworks.

Dubljević says, “Different philosophers subscribe to different schools of thought regarding what constitutes moral decision-making. For example, utilitarians approach moral problems very differently from deontologists, who are focused on following rules. In theory, because these schools of thought approach morality differently, the philosophers’ judgments of what constituted moral behavior should have varied depending on which framework they used.

"What was exciting here is that our findings were consistent across the board. Utilitarians, deontologists, virtue ethicists – whatever their school of thought, they all reached the same conclusions regarding moral decision-making in the context of driving. That means we can generalize the findings. And that means this technique has tremendous potential for AI training. This is a significant step forward. The next step is to scale up testing among broader populations and in multiple languages, with the goal of determining the extent to which this approach can be generalized both within western culture and beyond.”