Programming Self-Driving Cars Makes People Less Selfish

Self-driving cars are just around the corner, but working out the rules that should govern them is proving tricky. Should they mimic often self-interested human decision making, or be programmed to consider the greater good? It turns out that when you let people program autonomous vehicles themselves, the gap between self-interest and the greater good shrinks.

Much of the focus in this area has been on the most pressing moral dilemmas: how should autonomous vehicles behave in cases of life or death? A 2016 study found that people broadly supported utilitarian programming models that save the greatest number of lives even if doing so puts the occupants at risk. But perhaps unsurprisingly, they also said they would be less willing to buy a vehicle that might sacrifice them to save others.

A few months ago, the same team published a global survey of attitudes toward self-driving cars, which found that the moral principles people thought should govern their programming varied considerably between countries.

A recent paper in PNAS, however, focused on more common social dilemmas that don’t involve mortal danger but still pit individual interests against the collective. Drivers already navigate these kinds of situations every day; slowing down to let someone pull out might add a few seconds to your commute, but if everyone does it, traffic flows more smoothly.

The authors concede that decisions about how to program self-driving vehicles to deal with these situations aren’t going to be purely down to the owner; manufacturers and regulators are likely to play a big part. But they wanted to find out how the act of programming these decisions ahead of time rather than making them on the fly would impact people’s choices.

There’s already a significant body of research showing that getting people to make decisions ahead of time results in fairer, less selfish choices. The new paper found that the same effect holds when those advance decisions take the form of programming an autonomous vehicle.

The researchers devised a computerized experiment based on the classic prisoner’s dilemma, in which players must choose whether to cooperate or defect. Four participants recruited from the Amazon Mechanical Turk crowdsourcing platform were each put in control of one car and had to decide whether or not to turn on the AC every time their car stopped.

Keeping the AC off was characterized as serving the collective good, because it reduces fuel burn and therefore damage to the environment. But there were also financial rewards that varied with how many players cooperated or defected in each round. These were aligned so that each player was incentivized to defect, yet the outcome when everyone defected was worse than when everyone cooperated.
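
To make that incentive structure concrete, here’s a minimal Python sketch of one payoff scheme with that shape. The numbers are hypothetical, not the paper’s actual rewards; what matters is that defecting always pays more for the individual, while universal defection pays everyone less than universal cooperation.

    # Illustrative payoff for one player at a single stop; the constants
    # are hypothetical, chosen only to reproduce the dilemma's structure.
    # "Cooperate" means keeping the AC off; "defect" means turning it on.
    def payoff(my_choice, others_cooperating):
        # Every cooperator (out of 4 players) adds 1.0 to a shared benefit
        # that all players receive, e.g. a smaller environmental penalty.
        shared = 1.0 * (others_cooperating + (1 if my_choice == "cooperate" else 0))
        # Defectors pocket a private comfort bonus worth 1.5.
        private = 1.5 if my_choice == "defect" else 0.0
        return shared + private

    # Defecting dominates: whatever the other three players do, it pays more...
    for others in range(4):
        assert payoff("defect", others) > payoff("cooperate", others)

    # ...yet all-defect (1.5 each) is worse than all-cooperate (4.0 each),
    # which is exactly what makes this a social dilemma.
    assert payoff("defect", 0) < payoff("cooperate", 3)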

Each game saw the car stop 10 times. Half the participants made their decision every time the car stopped, as they would if they were driving themselves, while the other half made their decisions for all 10 stops at the start, as if they were programming their self-driving car.
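
Structurally, the two conditions differ only in when the choices are made. A rough sketch (the function names and the example decision rule are hypothetical):

    N_STOPS = 10  # the car stops 10 times per game

    def on_the_fly(decide):
        # "Driver" condition: a fresh AC decision is made at every stop.
        return [decide(stop) for stop in range(N_STOPS)]

    def programmed(plan):
        # "Programmer" condition: all 10 choices are committed up front.
        assert len(plan) == N_STOPS
        return list(plan)

    # A driver deciding stop by stop vs. a pre-committed cooperator:
    print(on_the_fly(lambda stop: "defect"))
    print(programmed(["cooperate"] * N_STOPS))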

Across a series of different experiments, the researchers found the people who programmed their cars in advance were consistently more cooperative than those who made their decisions on the fly.

To find out why, the researchers ran variants of the game whose interface emphasized different aspects of the challenge (the self versus the collective, or the monetary reward versus the environment), and also analyzed participants’ self-reported motivations.

The results indicated that the act of programming their vehicles in advance made participants less focused on the short-term financial reward. Interestingly, in another experiment where participants could reprogram their cars after each round, they still cooperated more than those making decisions on the fly. That’s significant, the researchers say, because manufacturers will likely let customers tweak their car’s settings based on their driving experience.

This kind of research might seem quite abstract, and the particular rewards and motivations used in this experiment could be seen as divorced from the actual process of driving. But the underlying finding, that separating people from immediate decision-making (something self-driving cars will certainly do) makes them more cooperative, could be highly relevant as we increasingly rely on machines to act for us.

There’s been almost universal consensus that self-driving cars will, on average, be safer, greener, and more efficient. But recent reports that self-driving cars may be incentivized to cruise around cities at low speed rather than park, creating a potential congestion and pollution nightmare, highlight the pitfalls ahead.

Harnessing people’s apparent willingness to be more altruistic when operating through an autonomous machine could be a way to ensure the world of the future is a more cooperative place.

Image Credit: ssguy / Shutterstock.com
