Can We Trust Robot Cars to Make Hard Choices?

The ethics of robot cars has been a hot topic recently. In particular, if a robot car encounters a situation where it must hit one of two people, which should it choose, and how should it make that choice? It's a modern version of the trolley problem, which many of us studied in introductory philosophy classes.

Imagine a robot car is driving along when two people run out onto the road, and the car cannot avoid hitting one or the other. Assume neither person can get away, and the car cannot detect them in advance. Various thinkers have suggested how to make an ethical decision about who the car should hit:

  • The robot car could run code to make a random decision.
  • The robot car could hand off control to a human passenger.
  • The robot car could make a decision based on a set of values pre-programmed by the car's designers or by its owner.

The last of these deserves a little more detail. What would these values be like?

How will robot cars make ethical decisions?

They might, for example, tell the car to hit an adult rather than a child, or a sturdier person rather than a smaller, more vulnerable one. The car might even try to weigh the value of one life against another: using facial recognition, it might hit the criminal who just murdered someone rather than the scientist working on a cure for cancer.
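To make this concrete, here is a minimal sketch of what such a pre-programmed value policy might look like. Every name and weight below is a hypothetical illustration, not a real autonomous-vehicle API:

```python
import random

# Hypothetical harm costs set by the car's designer or owner;
# the car steers toward whichever unavoidable option costs least.
HARM_COST = {
    "child": 1.0,   # assumed: children are the most protected category
    "adult": 0.5,
}

def choose_unavoidable_target(detections):
    """Pick which unavoidable target to hit.

    detections: list of (person_id, category) pairs.
    Returns the id with the lowest programmed harm cost,
    breaking ties randomly (the 'random decision' option above).
    """
    costs = [(HARM_COST.get(category, 0.75), person_id)
             for person_id, category in detections]
    lowest = min(cost for cost, _ in costs)
    tied = [person_id for cost, person_id in costs if cost == lowest]
    return random.choice(tied)

# Faced with an adult and a child, this value table directs
# the car toward the adult.
print(choose_unavoidable_target([("p1", "adult"), ("p2", "child")]))  # p1
```

Note that the hard ethical work here happens before the car ever drives: someone has to fill in that table of weights.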

In each of these examples, however, the computer is leaving the decision to chance, outsourcing it to someone else, or deferring to pre-programmed values.

Humans do the exact same things. When faced with decisions, we flip coins, ask others to decide for us, or look to various moral authorities for the right answer.

However, as humans, we also do something else when faced with hard decisions: In particularly ambiguous situations, when no choice is obviously best, we choose and justify our decision with a reason. Most of the time we are not aware of this, but it comes out when we have to make particularly hard decisions.

The truth is, the world is full of such hard decisions—determining how robot cars (or robots generally) can appropriately deal with such choices will be critical to their development and adoption.

To figure out how machines might make these hard choices, it's a good idea to look at how humans make them. In her TED talk, "How to Make Hard Choices," Dr. Ruth Chang argues that hard decisions are defined by how the alternatives relate to one another.

In easy decisions, for example, one alternative is clearly better than another. If we prefer natural colors to artificial colors, it is easy to choose to paint our room light beige over fluorescent pink. With hard decisions, however, one alternative seems better in some ways and the other better in different ways.

But neither is better overall.

We may have to choose between taking a job offer in the countryside or keeping our current job in the city. Perhaps we equally value living in the city and the challenge of the new job. So we're stuck, because both alternatives appear equal. In this case, Chang argues, to make a meaningful decision we must go back and reevaluate our original values: What actually matters more to us, living in the city or our work?

Alternatives in hard decisions are not easily quantifiable.

Critically, she says, when we make our decision, we get to justify it with a reason.

Whether we prefer beige or fluorescent colors, the countryside or a certain set of job activities, these preferences are not objectively measurable. There is no ranking system anywhere that says beige is better than pink or that living in the countryside is better than a certain job. If there were, all humans would be making the same decisions. Instead, we each invent reasons to make our decisions (and when societies do this together, we create our laws, social norms, and ethical systems).

But a machine could never do this…right? You’d be surprised. Google recently announced, for example, that it had built an AI that can learn and master video games. The program isn’t given commands but instead plays games again and again, learning from experience. Some have speculated that such a development would be useful for a robot car.

How might this work?

Instead of making a random decision, outsourcing the choice, or reverting to pre-programmed values, a robot car could scour the cloud, processing immense amounts of data on local laws, past legal rulings, the values of the people and society around it, and the consequences it observes flowing from similar decisions over time.

In short, robot cars, like humans, would use experience to invent their own reasons.
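As a rough illustration, such a policy can be sketched as a simple feedback loop: the car scores candidate actions by the consequences it has observed rather than by fixed rules. The action names and feedback signal below are invented for illustration; a real system would learn from vastly richer data than this toy loop:

```python
from collections import defaultdict

class ExperienceDrivenPolicy:
    """Scores candidate actions by observed consequences, not fixed rules."""

    def __init__(self, learning_rate=0.1):
        self.scores = defaultdict(float)  # learned value of each action
        self.lr = learning_rate

    def choose(self, actions):
        # Prefer the action whose observed consequences have scored best.
        return max(actions, key=lambda action: self.scores[action])

    def observe(self, action, feedback):
        # feedback: a scalar standing in for observed consequences,
        # e.g. legal rulings or social approval (positive) vs. harm
        # and liability (negative). Nudges the score toward it.
        self.scores[action] += self.lr * (feedback - self.scores[action])

policy = ExperienceDrivenPolicy()
# Simulated experience: hard braking is repeatedly judged better
# than swerving onto the sidewalk.
for _ in range(100):
    policy.observe("brake_hard", feedback=1.0)
    policy.observe("swerve_to_sidewalk", feedback=-1.0)

print(policy.choose(["brake_hard", "swerve_to_sidewalk"]))  # brake_hard
```

The key difference from the pre-programmed approach is that no one hands the machine its values; the weights emerge from accumulated experience, much as our own reasons do.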

What is fascinating about Chang's talk is her point that when humans engage in such a reckoning process, inventing and choosing our own reasons during hard times, we view it as one of the highest forms of human development.

Asking others to make decisions for us, or leaving life to chance, is a form of drifting. But inventing and choosing our own reasons during hard times is referred to as building one’s character, taking a stand, taking responsibility for one’s own actions, defining who one is, and becoming the author of one’s own life.

Furthermore, as humans, we put our trust in such people.

No one with common sense would entrust their life, well-being, or money to a person who makes random decisions, asks others to decide everything for them when the going gets tough, or is drifting through life.

We trust others when we have insight into their values and decision-making process and know that they will stand up for those values—and it may take the same degree of understanding for us to trust machines.

Unfortunately, the general public is far removed from understanding how artificial intelligence makes decisions. The creators of robot cars, drones, and other "thinking" machines may have an incentive to keep this information private for intellectual property or security reasons. And many in the general public may find artificial intelligence inscrutable or fear it is too difficult to understand.

And here we must return to Chang's concluding point. As we get closer to stepping into robot cars, inviting robots into our homes, and allowing more robots and drones into law enforcement and the military, we must not just drift along. The public must educate itself about how these machines think, and companies and governments need to be more transparent about this information.

In some cases, we might conclude robots can make better decisions than humans. So far, robot cars have the better track record—700,000 accident-free miles as of last April (and more by now). In fast-paced scenarios, humans are not well equipped to make hard decisions, often defaulting to instinct. In other cases, however, the risk may prove to be too high.

And we will be asked to make even tougher choices.

In a world where artificial intelligence may think but not necessarily care whether it is rewarded or punished for its decisions, we will need to develop new mechanisms beyond our current punishment-based justice system to keep the peace. And if there is a large power difference between humans and artificial intelligence, how we enforce laws becomes a pressing challenge.

As we face these hard decisions, we must do more than just drift along. We must decide what is most important to us and how we want to be the authors of our own lives in a world shared with robots. The question might not be if robots can make hard decisions, but if humans can.

Image Credit: Shutterstock.com

Darlene Damm
Darlene Damm is faculty chair and head of social impact at Singularity University. She has spent nearly two decades working on moonshots and initiatives designed to solve our world's toughest social problems and empower people to create abundant futures. At Singularity University, Darlene focuses on helping people understand how exponential technologies are creating abundance in the global grand challenge areas, as well as articulating and preparing for new social challenges created by exponential technologies, including technological unemployment, inequality, and ethical issues. Darlene has a broad background spanning both technology and social change. In 2012, she founded DIYROCKETS, the first company to crowdsource space technology, and in 2011 she was an early cofounder of Matternet, one of the world's first companies using drones for commercial transport and delivery of medical goods in the developing world. Darlene served with Ashoka, the world's largest association of social entrepreneurs, for nearly ten years, where she built the organization's fundraising system (raising over $30 million per year) and led Ashoka's presence in Silicon Valley, launching major partnerships with companies such as Google, LinkedIn, and Facebook. In addition, she helped launch Ashoka's StartEmpathy initiative, which has scaled to over 30 countries, ensuring young children grow up learning empathy and changemaking as core skills for the 21st century. Prior to that, Darlene spent over a decade working in Vietnam, Myanmar, Indonesia, East Asia, and the US on educational and economic programs that empowered youth and helped bring developing nations into the global economy. She received her bachelor's degree in history from Stanford University and her master's degree in international affairs from Johns Hopkins SAIS. She was a Fellow with Japan-US Community Education and Exchange and a graduate of Singularity University. She holds a patent and regularly speaks around the world and publishes on the topics of technology, innovation, and social change.