The ethics of robot cars has been a hot topic recently. In particular, if a robot car encounters a situation where it is forced to hit one person or another—which should it choose and how does it make that choice? It’s a modern version of the trolley problem, which many have studied in introductory philosophy classes.
Imagine a robot car is driving along when two people run out onto the road, and the car cannot avoid hitting one or the other. Assume neither person can get away, and the car cannot detect them in advance. Various thinkers have suggested how to make an ethical decision about who the car should hit:
- The robot car could run code to make a random decision.
- The robot car could hand off control to a human passenger.
- The robot car could make a decision based on a set of values pre-programmed by the car’s designers, or a set of values chosen by the owner.
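As a toy illustration of the three proposals above (all names and weights here are invented for the example, not drawn from any real system), each strategy could be sketched as an interchangeable function:

```python
import random

# Toy sketches of the three proposals above. All names and weights
# are invented for illustration, not taken from any real system.

def choose_random(option_a, option_b):
    """Strategy 1: leave the decision to chance."""
    return random.choice([option_a, option_b])

def choose_by_handoff(option_a, option_b, ask_human):
    """Strategy 2: outsource the decision to a human passenger."""
    return ask_human(option_a, option_b)

def choose_by_values(option_a, option_b, weights):
    """Strategy 3: apply pre-programmed values. Each weight says how
    strongly a person is protected; the less-protected one is hit."""
    protection = lambda person: weights.get(person, 0)
    return min((option_a, option_b), key=protection)

# Usage: hypothetical owner-set weights that protect the child.
weights = {"child": 10, "adult": 1}
print(choose_by_values("adult", "child", weights))  # prints: adult
```

The point of the sketch is that all three strategies share one property: the hard part of the decision happens somewhere other than in the car's own reasoning.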
The last of these deserves a little more detail. What would these values be like?
They might, for example, tell the car to hit an adult over a child or a sturdier person over a smaller, more vulnerable person. The car might even try to calculate the value of one life over another—using facial recognition, it might hit the criminal who just murdered someone rather than the scientist working on a cure for cancer.
In each of these examples, the computer leaves the decision to chance, outsources it to someone else, or falls back on pre-programmed values.
Humans do the exact same things. When faced with decisions, we flip coins, ask others to decide for us, or look to various moral authorities for the right answer.
However, as humans, we also do something else when faced with hard decisions: In particularly ambiguous situations, when no choice is obviously best, we choose and justify our decision with a reason. Most of the time we are not aware of this, but it comes out when we have to make particularly hard decisions.
The truth is, the world is full of such hard decisions—determining how robot cars (or robots generally) can appropriately deal with such choices will be critical to their development and adoption.
To figure out how machines might make these hard choices, it’s a good idea to look into how humans make them. In her TED talk, “How to Make Hard Choices,” Dr. Ruth Chang argues hard decisions are defined by how alternatives relate to one another.
In easy decisions, for example, one alternative is clearly better than another. If we prefer natural colors to artificial colors, it is easy to choose to paint our room light beige over fluorescent pink. With hard decisions, however, one alternative seems better in some ways and the other better in different ways.
But neither is better overall.
We may have to choose between taking a job offer in the countryside or keeping our current job in the city. Perhaps we equally value living in the city and the challenge of the new job. So, we’re stuck because both alternatives appear equal. In this case, she argues, to make a meaningful decision we must go back and reevaluate our original values: What is actually more important to us, living in the city or our job?
Critically, she says, when we make our decision, we get to justify it with a reason.
Whether we prefer beige or fluorescent colors, the countryside or a certain set of job activities—these are not objectively measurable. There is no ranking system anywhere that says beige is better than pink and that living in the countryside is better than a certain job. If there were, all humans would be making the same decisions. Instead, we each invent reasons to make our decisions (and when societies do this together, we create our laws, social norms, and ethical systems).
But a machine could never do this…right? You’d be surprised. Google recently announced, for example, that it had built an AI that can learn and master video games. The program isn’t given commands but instead plays games again and again, learning from experience. Some have speculated that such a development would be useful for a robot car.
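The system the article alludes to (DeepMind's game-playing AI) used deep reinforcement learning; the tabular Q-learning toy below is a heavily simplified sketch of the same underlying idea, on an invented five-state task rather than a video game. The agent is given no rules about which action is good, only a reward signal, and learns a policy purely from repeated experience:

```python
import random

# Minimal tabular Q-learning on a toy task: states 0..4 on a line,
# reward 1 for reaching state 4, actions step left (-1) or right (+1).
# A much-simplified sketch of "learning from experience," not
# DeepMind's actual deep-network approach.

N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)
alpha, gamma, epsilon = 0.5, 0.9, 0.3
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for episode in range(200):
    s = 0
    while s != GOAL:
        # Explore occasionally; otherwise pick the best-known action.
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Q-learning update: nudge estimate toward reward + future value.
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The learned policy: after training, every state should step right.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)}
print(policy)
```

Nothing in the code says "go right"; that preference emerges from trial, error, and reward, which is what makes the approach interesting for decisions no one wants to hand-code.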
How might this work?
Instead of making a random decision, outsourcing the decision, or reverting to pre-programmed values, the robot car could scour the cloud, processing immense amounts of data: patterns in local laws, past legal rulings, the values of the people and society around it, and the consequences it observes over time from other, similar decisions.
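As a purely illustrative sketch of that aggregation idea (the sources, scores, and weights below are all invented), the car could combine weighted evidence from several kinds of data into a score for each alternative:

```python
# Illustrative only: combine evidence from several (invented) sources
# into a score per alternative, then pick the higher-scoring one.
# A real system would need far richer data and far more justification.

def weigh_alternatives(alternatives, sources):
    """Each source maps an alternative to a score in [0, 1] and carries
    a weight reflecting how much the car trusts that kind of evidence."""
    def total(alt):
        return sum(weight * score(alt) for score, weight in sources)
    return max(alternatives, key=total)

# Hypothetical evidence sources for a swerve-left / swerve-right choice.
legal_precedent = lambda alt: {"left": 0.8, "right": 0.3}[alt]
observed_norms  = lambda alt: {"left": 0.4, "right": 0.6}[alt]
past_outcomes   = lambda alt: {"left": 0.7, "right": 0.5}[alt]

sources = [(legal_precedent, 0.5), (observed_norms, 0.2), (past_outcomes, 0.3)]
print(weigh_alternatives(["left", "right"], sources))  # prints: left
```

Unlike the pre-programmed-values strategy, the weights here would be learned and revised from experience rather than fixed by a designer, which is the sense in which the car "invents its own reasons."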
In short, robot cars, like humans, would use experience to invent their own reasons.
Asking others to make decisions for us, or leaving life to chance, is a form of drifting. But inventing and choosing our own reasons during hard times is referred to as building one’s character, taking a stand, taking responsibility for one’s own actions, defining who one is, and becoming the author of one’s own life.
Furthermore, as humans, we put our trust in such people.
No one with common sense would entrust their life, well-being, or money to a person who makes random decisions, asks others to decide everything for them when the going gets tough, or is drifting through life.
We trust others when we have insight into their values and decision-making process and know that they will stand up for those values—and it may take the same degree of understanding for us to trust machines.
Unfortunately, the general public is far removed from understanding how artificial intelligence makes decisions. The creators of robot cars, drones and other “thinking” machines may have an incentive to keep this information private due to intellectual property concerns or security reasons. And many in the general public may find artificial intelligence inscrutable, or assume it is too difficult to even try to understand.
In some cases, we might conclude robots can make better decisions than humans. So far, robot cars have the better track record—700,000 accident-free miles as of last April (and more by now). In fast-paced scenarios, humans are not well equipped to make hard decisions, often defaulting to instinct. In other cases, however, the risk of handing the decision to a machine may prove too high.
And we will be asked to make even tougher choices.
In a world where artificial intelligence may think, but not necessarily care if it is rewarded or punished for the type of decision it makes, we will need to develop new mechanisms beyond our current punishment-based justice system to keep the peace. And if there is a large power difference between humans and artificial intelligence, how one enforces laws becomes a pressing challenge.
As we face these hard decisions, we must do more than just drift along. We must decide what is most important to us and how we want to be the authors of our own lives in a world shared with robots. The question might not be if robots can make hard decisions, but if humans can.