This Is What Happens When We Debate Ethics in Front of Superintelligent AI

Is there a universal set of moral laws, and if so, can we teach artificial intelligence those laws to keep it from harming us? This is the question explored in an original short film recently released by The Guardian.

In the film, the creators of an AI with general intelligence call in a moral philosopher to help them establish a set of moral guidelines for the AI to learn and follow—which proves to be no easy task.

Complex moral dilemmas often don’t have a clear-cut answer, and humans haven’t yet been able to translate ethics into a set of unambiguous rules. It’s questionable whether such a set of rules can even exist, as ethical problems often involve weighing factors against one another and seeing the situation from different angles.

So how are we going to teach the rules of ethics to artificial intelligence, and by doing so, avoid having AI ultimately do us great harm or even destroy us? This may seem like a theme from science fiction, yet it’s become a matter of mainstream debate in recent years.

OpenAI, for example, was founded in late 2015 with a billion dollars in pledged funding to research how to build safe and beneficial AI. And earlier this year, AI experts convened in Asilomar, California, to debate best practices for building beneficial AI.

Concerns have already been voiced about AI turning out racist or sexist, reflecting human biases we never intended to pass along. But an AI can only learn from the data available to it, and in many cases that data is very human.
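To make that point concrete, here is a minimal sketch (not from the film or the article, and with entirely made-up data) of how a model inherits bias from its training set: a toy "hiring" classifier trained on historical decisions that favored one group will reproduce that preference, even though nothing in the code tells it to discriminate.

```python
# Illustrative only: synthetic data showing a model absorbing bias from
# biased historical labels. No real dataset or system is implied.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Features: a qualification score (relevant) and a group flag (should be irrelevant).
score = rng.normal(0.0, 1.0, n)
group = rng.integers(0, 2, n)  # 0 or 1, standing in for two demographic groups

# Biased historical labels: past decisions rewarded group 1 regardless of score.
hired = (score + 1.5 * group + rng.normal(0.0, 0.5, n)) > 1.0

model = LogisticRegression().fit(np.column_stack([score, group]), hired)

# Two identical candidates who differ only in group membership:
candidates = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(candidates)[:, 1])  # group 1 gets a higher hiring probability
```

The classifier is not "choosing" to be unfair; it is faithfully modeling the patterns in the data it was given, which is exactly the problem the article describes.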

As much as the engineers in the film insist ethics can be “solved” and there must be a “definitive set of moral laws,” the philosopher argues that such a set of laws is impossible, because “ethics requires interpretation.”

There’s a sense of urgency to the conversation, and with good reason—all the while, the AI is listening and adjusting its algorithm. One of the most difficult to comprehend—yet most crucial—features of computing and AI is the speed at which it’s improving, and the sense that progress will continue to accelerate. As one of the engineers in the film puts it, “The intelligence explosion will be faster than we can imagine.”

Futurists like Ray Kurzweil predict this intelligence explosion will lead to the singularity—a moment when computers, advancing their own intelligence in an accelerating cycle of improvements, far surpass all human intelligence. The questions both in the film and among leading AI experts are what that moment will look like for humanity, and what we can do to ensure artificial superintelligence benefits rather than harms us.

The engineers and philosopher in the film are mortified when the AI offers to “act just like humans have always acted.” The AI’s idea to instead learn only from history’s religious leaders is met with even more anxiety. If artificial intelligence is going to become smarter than us, we also want it to be morally better than us. Or as the philosopher in the film so concisely puts it: “We can’t rely on humanity to provide a model for humanity. That goes without saying.”

If we’re unable to teach ethics to an AI, it will end up teaching itself, and what will happen then? It just may decide we humans can’t handle the awesome power we’ve bestowed on it, and it will take off—or take over.

Image Credit: The Guardian/YouTube

Vanessa Bates Ramirez
Vanessa is senior editor of Singularity Hub. She's interested in biotechnology and genetic engineering, the nitty-gritty of the renewable energy transition, the roles technology and science play in geopolitics and international development, and countless other topics.