Ray Kurzweil Explores How Self-Driving Cars Will Choose Between Life or Death

Driving a motor vehicle requires making tough choices in the heat of the moment. Whether slamming on the brakes in traffic or speeding up before a light turns red, split-second decisions are often a choice between the lesser of two evils. Sometimes, a choice could lead to bodily injury or even a loss of life.

As more self-driving cars reach the road, life-and-death decisions once made by humans alone will increasingly shift to machines. Yet the idea of giving that responsibility over to a computer may be unsettling to some.

Self-driving cars have the potential to significantly reduce the tens of thousands of auto fatalities that occur each year, but a reduction isn’t the same as elimination. Some deaths will inevitably happen at the hands of computer algorithms once those decisions are made for us.

So how will engineers program self-driving vehicles to respond in such situations? Will a “morality” of sorts become a necessity for any AI-empowered machine?

On this subject, Ray Kurzweil says the moral discussion is subtle but necessary and deserves careful consideration. The same holds for any technology that leverages AI and raises the ethics of removing humans from the loop.

For years, Ray Kurzweil has been giving fireside chats at Singularity University. Now, some of his best questions and answers will be released every Thursday on Singularity University’s Ray K Q&A YouTube channel. Check back each week for the latest video.

Image Credit: Shutterstock