Are Google’s 11 Driverless Car Accidents Scary—or Really Impressive?

If you drive enough, chances are you’ll be involved in an accident. For the best drivers, perhaps it’s more likely than not the other car’s at fault. But most of us will have occasional lapses. Backing into another car or rear-ending someone? It happens. Unless you’re a robot. Or so the argument for driverless cars goes.

Some 94% of accidents are attributed to human error. Remove the human, and drastically reduce accidents. It isn’t terribly surprising, then, that there’s been a bit of an uproar since the Associated Press reported California’s driverless car record isn’t squeaky clean. There have been several accidents of late.

Of the 48 driverless cars in California, four have been in accidents since September. How do we know? That’s when the California Department of Motor Vehicles (DMV) began issuing permits to firms testing driverless technology and requiring that they report accidents. That is, accidents are now part of the public record.

One of Google’s self-driving cars waits for a light in Mountain View, California.

Google’s driverless Lexus SUVs were involved in three of the accidents. The other accident involved one of auto parts supplier Delphi Automotive’s two driverless Audis.

Notably, the driverless vehicles didn’t cause any of them.

Of course, the DMV requirements go back only eight months, but Google’s cars have been on the road for six years. What’s their record over the rest of that period? Chris Urmson, director of Google’s self-driving car program, addressed the issue in a recent post on Medium.

“Over the 6 years since we started the project, we’ve been involved in 11 minor accidents (light damage, no injuries) during those 1.7 million miles of autonomous and manual driving with our safety drivers behind the wheel, and not once was the self-driving car the cause of the accident.”

At first blush, that’s pretty good. Google’s cars have been hit by human drivers while in driverless mode, not the other way around.

But Google hasn’t released records of each incident, and the DMV isn’t allowed to give out details. So we’ve got to take Google’s word on its cars’ culpability in accidents. (Delphi sent the AP a detailed report of its accident, in which another car broadsided its self-driving Audi while a test driver was at the wheel.)

A rational person has to be skeptical when the only source of information is the prime maker and proselytizer of the technology in question. Whether or not the info is unimpeachable, the incentive to whitewash exists at minimum, and a third-party report of what happened would strengthen the case.

Further, according to Slate, a possible reason Google’s cars haven’t caused a single accident is that the team’s test drivers are instructed to take control of the car in situations they deem dangerous or risky. This is the responsible thing to do, given that the tech is experimental and being tested on public roads.

But it’s also less surprising, then, that the driverless cars have a clean record. They often aren’t in charge in the toughest situations. (Which is part of the point of driving on public roads—to learn what those situations are, anticipate them in the car’s programming, and ultimately navigate them autonomously.)

A Google self-driving Prius navigates an obstacle course.

Google says that in such situations the team simulates how the car would have acted had the driver left it to its own devices. The simulations indicate the car would have avoided causing an accident every time. Though again, these are Google’s simulations and Google’s account.

It’s easy to get into the weeds on this issue. But there’s another side that, in the heat of the moment, is also easy to ignore or forget. Google’s cars have racked up a lot of miles, they’re still prototypes, and human drivers aren’t exactly images of perfection.

It’s a little naïve to assume no accidents would occur in over a million miles of fully autonomous driving. The 140,000 miles the fleet has driven since September is equivalent to roughly 15 years of typical human driving, which makes the fleet’s 1.7 million total miles over a century’s worth. To traverse that stretch with no accidents on a road full of human drivers? Unlikely. Indeed, Google is compiling a fair bit of data detailing just how bad most of us are at driving.
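For a rough sanity check of those equivalences, here’s a minimal sketch. The annual mileage figure is an assumption: it’s the per-driver number implied by the article’s own “140,000 miles ≈ 15 years” comparison (US estimates vary by source, from roughly 9,000 to 13,500 miles a year).

```python
# Rough sanity check of the article's driving-year equivalences.
# ASSUMPTION: ~9,300 miles per driver per year, the figure implied by
# "140,000 miles since September is equivalent to 15 years of driving."
MILES_PER_DRIVER_YEAR = 9_300

miles_since_september = 140_000
total_fleet_miles = 1_700_000

print(miles_since_september / MILES_PER_DRIVER_YEAR)  # ~15 years
print(total_fleet_miles / MILES_PER_DRIVER_YEAR)      # ~183 years: over a century
```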

“Our safety drivers routinely see people weaving in and out of their lanes,” Urmson writes. “We’ve spotted people reading books, and even one playing a trumpet.” (Somehow, sadly, this isn’t so shocking.)

Urmson’s post details a number of situations in which their car is already quite capable of avoiding accidents with silly humans. He writes that it’s quite common for their cars to encounter another car driving the wrong direction on a one-way street or to get cut off while taking a left at an intersection.

And it’s a key point worth reemphasizing.

Two cars (purple) drive the wrong way on a one-way street past a Google car (grey).

We can program driverless cars to avoid the dumbest things human drivers do. Once programmed, they won’t forget or be diverted. They won’t fiddle with the radio or their smartphone. And they won’t drive drunk—because they can’t get drunk. They’ll beat us in attention, vision, and spatial awareness every time.

It’s reasonable to worry about a machine’s glitches, but unreasonable to ignore or downplay our own.

As Urmson puts it: “With 360 degree visibility and 100% attention out in all directions at all times; our newest sensors can keep track of other vehicles, cyclists, and pedestrians out to a distance of nearly two football fields.”

Better than human? Clearly. Still, it’s also important to compare how those sensors and systems perform relative to the rest of us. Without a benchmark, how can you know how much progress you’re making?

The AP notes that the NHTSA average for “property-damage-only crashes” is about 0.3 per 100,000 miles driven, and Google’s 11 accidents in 1.7 million total miles work out to about 0.6 per 100,000. How should we interpret these numbers? For one, as Urmson argues, the real statistic is hard to pin down because so many accidents go unreported (perhaps as many as five million a year). But forget that for a moment.
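For concreteness, here’s a quick sketch of that back-of-the-envelope math, using only the figures cited above (not Google’s or NHTSA’s own methodology):

```python
# Accident rates per 100,000 miles, from the figures cited above.
google_accidents = 11
google_miles = 1_700_000
google_rate = google_accidents / (google_miles / 100_000)
print(f"Google: ~{google_rate:.2f} per 100,000 miles")  # ~0.65

nhtsa_rate = 0.3  # NHTSA average for property-damage-only crashes (per the AP)
print(f"Ratio: ~{google_rate / nhtsa_rate:.1f}x the reported human rate")  # ~2.2x
```

Keep in mind the human figure counts only reported crashes, which is why the comparison still reads as close.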

From 10,000 feet: Google’s prototype driverless cars are already pretty close to human averages.

That’s less scary, more impressive. No one is talking about replacing steering wheels, brakes, and gas pedals anytime soon. How likely is it driverless cars will improve their performance? It won’t be easy (especially in cities). But with better sensors, vehicle-to-vehicle communications, and machine learning, it isn’t unlikely.

If they’re near human averages now, it’s possible they can hit superhuman averages in the future.

Accidents were bound to happen and bound to kick up controversy. This is the beginning of the inevitable conversation about safety and trust. We should require transparency before we put ourselves and our loved ones into a driverless car. And the same holds for when those cars are being tested on public roads.

But we should think twice before we set the bar so high (in fact, much higher than we set it for human drivers) so early that we impede the technology before it’s out of the gate—especially an invention that, among other benefits, promises to ease a significant source of human injury and suffering.

Image Credit: Steve Jurvetson/Wikimedia Commons, Google

Jason Dorrier
Jason is editorial director of Singularity Hub. He researched and wrote about finance and economics before moving on to science and technology. He's curious about pretty much everything, but especially loves learning about and sharing big ideas and advances in artificial intelligence, computing, robotics, biotech, neuroscience, and space.