Google’s Robot Car Crash Is a Very Positive Sign


Newly released reports reveal that one of Google’s Gen-2 vehicles (a Lexus) has had a fender-bender with a bus, with some responsibility assigned to the system. This is the first crash of this type; all previous impacts have been reported as fairly clearly the fault of the other driver.

This crash ties into an upcoming article I will be writing about driving in places where everybody violates the rules. I just returned from a trip to India, one of the strongest examples of this sort of road system, far more chaotic than California, and it got me thinking more about these problems.

Google is thinking about them too. It reports that it recently started experimenting with new behaviors, in this case when making a right turn on a red light off a major street where the right lane is extra wide. In that situation it has become common for cars to effectively create two lanes out of one: a straight-through group on the left, and right turners hugging the curb. The vehicle code would have there be only one lane, so the first person not turning would block everybody turning right, much to their annoyance. (In India, the lane markers are barely suggestions, and drivers, in vehicles of every width you can imagine, dynamically form their own patterns as needed.)

As such, Google wanted its car to be a good citizen and hug the right curb when making a right turn. It did, but found the way blocked by sandbags on a storm drain, so it had to “merge” back with the traffic on the left side of the lane. It did this while a bus was coming up on the left, assuming, as many drivers would, that the bus would yield and slow a bit to let it in. The bus did not, and the Google car hit it, though at very low speed. The Google car could probably have avoided this with faster reflexes and a better read of the bus’s intent, and probably will in time, but more interesting is the question of what you expect of other drivers. The law doesn’t imagine this split lane or this “merge.” And of course the law doesn’t require people to slow down to let you in.

But driving in so many cities requires constantly expecting the other guy to slow down and let you in. (In places like Indonesia, the rules actually give the right-of-way to the driver who cuts you off, because you can see him and he can’t easily see you, so it’s your job to slow down. Of course, robocars see in 360 degrees, so no car has a better view of the situation.)

While some people like to imagine that the important ethical questions for robocars revolve around choosing whom to kill in an accident, that’s actually an extremely rare event. The real ethical issues revolve around how to drive when driving involves routinely breaking the law — not once in a hundred lifetimes, but once every minute. Or once every second, as is the case in India. To solve this problem, we must come up with a resolution, and we must eventually get the law to accept it the same way it accepts it for all the humans out there, who are almost never ticketed for these infractions.

So why is this a good thing? Because Google is starting to work on problems like these, and you need to solve these problems to drive even in orderly places like California. And yes, you are going to have some mistakes and some dings along the way, and that’s a good thing, not a bad thing. Mistakes in negotiating who yields to whom are very unlikely to involve injury, as long as you don’t involve things smaller than cars (such as pedestrians). Robocars will need to not always yield in a game of chicken, or they can’t survive on the roads.

In this case, Google says it learned that big vehicles are much less likely to yield. In addition, it sounds like the vehicle’s confusion over the sandbags probably made the bus driver decide the vehicle was stuck. It’s still unclear to me why the car wasn’t able to abort its merge when it saw the bus was not going to yield, since the description has the car sideswiping the bus, not the other way around.
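A lesson like “big vehicles are less likely to yield” could, in principle, be folded into the merge decision as a size-weighted prior on the other driver’s behavior. Here is a minimal sketch of that idea in Python; the numbers, names, and heuristic are entirely made up for illustration and reflect nothing about Google’s actual system:

```python
# Hypothetical sketch: discount the estimated probability that another
# vehicle will yield based on its size, then merge only when confident.

CAR_LENGTH_M = 4.5  # assumed typical passenger-car length

def yield_probability(base_prob: float, vehicle_length_m: float) -> float:
    """Reduce the base yield estimate for larger vehicles.

    Assumed heuristic: each meter of length beyond a typical car
    cuts the estimate by 5 percentage points, floored at 10%.
    """
    discount = max(0.0, vehicle_length_m - CAR_LENGTH_M) * 0.05
    return max(0.1, base_prob - discount)

def should_merge(base_prob: float, vehicle_length_m: float,
                 threshold: float = 0.7) -> bool:
    """Commit to the merge only if the yield estimate clears a threshold."""
    return yield_probability(base_prob, vehicle_length_m) >= threshold

# With a 0.8 base estimate, a 4.5 m sedan clears the 0.7 bar,
# while a 12 m bus drops to roughly 0.43 and does not.
```

The point of the sketch is only that “less likely to yield” is something a planner can represent numerically and act on conservatively, rather than a vague lesson filed away.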

Nobody wants accidents — and some will play this accident as more than it is — but neither do we want so much caution that we never learn these lessons.

It’s also a good reminder that even Google, though it is the clear leader in the space, still has lots of work to do. A lot of people I talk to imagine that the tech problems have all been solved and all that’s left is getting legal and public acceptance. There is great progress being made, but nobody should expect these cars to be perfect today. That’s why they run with safety drivers, and did even before the law demanded it. This time the safety driver also decided the bus would yield and so let the car try its merge. But expect more of this as time goes forward.

Brad Templeton is Singularity University's Networks and Computing Chair. This article was originally published on Brad's blog.

Image Credit: Travis Wise/Flickr CC


Brad Templeton is Singularity University's Networks and Computing Chair. He is a developer of and commentator on self-driving cars, software architect, board member of the Electronic Frontier Foundation, internet entrepreneur, futurist lecturer, writer and observer of cyberspace issues, hobby photographer, and an artist.

Discussion — 4 Responses

  • bobdc10 March 1, 2016 on 11:26 am

    That the bus driver misread the Google car implies that if the bus had been driven by the same AI program as the car, no collision would have occurred. If all traffic were operated by and interconnected to the same AI program, the need for traffic signals, traffic laws, and lanes would be eliminated, and traffic would be controlled with safe, minimum-time travel as the goal, while increasing efficiency of operation by minimizing stop-and-go traffic. The savings in time, fuel burn, maintenance, and insurance would be huge.

  • rgrosssz March 1, 2016 on 2:24 pm

    One Day: #Googlecar =1 crash vs #Human driven car= 15,581 crashes- U Choose Your Driver! #AI #DeepLearning #HIMSS16

  • DSM March 1, 2016 on 3:00 pm

    “In this case, Google says it learned that big vehicles are much less likely to yield.”

    I don’t drive, but even that seems self-evident to me, and I have also noticed an attitude difference in SUV drivers.

    Does the Google Car AI have a driver-car type attitude parameter for weighting the probability of a given outcome on ambiguous or novel choices?

  • almostvoid March 2, 2016 on 1:02 am

    The problem with car drivers, and now AI pretend-cars, is that buses carry people, and though they can stop, they don’t; otherwise you’d have the hospitals full all the time. Anyway, it was up to the car waiting to change lanes ‘when it is safe to do so.’ Barging in is bad driving. [I drove buses in Sydney for 17 years]