Google Self-Driving Cars Are Learning to Navigate the Urban Jungle


[Image: Google self-driving car in the city]

Mercedes, BMW, Toyota, Nissan, Audi—major car companies say they’re working to make cars drive themselves. But all are lagging behind Google. The internet company recently said its self-driving fleet has now logged 700,000 miles on public roads.

The last update came in 2012, when the fleet hit 300,000 miles on highways. But the goal isn’t to automate just highway miles—Google wants to engineer a door-to-door robot chauffeur. Doubling the fleet’s logged mileage therefore matters less than the fact that the Google car team says it’s now laser-focused on urban driving.

Highway driving is a comparatively simple problem. Speed and spacing are more consistent, and cars are usually the only things on the road. The list of predictable urban obstacles, on the other hand, is long—stop signs, pedestrians, traffic lights, construction zones, etc.—and novel, low-probability situations are very nearly endless.

City driving means widely varying speeds, frequent lane changes, subtle shifts within the same lane, dodging erratic pedestrians and bikers, and communicating with other drivers by gesture or eye contact. Lacking a robot car that can learn and adapt to new situations, Google has to catalogue and code for as many of them as it can.

In one of the first in-depth glimpses granted to the press, Atlantic reporter Eric Jaffe described how Google is teaching its cars to drive.

They have a human observer in every vehicle. The observer flags and relays incidents to software developers, who review the car’s performance and either fine-tune the code for better behavior in situations already programmed for or write new commands for new ones. That’s a laborious process. And it may never account for every eventuality.
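
What that catalogue-and-code loop might look like in miniature: a lookup table of programmed situations with a conservative fallback that queues anything novel for human review. This is a purely hypothetical sketch—none of the names below come from Google’s actual software.

    # Hypothetical sketch of the flag-review-recode loop described above.
    # Situation names and behaviors are invented for illustration.
    RESPONSES = {
        "four_way_stop": "wait_for_turn",
        "railroad_crossing": "stop_and_scan",
        "cyclist_arm_signal": "yield_and_follow",
    }

    review_queue = []  # incidents flagged for the software developers

    def handle(situation: str) -> str:
        """Return a programmed behavior, or fall back and log the gap."""
        if situation in RESPONSES:
            return RESPONSES[situation]
        # Novel, low-probability case: behave conservatively and queue
        # it so engineers can write a new command later.
        review_queue.append(situation)
        return "slow_down_and_yield"

    print(handle("four_way_stop"))        # wait_for_turn
    print(handle("goose_crossing_road"))  # slow_down_and_yield (queued)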

“It’s the rarer and rarer situations we’re working towards,” Google car project lead, Chris Urmson, told Jaffe. “The complexity of the problem is substantially harder.”

That comment echoes something Tesla CEO Elon Musk said last year: Teslas, he thinks, can be 90% automated by 2016, but that last 10% will be a bear. Even so, mounting complexity and all, Urmson says that over the years the team has come to believe its goal of a completely self-driving car is possible.

But when might such a car be available to ordinary folk? Probably not for a while yet.

A fully formed system will take more experimental work. Further, the Google car relies on detailed maps—maps that don’t yet exist for any city outside Mountain View. Rules and regulations will have to catch up too. And then there’s cost. The Google car’s self-driving hardware costs six figures, including $70,000 for the ungainly rotating lidar atop the cars.

[Image: Google’s self-driving Lexus]

Such obstacles may push self-driving cars down the road, but none are deal breakers. Expect rules and regulations (already underway in several states) to allow robot cars within the next decade.

Meanwhile, Google has shown itself capable of vast map-data projects (Google Maps). And emerging technology is usually expensive (and bulky) in the research phase (think early mobile phones). First comes functionality; then the focus shifts to slimming the hardware down and cutting costs.

When commercial self-driving technology matures, some believe it will turn an unprepared auto industry on its head.

In a recent blog post, Singularity University’s Brad Templeton (also a consultant to the Google car team) said, “The past history of high-tech disruptions shows that very few of the incumbent leaders make it through to the other side. If I were one of the car makers who doesn’t even have a serious project on this, I would be very afraid right now.”

Templeton further notes that though Mercedes is furthest along of the major carmakers, its systems are still less capable than the Google car was back in 2010.

But why bother with robot cars at all? In part, it’s a matter of productivity. A self-driving car may give you back the average 4% of your life spent commuting. You’ll be free to do other things on the road. Talk, text, or video. Go for it.

But the most often cited benefit is safety. Urmson clearly has those numbers on autodial. Every year, 33,000 people die on US roads, he told Jaffe. Automobile accidents are the leading cause of death for people ages 4 to 34. And the kicker: at least 90% of all accidents are due to human error. Given the ability, why wouldn’t we automate?
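
Taken at face value, those figures make for quick back-of-envelope arithmetic—a rough sketch, not a forecast, and the 16-hour waking day below is an assumption, not a statistic from the article:

    # Back-of-envelope arithmetic from the figures quoted above.
    annual_us_road_deaths = 33_000
    human_error_share = 0.90  # "at least 90% of all accidents"

    # Rough upper bound on deaths involving human error each year.
    print(f"~{annual_us_road_deaths * human_error_share:,.0f} deaths/year")

    # Time reclaimed: "4% of your life spent commuting."
    waking_hours_per_year = 16 * 365  # assumes a 16-hour waking day
    print(f"~{0.04 * waking_hours_per_year:.0f} waking hours/year commuting")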

Image Credit: Google, Steve Jurvetson

Discussion — 29 Responses

  • seemsArtless May 13, 2014 on 9:20 am

    Are there any companies focusing on self-driving tractor-trailers when they are on limited access highways? Seems like a reasonable first step, with obvious safety benefits, and likely fuel efficiency benefits – maybe the truck could even slow down 5 or 10 km/hr to save fuel if it knows it will just have to wait for a loading spot at its destination.
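
    (For what it’s worth, the fuel case is easy to rough out. A minimal sketch, assuming fuel burn per kilometer at highway speed scales roughly with the square of speed—a drag-dominated approximation, not measured truck data:)

        # Rough sketch: if the truck must wait for a dock slot anyway,
        # slowing down costs nothing and saves drag-dominated fuel.
        # Assumes fuel per km ~ v^2 at highway speed (approximation).
        def relative_fuel_per_km(v_kmh, v_ref=100.0):
            return (v_kmh / v_ref) ** 2

        trip_km = 200
        for v in (100, 95, 90):
            hours = trip_km / v
            fuel = relative_fuel_per_km(v)
            print(f"{v} km/h: {hours:.2f} h en route, fuel x{fuel:.2f}")
        # 100 -> 90 km/h adds ~13 minutes but cuts drag fuel ~19%;
        # a wash if the truck would idle longer than that at the dock.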

  • s1974 May 13, 2014 on 11:59 am

    While I believe we have technology capable of handling a self-driving car, I do not believe programmers have the professionalism required to design the software. For far too long, software programmers have had the luxury of hiding inside a bubble that insulates them from taking responsibility for their mistakes, and aside from public school teachers I can’t think of any other profession where this is the norm. When most software fails you can either find a way of working around the failure or correct it afterwards, but there is no way to undo a traffic violation, getting stuck in a ditch, or a death.

    Programmers have been hiding behind EULAs, non-existent customer support numbers, and ignored tech support emails for decades. I still remember the cheers from the audience when Bill Gates showed off Windows 98 by connecting a scanner and it crashed: even people in the industry inherently know how little this field cares about reliability! If a self-driving car needs to drive on Rt. 202 and a bug causes it to drive at 202 mph, no judge in the world is going to accept the “blame the user” defense programmers arrogantly throw about today, nor will the legal system tolerate the industry’s circle-jerk of responsibility, where Google will inevitably blame the car manufacturer and the car manufacturer will inevitably blame Google. The current status quo simply will not work, and the companies behind the tech will be stunned when they are faced with that first stiff legal settlement.

    With a self-driving car, software developers WILL finally find themselves financially responsible for their failure to write solid, reliable, clean code. Will they be able to rise to this challenge? I don’t think so. While I see the tech being possible in proof-of-concept situations, I do not foresee a time when programmers will be ready to write the code. It’s a shame, because this would be a really amazing feat, but I certainly wouldn’t want to drive on a road with self-driving cars knowing how bad programmers are at writing reliable code.

    • Homer s1974 May 13, 2014 on 1:50 pm

      I would agree with you if the only examples of software were PC operating systems and word processors. But there are existing industries where programmers are held to high, rigid standards. In the field of radiation oncology, for example, the treatment-planning software must be FDA approved. It has to be both user friendly (since the users are non-techie clinicians) and as close to error free as possible. Otherwise, patients will receive incorrect radiation doses. Rad onc clinics have expensive maintenance contracts with the software vendors, who pay very close attention to support and patches. Any bugs must be investigated and eliminated ASAP. Blaming the user is simply not an option (unless the software truly is being misused).

      Presumably, self-driving cars will also be heavily regulated. And if the medical industry is any guide, this will elevate the quality of their software and tech support.

      • Andrew Atkin Homer May 13, 2014 on 3:18 pm

        @ s1974 + Homer: Good comments.

        The advantage of driverless cars is that they can operate as a network-based system. We need the technology to be good enough that a car can empty-send itself to the next customer—so we only use the car we need, for the trip at hand.

        Also, a small commuter car (like the iRoad) could be a good first-step contender because it’s light and small—it shouldn’t take too long for a car like that to empty-send to someone with acceptable safety, and that’s all you need to start a revolution. Also think of micro-cars delivering any odd item to your home. That functionality alone will turn the world of retail and sales on its head.

        • izumi3682 Andrew Atkin May 13, 2014 on 4:27 pm

          The idea of having an autonomous vehicle pick you up, deliver you to your destination, pick you up again and return you home sounds good in theory. But. I don’t want to get into an autonomous vehicle that is slathered, smeared, soaked, spattered, spotted or otherwise covered in someone ELSE’S urine, vomit, blood, fecal matter, sweat, sexual activity related horrors, funk, lack-of-hygiene odor, BO (oh, they’re two different things, trust me…), perfume, cologne or miasma. That’s mainly, I think, why we have our own POVs. (That’s where I deposit all MY nastiness… ;)

          • Andrew Atkin izumi3682 May 14, 2014 on 3:29 pm

            @Izumi: You have an account with the system. You log on biometrically. Internal cameras take before-and-after shots of the vehicle (from your usage). If it’s in bad order you can refuse to get in and order up another car, and the vehicle you dismissed will be investigated. The prior user may be held accountable.

            Accountability is there. The cars will be respected because people will not want to be disqualified from the system. Problem solved.
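
            (Sketched as a data model, the loop is simple—the field names and helpers below are invented for illustration, not any real system’s API:)

                from dataclasses import dataclass

                @dataclass
                class TripRecord:
                    user_id: str       # set at biometric login
                    photo_before: str  # cabin shot at pickup
                    photo_after: str   # cabin shot at drop-off
                    disputed: bool = False

                def looks_messy(photo: str) -> bool:
                    return "mess" in photo  # stub for a real image check

                def inspect(history, pickup_photo):
                    """Next rider checks the car; refusal flags the prior trip."""
                    if looks_messy(pickup_photo):
                        if history:
                            history[-1].disputed = True  # investigate prior user
                        return False  # rider orders another car
                    return True

                history = [TripRecord("rider42", "clean.jpg", "mess.jpg")]
                print(inspect(history, "mess.jpg"))  # False; prior trip disputed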

        • s1974 Andrew Atkin May 13, 2014 on 10:30 pm

          @Andrew: I agree that the potential for these automated vehicles could be staggering, especially in your example, because the human beings required to operate shipping vehicles are probably the biggest expense of shipping. (But take *my* truck? Out of my cold dead…uh…driveway.) It would also be a massive help to the handicapped, who don’t always have access to a driver or want to remain independent.

          Unfortunately, with my experience of the reliability of software, it’s going to take a massive campaign on the part of these large software companies to prove that they have the professionalism to create a car that can even drive as reliably as one operated by an SMS-addicted 17-year-old girl. These fears are quite legitimate.

          Even with massive government testing, I think these existing companies have internal corporate cultures that do not embrace what is needed (maybe the environment is different in companies that design code for medical devices). Take Microsoft as an example: it released the Xbox 360 in 2005 with an enormous design flaw, but instead of fixing the problem it was content to keep replacing devices through the warranty program. While this was a hardware design flaw rather than software, it still proved that maintaining an ingrained culture of indifference was worth more to the company than the billions (from what I read) wasted on warranty replacements.

          Unlike a poorly designed video game console, or the iPhone iMessage system that hijacks your messages after you switch to an Android, self-driving cars with shoddy software are likely to injure or kill people—even people who choose not to own one.

      • s1974 Homer May 13, 2014 on 10:07 pm

        I’ve been thinking about your example, and it may be the exception that proves the rule, but I can’t help wondering how many errors in your example could go unnoticed. I’ve known people who’ve had radiation therapy, and it was a long series of treatments—so if a series had a single failure, would anyone notice, so long as there wasn’t an obvious misplaced burn? There are no guarantees with radiation therapy, so it could simply be dismissed as a failure of the treatment itself.

        Your example did bring up a situation I experienced in 2003 with a highly automated procedure intended to be performed in a single treatment: blade-free LASIK. My biggest concern was that the machines would fail to do what the doctor told them to do, not whether the treatment would actually work! Although I had to have the right eye redone later that year, I believe that was a result of the treatment not working, not of the machines failing. My experience with Windows had me worried that the machine would bluescreen halfway through treating my eye.

        I definitely agree that the user can often be blatantly to blame, and maybe that level of testing would result in safe cars, but I just fear the thought of Google releasing a software update that ends up being incompatible with a certain onboard computer, the same way Apple will release an update to iOS and suddenly people can’t connect their iPhones to their car radios anymore.

        • Andrew Atkin s1974 May 14, 2014 on 3:31 pm

          If you start off with micro delivery cars about the size of a vacuum cleaner, then they can essentially afford to crash in the worst-case scenario. Masses of them would allow for rapid, intensive software development and testing in any circumstance.

          • Quantium Andrew Atkin May 14, 2014 on 11:53 pm

            What about the accidents these micro-delivery cars may cause if their software isn’t up to the job? Would insurers cover the companies operating them?

          • s1974 Andrew Atkin May 16, 2014 on 11:46 am

            LOL. I’m imagining people stopping on the highway to get out and carry one of these vacuum-cleaner-sized cars to the other side of the road as if it were a turtle.

    • James Scott Tayler s1974 May 14, 2014 on 2:37 am

      What are you even talking about? In terms of commercially available software, yes, you are quite right, but when it comes to mission-critical systems there are standards and quality controls that are adhered to. Modern cars have up to 70 on-board computers and are more code than they are car—millions of lines of it. Think about the amount of software it takes to run an aeroplane, for instance. So programmers are already tasked with writing code that must perform to an expected level, and they seem to be doing fine. For sure, mistakes have been made in the past (the Challenger space shuttle), and those mistakes have been learned from and processes improved upon. You would be surprised at the amount of software you trust your life to every single day.

      • s1974 James Scott Tayler May 16, 2014 on 11:56 am

        What I’m talking about is decades of exposure to poorly written or poorly thought-out software that is inadequately supported by the parties responsible for it. I am forced to come to this conclusion because this is how the software industry overwhelmingly behaves. As for the software required to operate an airplane, keep in mind that while the machine is more complex, the process of getting from point A to point B is not, because driving requires constant vigilance—other drivers, road hazards, traffic patterns, and so on cause issues that demand split-second decisions. A cruising plane may occupy a theoretically more complex 3D environment, but it is not going to find a deer running across its path the way a car cruising on the interstate might. I acknowledge that there are a number of forms of software I have to depend on in a given day, but it is highly naive to suggest that the software industry does not have a problem with professionalism and standing behind its work.

  • Quantium May 13, 2014 on 2:20 pm

    At present, minor human driving errors are dealt with by a policeman pulling the driver over and issuing a fixed-penalty fine.

    An alternative may be in-car software similar to that being used for the self-driving car, designed to provide a running tutorial on driving, like an advanced motoring instructor. Such a project could meet with massive disapproval from the public—but so did seat belts and laws about drivers’ alcohol consumption, and now most people regard those as sensible.

    If software could turn every driver into an advanced motorist, lives would be saved here too. It would also provide a massive test bed for the fully automated car of the future.
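
    (One way to picture the tutorial idea: compare what the driver did with what the self-driving stack would have done, and coach on the difference. A hypothetical sketch—the fault names and advice below are invented:)

        # Hypothetical "advanced motoring instructor" sketch: map
        # detected driving faults to running tutorial prompts.
        ADVICE = {
            "following_too_close": "Leave at least a two-second gap.",
            "late_braking": "Brake earlier and more gently.",
            "no_mirror_check": "Check your mirrors before changing lanes.",
        }

        def coach(observed_faults):
            """Turn detected faults into instructor-style prompts."""
            return [ADVICE.get(f, f"Noted for review: {f}")
                    for f in observed_faults]

        for tip in coach(["late_braking", "no_mirror_check"]):
            print(tip)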

    • s1974 Quantium May 16, 2014 on 12:02 pm

      The problem will be situations where the car ignores the traffic laws, a penalty gets imposed (the process is irrelevant), and the owner of the car wants to be reimbursed for the car’s failure to abide by the law. Knowing the industry as it exists now, if the owner can get past the tech “circle jerk” (i.e., the car manufacturer will blame Google, and Google will blame the manufacturer), the result will be that the EULA exempts Google from responsibility because it requires the owner to oversee the operation of the vehicle at all times. And if the owner of one of these things must oversee every moment of the drive, then what’s the point of a self-driving car?
      Sometimes it is pretty easy to see the future.

      • Andrew Atkin s1974 May 16, 2014 on 12:30 pm

        Private contracts will be worked out between service providers, manufacturers, and consumers. These issues are cosmetic. So is the software issue—it will not hold anything back for any substantial length of time.
