Over the holidays, I went for a drive with a Tesla. With, not in, because the car was doing the driving.
Hearing about autonomous vehicles is one thing; experiencing one is something else entirely. When the parked Model S calmly drove itself out of the garage, I stood gaping, completely mind-blown.
If this year’s Consumer Electronics Show is any indication, self-driving cars are zooming into our lives, fast and furious. Aspects of automation are already in use—Tesla’s Autopilot, for example, allows cars to handle steering, braking and lane changes. Elon Musk, CEO of Tesla, has gone so far as to pledge that by 2018, you will be able to summon your car from across the country—and it’ll drive itself to you.
Safety first
So far, the track record for autonomous vehicles has been fairly impressive. According to a report from the National Highway Traffic Safety Administration, Tesla’s crash rate dropped by about 40 percent after the company’s first-generation Autopilot system was activated. This week, with the rollout of the second-generation system to newer cars equipped with the necessary hardware, Musk is aiming to cut the number of accidents by another whopping 50 percent.
But when self-driving cars mess up, we take note. Last year, a Tesla vehicle slammed into a white truck while Autopilot was engaged—the system apparently mistook the truck for the bright, white sky—resulting in the company’s first Autopilot fatality.
So think about this: would you entrust your life to a robotic machine?
For anyone to even start contemplating “yes,” the cars have to be remarkably safe—fully competent in day-to-day driving, and able to handle any emergency traffic throws their way.
Unfortunately, those edge cases also happen to be the hardest problems to solve.
How to train a self-driving car
To interact with the world, autonomous cars are equipped with an array of sensors. Google’s button-nosed Waymo car, for example, relies on GPS to get a rough fix on its location, then fills in the details of its surroundings using its cameras, radar and laser sensors.
That data is then fed into software that figures out what actions to take next.
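In rough strokes, that software runs a continuous sense-plan-act loop. Here’s a minimal sketch in Python; every name in it is hypothetical, made up purely to illustrate the flow:

```python
# Minimal sketch of a self-driving car's sense-plan-act loop.
# Every class and function name here is hypothetical.
from dataclasses import dataclass

@dataclass
class SensorFrame:
    gps: tuple        # rough position: (latitude, longitude)
    camera: object    # raw image frames
    radar: list       # radar returns: distance and relative speed of objects
    lidar: list       # laser point cloud of the surroundings

def drive_loop(sensors, perception_model, planner, controls):
    """One pass per tick: sense, interpret, decide, act."""
    while True:
        frame = sensors.read()                   # 1. gather raw sensor data
        scene = perception_model.predict(frame)  # 2. find lanes, cars, pedestrians
        action = planner.decide(scene)           # 3. choose steering/brake/throttle
        controls.apply(action)                   # 4. act on the world
```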
As with any kind of learning, the more scenarios the software is exposed to, the better it learns to drive.
Getting that data is a two-step process: first, the car has to drive thousands of hours to record its surroundings, which serve as the raw material for building 3D maps. That’s why Google has been steadily taking its cars out on field trips—some two million miles to date—with engineers babysitting the robocars to flag interesting data and take over if needed.
This is followed by thousands of hours of “labeling”—that is, manually annotating the maps to point out roads, vehicles, pedestrians and other objects. Only then can researchers feed this so-called “labeled data” into the software for it to start learning the basics of a traffic scene.
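To make “labeling” concrete, a single labeled training example might look something like this (the schema below is hypothetical, just to show the shape of the data):

```python
# Hypothetical schema for a single labeled training example.
labeled_example = {
    "image": "frame_000123.png",  # one camera frame from a recorded drive
    "annotations": [
        # each annotation: what it is, and where it sits in the frame (pixels)
        {"label": "car",        "box": [412, 230, 520, 310]},
        {"label": "pedestrian", "box": [130, 200, 165, 290]},
        {"label": "lane_line",  "polyline": [[0, 400], [640, 380]]},
    ],
}
# Multiply by hundreds of thousands of frames, each annotated by hand,
# and you have the dataset the driving software learns from.
```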
The strategy works, but it’s agonizingly slow and tedious, and the amount of experience the cars get is limited. Since emergencies tend to be unusual and unexpected, a car may have to log millions of miles before it encounters the dangerous edge cases that truly test its software—and each encounter, of course, puts both car and human at risk.
Virtual reality for self-driving cars
An alternative, increasingly popular approach is to bring the world to the car.
Recently, Princeton researchers Ari Seff and Jianxiong Xiao realized that instead of manually collecting maps, they could tap into readily available repositories of mapping data such as Google Street View and OpenStreetMap. Although these maps are messy and in some cases contain bizarre distortions, they offer a vast amount of raw data that could be used to construct datasets for training autonomous vehicles.
Manually labeling that data is out of the question, so the team built a system that can automatically extract road features—for example, how many lanes there are, whether there’s a bike lane, what the speed limit is and whether the road is a one-way street.
Using a powerful technique called deep learning, the team trained their AI on 150,000 Street View panoramas until it could confidently discard artifacts and correctly label any given street attribute. The AI performed so well that it matched humans on a variety of labeling tasks, only far faster.
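What might such a system look like under the hood? Below is a toy sketch in PyTorch: one shared feature extractor reads a panorama, and a separate output “head” predicts each road attribute. The architecture is illustrative only, not the authors’ actual model:

```python
# Toy multi-attribute road classifier (illustrative, not the Princeton model).
import torch.nn as nn

class RoadAttributeNet(nn.Module):
    def __init__(self, max_lanes=6, num_speed_bins=10):
        super().__init__()
        # Shared convolutional feature extractor over a street-view panorama
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # One output "head" per road attribute
        self.num_lanes = nn.Linear(64, max_lanes)         # lane count, as classes
        self.bike_lane = nn.Linear(64, 2)                 # bike lane: yes / no
        self.speed_limit = nn.Linear(64, num_speed_bins)  # speed limit, binned
        self.one_way = nn.Linear(64, 2)                   # one-way street: yes / no

    def forward(self, panorama):  # panorama: (batch, 3, height, width)
        h = self.features(panorama)
        return {
            "num_lanes": self.num_lanes(h),
            "bike_lane": self.bike_lane(h),
            "speed_limit": self.speed_limit(h),
            "one_way": self.one_way(h),
        }
```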
“The automated labeling pipeline introduced here requires no human intervention, allowing it to scale with these large-scale databases and maps,” concluded the authors.
With further improvement, the system could take over the labor-intensive job of labeling data. In turn, more data means more learning for autonomous cars and potentially much faster progress.
“This would be a big win for self-driving technology,” says Dr. John Leonard, a professor specializing in mapping and automated driving at MIT.
Playing for labels
Other researchers are eschewing the real world altogether, instead turning to hyper-realistic gaming worlds such as Grand Theft Auto V.
For those not in the know, GTA V lets gamers drive around the convoluted roads of a city roughly one-fifth the size of Los Angeles. It’s an incredibly rich world—the game boasts 257 types of vehicles and 7 types of bikes, all based on real-world models. The game also simulates half a dozen kinds of weather conditions, giving players access to a huge range of driving scenarios.
It’s a total data jackpot. And researchers are noticing.
In a study published in mid-2016, Intel Labs teamed up with German engineers to explore the possibility of mining GTA V for labeled data. Given any road scene in the game, their system learned to classify the objects in it—cars, pedestrians, sidewalks and so on—generating huge amounts of labeled data that can then be used to train self-driving cars.
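The core idea is that the game engine already knows which object every pixel on screen belongs to, so labels come essentially for free. Here’s a simplified sketch; the `game` interface is hypothetical, and the actual study worked by intercepting the game’s rendering pipeline:

```python
# Simplified sketch of harvesting labeled data from a game world.
# The `game` interface is hypothetical; the actual study recovered
# labels by intercepting the game's rendering pipeline.
CLASSES = {0: "road", 1: "sidewalk", 2: "car",
           3: "pedestrian", 4: "building", 5: "sky"}

def harvest(game, num_frames):
    dataset = []
    for _ in range(num_frames):
        frame = game.render_frame()       # an ordinary RGB screenshot
        id_map = game.object_id_buffer()  # per-pixel object IDs from the renderer
        mask = [[CLASSES[obj_id] for obj_id in row] for row in id_map]
        dataset.append((frame, mask))     # image + per-pixel labels, no human needed
    return dataset
```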
Of course, datasets extracted from games may not necessarily reflect the real world. So a team from the University of Michigan trained two algorithms to detect vehicles—one using data from GTA V, the other using real-world images—and pitted them against each other.
The result? The game-trained algorithm performed just as well as the one trained on real-life images, although it needed about 100 times more training data to get there—not a problem, since generating images in a game is quick and easy.
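The experimental setup boils down to a few lines: one training recipe, two training sets, one shared real-world test set. A sketch, with `train` and `score` as hypothetical stand-ins for whatever detection pipeline is used:

```python
# Sketch of the head-to-head test. `train` and `score` stand in for
# the detection pipeline; both are hypothetical placeholders here.
def head_to_head(train, score, game_frames, real_frames, real_test_set):
    game_detector = train(game_frames)  # cheap: generate ~100x more of these
    real_detector = train(real_frames)  # expensive: photographed, hand-labeled
    return {
        "game_trained": score(game_detector, real_test_set),
        "real_trained": score(real_detector, real_test_set),
    }
```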
But it’s not just about datasets. GTA V and other hyper-realistic virtual worlds also allow engineers to test their cars in uncommon but highly dangerous scenarios the cars may one day encounter.
In virtual worlds, AIs can tackle a variety of traffic hazards—sliding on ice, hitting a wall, avoiding a deer—without worry. And if the cars learn how to deal with these edge cases in simulations, they may have a higher chance of surviving one in real life.
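In code, such a test harness can be as simple as replaying each hazard thousands of times and counting crashes. A sketch, with made-up scenario names and a made-up simulator interface:

```python
# Sketch of an edge-case test harness: replay each hazard many times
# and count crashes. Scenario names and the `sim` interface are made up.
EDGE_CASES = ["ice_patch", "deer_crossing", "sudden_wall", "tire_blowout"]

def stress_test(driving_ai, sim, trials_per_case=1000):
    crash_rates = {}
    for case in EDGE_CASES:
        crashes = 0
        for _ in range(trials_per_case):
            sim.load_scenario(case)        # reset the world to the hazard
            outcome = sim.run(driving_ai)  # let the AI drive it out
            crashes += (outcome == "crash")
        crash_rates[case] = crashes / trials_per_case  # nobody gets hurt
    return crash_rates
```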
Democratized autonomy
So far, none of the above systems have been tested on physical self-driving cars.
But with the race toward full autonomy running at breakneck speed, it’s easy to see companies adopting these systems to gain an edge.
Perhaps more significant is that these virtual worlds represent a subtle shift toward the democratization of self-driving technology. Many of these tools and datasets are openly accessible, meaning anyone can hop on board to create and test their own AI solutions for autonomous cars.
And who knows, maybe the next big step towards full autonomy won’t be made inside Tesla, Waymo, or any other tech giant.
It could come from that smart kid next door.