The US Department of Transportation (DOT) is conducting a 12-month, $25 million study to see if cars sending data to each other over Wi-Fi can make driving safer. Cars talking to each other and maybe braking or swerving to avoid collisions? Very cool. Spending $25 million on a relatively limited test? Less cool. But we’ll get to that momentarily.
The test, funded by the DOT, links 3,000 volunteer cars in Ann Arbor, Michigan, over Wi-Fi. The cars talk to each other and to infrastructure built into portions of the road. It is the biggest “road test” of vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication ever conducted. Put another way, it’s the only such large-scale test.
V2V and V2I warn drivers when another similarly equipped car drifts into their lane, runs a red light, or stops suddenly ahead of them.
The DOT believes such warnings could prevent “four out of five unimpaired vehicle crashes.” It also hopes the system could eventually provide real-time traffic monitoring, letting commuters choose the path of least resistance.
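The study hasn’t published its warning logic, but the core idea is simple enough to sketch: each equipped car broadcasts its position and speed over Wi-Fi, and nearby cars decide whether the driver needs an alert. Everything below (the message fields, the thresholds, the `should_warn` function) is an illustrative guess, not the DOT’s actual system.

```python
from dataclasses import dataclass
import math

@dataclass
class SafetyMessage:
    """Hypothetical broadcast from an equipped car (not the real DOT message format)."""
    x_m: float          # position east, meters, on a shared local grid
    y_m: float          # position north, meters
    speed_mps: float    # current speed, meters per second
    hard_braking: bool  # True if the car just braked hard

def should_warn(own: SafetyMessage, other: SafetyMessage,
                range_m: float = 100.0, ttc_threshold_s: float = 3.0) -> bool:
    """Alert the driver if a nearby equipped car is hard-braking, or if the
    closing speed puts the estimated time-to-collision under the threshold."""
    distance = math.hypot(other.x_m - own.x_m, other.y_m - own.y_m)
    if distance > range_m:
        return False                 # too far away to matter
    if other.hard_braking:
        return True                  # sudden stop nearby
    closing_speed = own.speed_mps - other.speed_mps
    if closing_speed <= 0:
        return False                 # not gaining on them
    return distance / closing_speed < ttc_threshold_s

# Example: we're doing 20 m/s and a car 45 m ahead has slammed on its brakes.
me = SafetyMessage(0.0, 0.0, 20.0, hard_braking=False)
car_ahead = SafetyMessage(0.0, 45.0, 5.0, hard_braking=True)
print(should_warn(me, car_ahead))  # True
```

A real system would also have to handle heading, lane geometry, GPS error, and message authentication; this just shows the flavor of the decision each equipped car would be making constantly.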
According to DOT Secretary Ray LaHood, “This cutting-edge technology offers real promise for improving both the safety and efficiency of our roads.”
It’s a good idea that deserves study. (Almost as good as these Dutch glow-in-the-dark smart highway designs.) But there are a few details worth a raised eyebrow.
Take, for example, the $25 million in funding. Spread across 3,000 devices, that works out to more than $8,000 per system ($25 million ÷ 3,000 ≈ $8,333). You might expect that price tag on a revolutionary new bit of tech.
Maybe a system that takes control of the car when the data warns of an impending collision. Or even something that drives the car all the time. (Which, by the way, is awesome and already happening. You can read about it here and here.)
Whatever is being installed in these cars, it’s not that. Part of the cost undoubtedly comes from hooking the equipment into each car’s systems so it can monitor them directly. Beyond that, though, V2V simply talks to other cars over Wi-Fi and warns drivers of detected hazards.
And that’s about it.
Thing is, smartphones already have most of what this requires. They have GPS and an accelerometer; they are user-programmable; and many people already carry them. Think of how broad (and cheap) a test you could run with smartphones, a clever app, and a wired connection to the car’s computer.
There’s also reason to be skeptical of what a study at this scale can show. The DOT is billing it as the “largest-ever road test of connected vehicle crash avoidance technology.”
But large is relative. 3,000 cars will be running the system in Ann Arbor. How does that compare to all the cars on the road?
Ann Arbor has a population of 114,925. On average, 43.9% of Americans own a car. That implies Ann Arbor has something like 50,452 cars. Probably not exact—and not counting out-of-towners—but close enough.
Under those conditions, the study will equip roughly six out of every hundred cars on the road. Given that the average accident rate among licensed drivers is 8%, what’s the probability that two cars equipped with these devices meet in an “almost” accident?
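A rough back-of-envelope calculation (which treats near-misses as random pairings of local cars and ignores volunteer clustering and out-of-town traffic) puts a number on it:

```python
# Back-of-envelope only: assumes near-misses pair up Ann Arbor cars at random.
population = 114_925
car_ownership_rate = 0.439    # share of Americans who own a car, per the figure above
equipped = 3_000

cars_in_town = population * car_ownership_rate   # ~50,452 cars
equipped_share = equipped / cars_in_town         # ~0.059, i.e. about 6 in 100
both_equipped = equipped_share ** 2              # ~0.0035, about 1 in 283

print(f"{cars_in_town:,.0f} cars, {equipped_share:.1%} equipped, "
      f"{both_equipped:.2%} chance both cars in a given encounter are equipped")
```

Roughly one in 283 two-car encounters would involve two instrumented vehicles, and that’s before accounting for how rare near-misses are in the first place.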
Even if it’s not zero, a handful of data points collected over a year isn’t enough to support conclusions worthy of $25 million.
And one more thing about traffic monitoring. To make that at all effective, you need way more cars in the study. But why bother when Google Maps already has it covered?
Again, this is a good idea, but one better suited to a startup or other private firm: an organization answerable to investors, passionate about building the best possible technology, and with a realistic path to bringing it to market.
And there are organizations out there doing this research already.
Google’s self-driving cars may not converse with each other, but do they need to when they’re equipped with radar? Probably not. Arguably, that’s the more elegant solution: you don’t have to compel every car on the road to adopt the technology, and you don’t have to outfit all the roads with sensors.
Cars talking to each other on a dedicated Wi-Fi network is potentially a great idea. But allocating $1 million each to 25 plausible ideas seems more productive than allocating $25 million to just one plausible idea.