A Tesla blog post describes the first fatality involving a self-driving system. A Tesla was driving on autopilot down a divided highway. A truck making a left turn off the highway crossed into the Tesla’s lane, perpendicular to the car. A white truck body against a bright sky is not something the Tesla’s camera system perceives well. The truck’s clearance was high, so when the Tesla did not stop, it went under the truck, and the windshield was the first part of the Tesla to hit the truck body, with fatal consequences for the “driver.”
Tesla points out that the autopilot system has driven 130 million miles, while human drivers in the US have a fatality about every 94 million miles (though the interval is longer on highways). The Tesla is a “supervised” system in which the driver is required to agree that they are monitoring the system and will take control in the event of any problem, but this driver, a major Tesla fan named Joshua Brown, did not hit the brakes. As such, the fault for this accident may reside with Brown, although the investigation is just beginning. (Tesla also notes that had the front of the car hit the truck, the crumple zones and other safety systems would probably have saved the driver; hitting a high target is the worst-case situation.)
Our condolences for the tragic loss https://t.co/zI2100zEGL
— Elon Musk (@elonmusk) June 30, 2016
It is worth noting that Brown is, in fact, the person in a video that circulated in April, in which he claimed that autopilot saved him from a collision with a smaller truck that cut into his lane.
Any commentary here is preliminary until more facts are established, but here are my initial impressions:
- There has been much speculation about whether Tesla took too much risk by releasing autopilot so early, and that speculation will only intensify after this.
- In particular, a core issue is that the autopilot works too well. I have seen reports of many Tesla drivers trusting it far more than they should. The autopilot is fine if used as Tesla directs, but the better it gets, the more it encourages people to over-trust it.
- Both Tesla stock and MobilEye stock were up yesterday, with a bit of a downturn after-hours, so the market may not have absorbed this yet. MobilEye makes the vision sensor the Tesla uses to power the autopilot, and the failure to detect the truck in this situation is a not-unexpected result for that sensor.
- For years, I have frequently heard it said that “the first fatality with this technology will end it all, or set the industry back many years.” My estimation is that this will not happen.
- The truck was making a left turn, which is a not-unexpected situation, though a truck that turns in front of oncoming traffic is at fault.
- One report suggests that “friends” claim the driver often used his laptop while driving, and the truck driver claims he heard the car playing a Harry Potter movie after it crashed.
- Tesla’s claim of 130M miles is a bit misleading, because most of those miles were actually supervised by humans. That’s like reporting the record of student drivers who always have a driving instructor ready to take over. And indeed there are reports of many, many people taking over for the Tesla autopilot, as Tesla says they should. So at best Tesla can claim that the supervised autopilot has a record similar to that of human drivers, i.e., that it is no better than humans on their own. Though one incident does not a driving record make.
- Whatever we judge about this incident, letting ordinary users test systems, provided they are well informed and understand what they are doing, is valuable: it will advance the field and give us better and safer cars faster. Just how to do this may require more discussion, but the idea itself is worthwhile.
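The mileage comparison in the points above is simple arithmetic; here is a minimal sketch of it, using only the figures quoted in the post (roughly 130M autopilot miles with one fatality, and one human-driver fatality per roughly 94M miles):

```python
# Figures quoted in the post: ~130M Autopilot miles with 1 fatality,
# vs. ~1 human-driver fatality per 94M miles in the US.
autopilot_miles = 130e6
autopilot_fatalities = 1
human_miles_per_fatality = 94e6

autopilot_rate = autopilot_fatalities / autopilot_miles  # fatalities per mile
human_rate = 1 / human_miles_per_fatality

# A ratio above 1 would make Autopilot's rate look better than the human
# rate, but with a single incident the estimate tells us almost nothing.
print(f"human rate / autopilot rate = {human_rate / autopilot_rate:.2f}")
```

The ratio comes out around 1.4, but with a sample of one fatality the uncertainty swamps the comparison, which is the quantitative version of "one incident does not a driving record make."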
Cameras vs. LIDAR and Maps
I have often written about the big question of cameras vs. LIDAR. Elon Musk is famously on record as being against LIDAR, while almost all robocar projects in the world rely on it. Current LIDARs are too expensive for production automobiles, but many companies, including Quanergy (where I am an advisor), are promising very low-cost LIDAR systems for future generations of vehicles.
This is a clear situation where LIDAR would have detected the truck. A white truck against the sky is no issue at all for LIDAR; it would see it very well. In fact, a big white target like that would be detected beyond the normal range of a typical LIDAR. That range is an issue here: most LIDARs only detect other cars about 100m out, though a big white truck would be detected a fair bit further. Either way, that is not quite far enough to stop in time for an obstacle like this at highway speeds. However, the car would brake to make the impact vastly less severe, and a clever car might even have had time to swerve, or to deliberately hit the wheels of the truck rather than slide under the body.
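To see why 100m of detection range is marginal at highway speed, here is a back-of-the-envelope sketch. The braking deceleration (0.6 g), reaction latency (1.5 s), and speeds are my illustrative assumptions, not figures from Tesla or any LIDAR vendor:

```python
import math

# Illustrative stopping-distance sketch (assumed numbers, not Tesla's):
# obstacle detected at 100 m, 1.5 s detect-to-brake latency, 0.6 g braking.
G = 9.81
DECEL = 0.6 * G          # m/s^2, assumed hard-braking deceleration
REACTION_S = 1.5         # s, assumed detection-to-braking latency
DETECT_RANGE = 100.0     # m, typical LIDAR car-detection range quoted above
MPH_TO_MS = 0.44704

def impact_speed(v0):
    """Speed (m/s) remaining on reaching the obstacle; 0 if we stop first."""
    braking_dist = DETECT_RANGE - v0 * REACTION_S  # distance left to brake in
    if braking_dist <= 0:
        return v0
    v2 = v0 * v0 - 2 * DECEL * braking_dist  # v^2 = v0^2 - 2*a*d
    return math.sqrt(v2) if v2 > 0 else 0.0

for mph in (55, 65, 75):
    v0 = mph * MPH_TO_MS
    print(f"{mph} mph: impact at {impact_speed(v0) / MPH_TO_MS:.0f} mph")
```

With these assumptions the car stops fully from 55 mph, but still hits at roughly 30 mph from 65 and roughly 50 mph from 75: it cannot avoid the crash, but the impact is vastly reduced, as the paragraph above describes.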
Another sensor that is problematic here is radar. Radar would have seen this truck without trouble, but since the truck was perpendicular to the car’s direction of travel, it was not moving toward or away from the car, and thus had the Doppler speed signature of a stopped object. Radar is great because it tracks the speed of obstacles, but because there are so many stationary objects, most radars must simply disregard such returns; they cannot tell a stalled vehicle from a sign, bridge or berm. A map of where all the radar reflectors are located can help with that: if you get a sudden bright radar return from a truck or car somewhere the map says no big object is present, that is an immediate sign of trouble. (At the same time, it means you do not easily detect a stalled vehicle next to a bridge or sign.)
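The Doppler point can be illustrated in a few lines: radar measures only the radial (along-the-beam) component of a target's velocity, so a truck crossing perpendicular to the beam shows roughly zero radial speed, just like a sign or bridge. A small sketch with hypothetical numbers:

```python
import math

def radial_speed(vx, vy, bearing_deg):
    """Component of target velocity along the radar beam, in m/s.

    bearing_deg is the direction from the radar to the target
    (0 = straight ahead, along the x-axis); positive result = receding.
    """
    b = math.radians(bearing_deg)
    return vx * math.cos(b) + vy * math.sin(b)

# Radar beam pointing straight ahead (bearing 0).
print(radial_speed(0.0, 10.0, 0.0))   # truck crossing at 10 m/s -> 0.0
print(radial_speed(0.0, 0.0, 0.0))    # stationary sign          -> 0.0
print(radial_speed(10.0, 0.0, 0.0))   # car pulling away 10 m/s  -> 10.0
```

The crossing truck and the stationary sign produce the same zero Doppler reading, which is why a speed-filtering radar lumps both into the "ignore" bucket.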
One solution is longer-range LIDAR or higher-resolution radar. Google has said it has developed longer-range LIDAR. In this case, even regular-range LIDAR, or radar combined with a good map, might well have noticed the truck.
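The radar-plus-map idea above can be sketched as a lookup: keep the mapped positions of known static reflectors, and flag any bright, stationary-looking return that the map cannot explain. All names and numbers here are hypothetical, just to show the shape of the check:

```python
# Sketch: flag strong radar returns that a prior map says shouldn't exist.
# The "map" is a hypothetical set of grid cells containing known static
# reflectors (signs, bridges, berms) at an assumed 5 m cell resolution.
CELL = 5.0  # metres per grid cell (assumed resolution)

known_reflectors = {(20, 0), (40, 1), (60, -1)}  # cells with mapped clutter

def cell(x, y):
    return (round(x / CELL), round(y / CELL))

def is_anomalous(x, y):
    """True if a stationary-looking bright return is not explained by the map."""
    cx, cy = cell(x, y)
    # Check the cell and its 8 neighbours to tolerate small position error.
    return not any((cx + dx, cy + dy) in known_reflectors
                   for dx in (-1, 0, 1) for dy in (-1, 0, 1))

print(is_anomalous(150.0, 0.0))  # True: big return where the map is empty
print(is_anomalous(102.0, 3.0))  # False: matches a mapped sign nearby
```

As the previous paragraph notes, the same logic cuts both ways: a stalled vehicle sitting next to a mapped sign or bridge lands in an occupied cell and is not flagged.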
This article was first published on Brad’s blog. Go here to read the original article.
Image credit: Tesla