In the Wake of the Uber Crash, How Far Do Self-Driving Cars Still Have to Go?

Experts Report: Matthew Spenko, Associate Professor of Mechanical Engineering

March 30, 2018


With regard to the recent Uber crash and fatality, and how it plays into public perception, I think it is a setback. My expertise around self-driving cars is mostly centered on a project we're doing for the National Science Foundation right now that looks to quantify one measure of safety: in particular, navigation safety.

The question is how well a mobile robot (that could be a self-driving car or any mobile robot, really) knows its position: how well it can trust its sensor information to give an idea of where it exists in space. All of these sensors (laser range finders or lidars, radars, GPS, and inertial navigation sensors) have noise on them, and all of them can experience faults.
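The idea of "trusting" each sensor according to its noise can be illustrated with a minimal sketch. This is not the project's actual method, just the textbook inverse-variance weighting used when fusing two noisy estimates of the same position; the sensor names and numbers are illustrative assumptions.

```python
# Minimal sketch: fusing two noisy 1-D position estimates by
# inverse-variance weighting. A lower-noise sensor gets more weight,
# which is one simple way a robot "trusts" its sensor information.

def fuse(x1: float, var1: float, x2: float, var2: float):
    """Combine two independent estimates (x1, var1) and (x2, var2)."""
    w1 = 1.0 / var1          # weight of sensor 1 = inverse of its variance
    w2 = 1.0 / var2          # weight of sensor 2
    x = (w1 * x1 + w2 * x2) / (w1 + w2)   # weighted average of estimates
    var = 1.0 / (w1 + w2)                 # fused variance is always smaller
    return x, var

# Illustrative numbers: GPS reports 10.0 m with variance 4.0,
# lidar reports 10.6 m with variance 1.0.
x, var = fuse(10.0, 4.0, 10.6, 1.0)
# The fused estimate leans toward the lower-noise lidar reading,
# and its variance is smaller than either sensor's alone.
```

The catch, which motivates the work described here, is that this weighting is only as good as the assumed noise model; a faulted sensor that reports a confident but wrong value corrupts the fused estimate.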

So what we do is try to quantify how well that robot knows its pose in the presence of these potential faults. This becomes really important not when you have a single robot wandering around a street, but when you have millions of these robots. Then, all of a sudden, the chance that one of these faults will cause an error or a crash becomes very real.
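The fleet-scale point can be made concrete with a little arithmetic. A minimal sketch, with an assumed (purely illustrative) per-robot fault probability: if faults are independent, the probability that at least one robot in a fleet of n experiences one is 1 - (1 - p)^n, which grows rapidly with n.

```python
# Sketch: how a rare per-robot localization fault becomes likely
# across a large fleet. The probability p is an illustrative
# assumption, not a figure from the interview.

def fleet_fault_probability(p_single: float, n_robots: int) -> float:
    """Probability that at least one of n independent robots
    experiences a fault, given per-robot fault probability p_single."""
    return 1.0 - (1.0 - p_single) ** n_robots

p = 1e-6  # assumed per-robot fault probability (one in a million)

for n in (1, 1_000, 1_000_000):
    print(f"{n:>9} robots -> P(at least one fault) = "
          f"{fleet_fault_probability(p, n):.4f}")
```

With a one-in-a-million per-robot fault rate, a single robot is almost certainly fine, but across a million robots at least one fault becomes more likely than not, which is the sense in which rare faults become "a distinct possibility" at scale.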

So from our point of view, we are very concerned with proving, with analytical mathematical guarantees, that these robots are safe, at least in terms of localization.