
Another Self-Driving Car Accident, Another AI Development Lesson

16 November 2019

This accident happened about a year and a half ago. It involved an Uber self-driving car, and it took a woman's life. It is a serious reminder to all our fellow AI/ML/DS researchers and practitioners that the work we do carries a lot of weight; sometimes, other people's lives depend on it.

[Image: Uber self-driving car accident]

A Flashback of What Happened

This terrible accident happened late at night on March 18, 2018. An Uber self-driving car, running in autonomous mode with a safety driver behind the wheel, hit and killed a woman in Tempe, Arizona. The root cause is still under investigation, but the facts that have come out already shed some light on what went wrong. From the dash-cam and internal driver-seat camera footage, the accident happened on a poorly lit road with a speed limit of 40 mph, and the safety driver was watching her cellphone right before the car hit the woman.

According to the driving records extracted by Uber, the algorithm classified the woman first as 'car', then as 'bike', and then as 'other', and this indecisiveness led to no action, which eventually led to the tragedy. Neither the Lidar nor the Radar sensor triggered a response, and the paid safety driver did not pick up the pedestrian either. Any one of these measures, had it worked, could have saved the woman's life. Although the full investigation report has not yet been released, I think we can put a bit of a Data Science spin on what went wrong and what flaws in the whole self-driving system caused this tragedy.
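To make the 'indecisiveness' failure concrete, here is a minimal, purely illustrative sketch. This is not Uber's actual code; the class, threshold, and labels are all invented. It shows how a tracking loop that wipes its history every time an object's class label changes can keep postponing the brake decision indefinitely:

```python
# Hypothetical sketch: how constant re-classification can suppress a
# braking decision. Names and thresholds are invented for illustration.

BRAKE_AFTER_N_FRAMES = 5  # frames of consistent tracking required before braking

class TrackedObject:
    def __init__(self):
        self.label = None
        self.consistent_frames = 0

    def update(self, new_label: str) -> None:
        if new_label == self.label:
            self.consistent_frames += 1
        else:
            # The flaw: changing the label resets the track history,
            # so 'car' -> 'bike' -> 'other' never accumulates evidence.
            self.label = new_label
            self.consistent_frames = 0

    def should_brake(self) -> bool:
        return self.consistent_frames >= BRAKE_AFTER_N_FRAMES

obj = TrackedObject()
for frame_label in ["car", "car", "bike", "bike", "other", "other"]:
    obj.update(frame_label)
    print(frame_label, obj.should_brake())  # never True: history keeps resetting
```

The point of the sketch is that the danger does not come from any single misclassification, but from a decision rule that treats a changed label as a brand-new object instead of the same obstacle still in the vehicle's path.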

Possible Flaws of the Self-Driving System

Before we dive deep into the potential flaws of a self-driving system, we need to acknowledge that the self-driving car is the most advanced application of AI we have, and the one closest to AGI (Artificial General Intelligence). Driving is a very complex and potentially dangerous act. The environment a self-driving car has to deal with can be very complicated and requires all kinds of situational awareness: other cars, pedestrians, bikes, traffic signals, signs, weather, road conditions, etc. It's true that we've made great progress on artificial intelligence in recent years, but are we good enough for this task? Whatever the answer, we need to do all we can and explore every possibility we are aware of, leaving no stone unturned. This leads to the first and most critical flaw of the system: our research and development teams.

Applied Artificial Intelligence Development Teams Are Sometimes Understaffed with Engineers

I'm not saying you cannot get into the AI/ML/DS field without a Ph.D. or Master's degree, but statistics show that the majority of data scientists in the industry have at least a master's degree, and a big portion of them have a Ph.D. This is as it should be: Artificial Intelligence and Data Science are not trivial fields; they require many years of training in math, computer science, and a wide variety of technologies. This has become a consensus in the industry. A consensus so strong, in fact, that people sometimes forget the other, equally important roles needed to make a real-life AI project such as a self-driving car successful. Most importantly: engineering.

If all we want is to prove the performance of an algorithm on some single-purpose task (like radiology image recognition), we don't need much engineering power; a solid Data Scientist will probably do the job very well. If we want to develop and deploy a machine learning application on the web to analyze comment sentiment, then we might want to hire more solid developers and DevOps engineers to make sure the application is well structured, carefully coded, and easily maintainable. So what if we want to build a self-driving car system that needs to run for many hours in the real world without any incident? We might want to hire car designers, regulatory specialists, car safety experts, physicists, and some top-notch data scientists to create a diversified team, so the task can be properly tackled.

In this Uber incident, one contributing factor is that neither the Lidar nor the Radar system picked up the woman crossing the street. What is the reason for that? Are there design flaws in the placement of the sensors? Are there interference issues in the environment? Does the communication channel between the sensors and the central computer work reliably? I'm not saying these are the exact causes; I'm saying these are the questions that need to be asked and addressed, and the best people to address them are engineers, not data scientists.
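As a thought experiment on the sensor-redundancy question, here is a hypothetical sketch of a fail-safe fusion rule: if any one of several independent sensors reports an obstacle in the vehicle's path, the system escalates rather than waiting for consensus. Again, the function names and thresholds are invented; a real fusion stack would be far more sophisticated.

```python
# Hypothetical fail-safe sensor fusion: escalate on ANY positive detection.
# Invented names and thresholds; illustrative only, not a real system.

from typing import Dict

def fuse_detections(detections: Dict[str, float], threshold: float = 0.5) -> str:
    """Return an action given per-sensor obstacle confidences in [0, 1].

    Fail-safe rule: a single confident sensor is enough to slow down,
    so one mis-calibrated or occluded sensor cannot veto the others.
    """
    if any(conf >= threshold for conf in detections.values()):
        return "brake"
    if any(conf >= threshold / 2 for conf in detections.values()):
        return "slow_down"
    return "continue"

# Example: Lidar is confident even though Radar and the camera are not.
print(fuse_detections({"lidar": 0.9, "radar": 0.2, "camera": 0.1}))  # -> "brake"
```

Designing, placing, calibrating, and stress-testing the sensors behind those confidence numbers is exactly the kind of work that calls for engineers rather than data scientists.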

... Read the full article here.

Final Thoughts

No matter how much progress we've made on self-driving cars, it sometimes feels like we are just scratching the surface, with no idea how big an iceberg lurks beneath the water. Also, self-driving car accidents usually get heavy media attention. According to Wired, nearly 40,000 people died in road incidents last year in the US alone, but very few (if any) made headlines the way the Uber incident did. Unfair? Not really. This is actually a good thing. We need strict and close scrutiny of how safely we can make this work, because human lives are on the line, and we cannot be too careful or too thorough.