Elaine Herzberg's tragic death was perhaps inevitable, and it certainly won't be the last. The reality is that cars kill people, and although self-driving cars were built with the goal of reducing the number of automobile-related fatalities, the technology is (still) not perfect.
Uber’s autonomous vehicle failed to slow down and ended up killing the 49-year-old from Tempe, Arizona, as she walked her bike across the street.
Because she suddenly emerged from the shadows, it’s easy to conclude that this collision was unavoidable regardless of whether the vehicle was human-driven or autonomous. However, self-driving cars were developed to avoid exactly this kind of accident; in fact, avoiding it should have been pretty straightforward.
This makes me conclude that something wasn’t right on this occasion. It also makes me believe that autonomous vehicles will have to overcome a number of obstacles before they can be deemed safe for our public streets.
Let’s take a look at the top four challenges self-driving cars will have to overcome to become safe and ubiquitous.
1. Identify All the Bugs in the System
Regardless of how you look at it, Herzberg’s death was caused by bugs. These bugs could be anything from missing logic that should have identified the victim and avoided her, to sensor data that was erroneously discarded.
As it all runs on machine learning (ML) algorithms, a small error in the code can have devastating consequences.
While many bugs can be identified and fixed through extensive testing, testing alone won’t help the industry avoid another fatality. Crashes can also occur if the primary system fails and the backup fails along with all fail-safe protocols.
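That failure cascade can be illustrated with a minimal sketch: a chain of safety layers tried in order, where a crash is only avoided if at least one layer responds. The layer names and behaviors below are hypothetical assumptions for illustration, not any vendor's actual architecture.

```python
# Hypothetical sketch of a layered fail-safe chain in an autonomous vehicle.
# Each layer is tried in order; if every layer fails or stays silent,
# the situation goes unhandled. Names and layers are illustrative only.

def run_failsafe_chain(layers, situation):
    """Try each safety layer in order; return the first successful response."""
    for name, handler in layers:
        try:
            response = handler(situation)
            if response is not None:
                return f"{name}: {response}"
        except Exception:
            continue  # this layer crashed; fall through to the next one
    return "UNHANDLED: no layer responded"

def primary_system(situation):
    # Simulate the primary perception system dismissing the obstacle
    # as a false positive, so it never issues a command.
    return None

def backup_system(situation):
    # A redundant layer that brakes whenever any obstacle is detected.
    if situation.get("obstacle_detected"):
        return "emergency brake"
    return None

layers = [("primary", primary_system), ("backup", backup_system)]
print(run_failsafe_chain(layers, {"obstacle_detected": True}))
# If the backup also stayed silent, the chain would report the UNHANDLED case.
```

The point of the sketch is simply that safety depends on every layer being independent: if the backup shares the primary's blind spot, redundancy buys nothing.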
Another problem is that although artificial intelligence (AI) and ML have come a long way in recent years, smart cars and robots are still light-years away from thinking and perceiving the world like humans.
For example, Herzberg was moving perpendicular to the vehicle, so the sensors operating while the car traveled at high speed may not have been able to pick her up and process what they saw in time. In this scenario, a human would have easily seen her and interpreted the situation correctly, while the smart system was unable to do the same.
2. Need for Extensive Classification
AI and ML systems learn from enormous datasets of images of things like roads, lane lines, pedestrians, cyclists, and even other vehicles before they’re able to correctly identify these objects on their own.
So if something is missing from these datasets, you risk the autonomous vehicle software misinterpreting objects, like a human walking with a bicycle.
At present, we know that little things like shimmering exhaust or patches of tape on a stop sign can easily fool the system. Even a natural weather phenomenon like a little bit of fog can fool the neural network.
This makes it important to engage in more extensive classification to help autonomous systems come close to matching human capabilities.
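One defensive pattern for exactly the misclassification problem above is to treat low-confidence predictions as unknown obstacles and act cautiously, rather than committing to a best guess. The sketch below is a hypothetical illustration with made-up labels and scores, not a real perception stack.

```python
# Hypothetical sketch: when classification confidence is low, fall back to
# treating the object as an unknown obstacle instead of guessing.
# Labels and probability scores are invented for illustration.

CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff, chosen arbitrarily here

def decide(scores):
    """Pick the top label, but fall back to caution below the threshold."""
    label = max(scores, key=scores.get)
    if scores[label] < CONFIDENCE_THRESHOLD:
        return "unknown obstacle -> slow down"
    return label

# A pedestrian walking a bicycle can split probability mass across classes,
# which is one theory for why such objects confuse perception systems.
ambiguous = {"pedestrian": 0.45, "bicycle": 0.40, "vehicle": 0.15}
clear = {"pedestrian": 0.95, "bicycle": 0.03, "vehicle": 0.02}

print(decide(ambiguous))  # low confidence triggers the cautious fallback
print(decide(clear))
```

The trade-off, of course, is that a threshold set too high makes the car brake for harmless ambiguity, while one set too low reproduces the original problem.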
3. The Co-Existence of Driver-Controlled and Driverless Vehicles
At some point, we’re going to have a considerable number of self-driving cars and driver-controlled cars on the road at the same time. This can create a lot of problems for a number of different reasons.
For example, driving etiquette differs from country to country, so when you put two people from contrasting cultures on the same road, you’re bound to run into problems. The same is true when we share the road with autonomous vehicles.
This makes it important for ML training to focus on the driving cultures of individual countries, because behavior becomes unpredictable when you add humans into the mix.
However, when it comes to cultural melting pots like New York City or Chicago, it can be quite challenging to address all the different variables, and technology alone can’t achieve this. It will also be important to educate the humans driving these cars, just as the autonomous systems themselves are trained.
Humans will also be forced to adapt and develop some sort of universal language for signaling and so on. But this also raises questions about how self-driving cars will respond to human hand signals, traffic-light failures, and natural weather phenomena (like standing water or snow that can impair the sensors).
4. Security & Regulations
There will always be nefarious organizations and individuals who attempt to breach connected cars to cause as much damage as possible. This has made security a primary concern in the industry, and it will remain a priority as the industry evolves, but no solution is ever completely impenetrable.
This makes it important for the autonomous vehicle industry to ask hard questions and figure out the best way to respond to attacks. For example, during an active security breach, what protocols can we implement to effectively minimize the risk of a crash?
At the same time, the first fatality can also lead to more regulations and restrictions. The governor of Arizona, Doug Ducey, took advantage of lax regulations to attract Uber to the state, but Uber’s self-driving car tests there have now been suspended.
We will probably see the same reaction around the country and around the world as more fatalities occur, and this could potentially cripple the whole industry.
This makes it important for both autonomous vehicle manufacturers and governments to come together and explore ways that can minimize potential fatalities while helping the industry grow.
For now, self-driving cars are here to stay, even with the very real potential for more fatalities (which I think is inevitable). However, the industry is still in its infancy and can only get better with more research, testing, and technological advancement.