Uber AV accident highlights divergent expectations for AI

A fatal accident in which an Uber autonomous vehicle operating in autonomous mode struck and killed a pedestrian shows that expectations of AI are inflated

By Richard Quinnell, editor-in-chief

A few days ago, the worst fears about autonomous vehicle (AV) technology were realized. An AV that Uber was testing on the streets of Tempe, Arizona, struck and killed a pedestrian. That accident is still being investigated, so it’s too early to tell what exactly happened and why neither the AI nor the human backup safety driver prevented the collision. But one thing is clear: Expectations for what the AI can and should do are all over the map.

The premise behind AV research is that with AI guiding vehicles, accident rates will go down dramatically, saving lives, and that the technology has evolved to the point that real-life field testing is appropriate. Opponents, by contrast, are unwilling to tolerate AVs on public roads until exhaustive testing has shown beyond doubt that the AI will handle every possible situation far better than the best human driver (and even then, many probably still won't trust machines over humans). This accident appears to cast doubt on the one viewpoint while bolstering the other, and it has certainly fueled the debate.

Uber has, rightly, suspended all of its AV field testing for now. Once the investigation is complete, however, and full understanding of what happened and why is reached, what should be the next steps? Will Uber and others resume field testing as before? Will lawmakers and the public want additional restrictions and regulations to be put into place to enhance public safety and require further off-road testing before allowing further field testing on public roads? Or will the whole concept of AVs simply be shelved?

These are difficult questions to answer, and answers will depend, in part, on the investigation’s results. Was there a flaw in the AI system that caused it to fail to see a danger that humans would have seen, or was it something that not even the best driver could have prevented? If it was humanly preventable, why didn’t the human safety driver do so?

If the AI failed, then at the very least, AV field testing should be halted until AI and/or sensor improvements that handle this and similar situations have been made and thoroughly tested in conditions where the public is not at risk; only then should a return to public field testing even be considered. And if the human operator could have prevented the accident but didn't, then the whole approach to testing with human backups needs to be rethought.

But what if the investigation concludes that even a good human driver in a conventional vehicle could not have avoided the accident? How do we decide to go forward with public field testing? Is there even an acceptable level of risk that more such accidents may occur?

Uber has 24,000 of these XC90 self-driving cars from Volvo on order. Image source: Uber.

Before society can effectively address those questions, it needs to address the differences in how AVs are perceived. The first thing to realize is that the existing risk with humans driving is not insignificant. According to statistics put out by the National Safety Council, there were more than 40,000 motor vehicle deaths in the U.S. during 2017 and 4.57 million injuries serious enough for hospitalization. Those are the numbers that proponents expect AVs to dramatically reduce.

The next point to consider is that nothing made by humans (including other humans) is perfect. Those who wish to block public AV testing until it is proven to be flawless are essentially saying “never” to the whole idea of machine-piloted vehicles. Likewise, those thinking that AVs will one day eliminate all accidents are being naïve. A perfect, accident-free AV will never exist.

On the other hand, one advantage that machines have over humans is that they can all be made to behave the same way in the same circumstances. There is no ego, road rage, exhaustion, distraction, illness, indifference, haste, panic, or inexperience to contend with. Furthermore, when accidents do occur, there is an opportunity to learn from them how to improve the operation of all AVs, implement that improvement, and have a permanent boost to their performance.

A third thing to understand is that, at some point, public field testing must take place before AVs can be declared acceptable. It is simply not possible to anticipate all of the circumstances that will arise in the course of an AV’s operation. We can predict, mimic, and test for many, perhaps even most, possible scenarios, but not all. Furthermore, even for scenarios that we can predict, a test conducted under controlled conditions is not a perfect predictor of field results. The old military adage that a battle plan never survives first contact with the enemy applies just as well to system testing. The field environment is always messier than the lab’s.

So if field testing is a necessary step and, when taken, is certain to eventually result in a failure (and possibly another fatality), then how do we decide when and how to proceed from here? What is an acceptable level of risk when a random human's life may be at stake, and who makes that decision? As EE Times has asked, "Is Robocar Death the Price of Progress?" Alternatively, are we willing to abandon AVs and continue with the current levels of human-driver deaths? These are questions that we need to collectively answer.

My vote is for a multi-phase process going forward. Learn as much as possible about this accident and implement changes that reduce or eliminate the chance of a repetition. Test AVs extensively, continually learning and improving their operation. Test first in a partially constrained environment, such as an urban setting where only AVs are running and pedestrians are aware of and accept the risk of interacting with them. Demonstrate that the AVs outperform human drivers by some multiple in miles driven per accident in each scenario before expanding field testing to more open conditions.
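To make the last criterion concrete, the sketch below compares miles driven per accident for an AV fleet against a human baseline and checks whether the AVs clear a chosen safety multiple. All of the figures and the threshold are hypothetical placeholders for illustration, not real fleet or traffic statistics.

```python
# Hypothetical acceptance test for expanding AV field trials:
# require AVs to travel at least `safety_factor` times farther
# between accidents than human drivers do. All numbers here are
# illustrative placeholders, not actual data.

def miles_per_accident(miles_driven: float, accidents: int) -> float:
    """Average miles between accidents; infinite if none occurred."""
    return float("inf") if accidents == 0 else miles_driven / accidents

def av_outperforms(av_miles: float, av_accidents: int,
                   human_miles: float, human_accidents: int,
                   safety_factor: float = 10.0) -> bool:
    """True if the AV fleet's miles-per-accident rate is at least
    `safety_factor` times the human-driver baseline."""
    return (miles_per_accident(av_miles, av_accidents)
            >= safety_factor * miles_per_accident(human_miles, human_accidents))

# Hypothetical example: humans average 500,000 miles per accident;
# the AV fleet has logged 60 million miles with 10 accidents
# (6 million miles per accident, a 12x margin over the baseline).
print(av_outperforms(60e6, 10, 500_000, 1))  # True
```

The key design choice is that the bar is expressed as a multiple of the human baseline rather than an absolute number, which mirrors the argument above: the relevant question is not whether AVs are perfect, but whether they are demonstrably safer than the drivers they replace.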

Personally, I am looking forward to the day when AVs dominate, or even exclusively populate, the roadways. I have seen too many drivers who are unskilled, unaware, impaired, irresponsible, overconfident, risk-addicted, or driven by ego or anger to hold out hope that human drivers will ever substantially reduce the death rate they are causing. I have much higher hopes for the era of the autonomous vehicle.

What’s your opinion?
