Autonomous vehicles: dangerous by design

The automotive industry is converging on a detection target of 95% for AVs. Traffic deaths would fall from tens of thousands a year to thousands, but is that really what we should be striving for?

By Brian Santo, contributing writer

A 49-year-old woman was just struck and killed by an autonomous vehicle, prompting questions about whether AVs are safe. The reflexive response is to note that AVs will be significantly safer than human motorists. But “Are AVs safe?” is the wrong question, and the reflexive response fails to address the more pertinent one: How safe is safe?

The U.S. has been experiencing approximately 35,000 traffic fatalities a year. These are all, by definition, accidents. Companies that put AVs on the road will have adopted safety targets as part of their engineering goals. By definition, any fatality that falls within those targets is by design.

In early March, global automotive manufacturers, international OEMs, and others met in Berlin at the Tech.AD show to discuss AV technology. Automakers and OEMs are saying that the industry’s goal should be a 95% detection rate, according to an attendee.

What 95% means
Given those fatality figures, what does a 95% safety target for AVs mean? The number of traffic deaths in the U.S. would drop from tens of thousands to thousands, perhaps even hundreds. That is unquestionably an impressive and desirable result.

Are those results as impressive if you think of the safety target the same way you think of network uptime? Would you still find it acceptable if autonomous vehicles were safe except for 18 days out of the year? That’s still 95%.
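
Here’s a back-of-the-envelope sketch of that arithmetic, assuming (as the uptime analogy implicitly does) that the safety percentage maps linearly onto both annual fatalities and calendar days. The linear mapping is a simplification for illustration, not anything the industry has published:

    # Back-of-the-envelope: what each safety level implies if it maps
    # linearly onto ~35,000 annual U.S. traffic deaths and 365 days.
    # The linear mapping is an illustrative assumption, not an industry metric.
    ANNUAL_DEATHS = 35_000
    DAYS_PER_YEAR = 365

    targets = [
        ("95% (initial target)", 95.0),
        ("two nines (99%)", 99.0),
        ("three nines (99.9%)", 99.9),
        ("four nines (99.99%)", 99.99),
        ("five nines (99.999%)", 99.999),
    ]

    for label, pct in targets:
        failure = 1 - pct / 100  # the fraction that is "dangerous by design"
        print(f"{label:22} ~{ANNUAL_DEATHS * failure:7.1f} deaths/yr, "
              f"~{DAYS_PER_YEAR * failure:5.2f} unsafe days/yr")

At 95%, that works out to roughly 1,750 deaths and just over 18 unsafe days a year; five nines gets you below a single death a year.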

Some people might prefer the target to be 99.999% safe, maybe 99.9999%, and that group includes not only me but some of the people directly involved with the development of AV technology.

It should be entirely possible to achieve five nines. The industry would have to improve detection systems, complete the installation of the 5G wireless communications networks that AV detection systems will frequently rely on, and test all of those systems separately and together.

Doing all that would take time, however, raising the question: Why delay the introduction of AVs another few years? Performance won’t stay at 95%; that’s just an initial target. Engineers will incrementally improve that figure by fixing each problem as it arises. Eventually, the safety mark will reach two nines, three nines, and even four nines.

But here’s the same question, slightly rephrased: Who wants to explain to a jury that the deaths of people in accidents involving AVs were acceptable within the 95% safety parameter widely agreed upon by automotive companies? Who wants to testify that the plan to get down to hundreds of traffic fatalities (maybe even tens!) started with a deliberate decision that roughly 2,000 deaths a year was an acceptable initial target? Any automotive company CEOs want to weigh in on that one? There’s a comments section below.

Automotive makers are forging ahead anyway. Elon Musk, the CEO of Tesla and the same guy who is warning the world that we need regulatory oversight for artificial intelligence (AI) as soon as possible, is one of the biggest instigators of the race to get more AVs on the road sooner rather than later. Teslas with Autopilot, a partially autonomous driving system, are on the road right now. The traditional automotive companies are all talking about having self-driving cars available commercially in three years or so.

Looming liability
How badly should AV manufacturers want to rush into this, though? Lawyers and insurance companies see the AV market as a bounty of liability cases just waiting to happen. The legal profession seems to be in agreement that the legal framework for bringing liability cases is in place, but that leaves some ugly questions to be answered by juries. Who is responsible for AVs? The manufacturers? The owners? The operators, if they’re not the owners? Insurance companies?

Mentioning insurance companies brings up a knotty tangential consideration. Will we foist the burden of AV safety on passengers — you and me — by making them (us) responsible for buying insurance? How is it fair to hold passengers in any way responsible for the operation of fully autonomous vehicles?

Reporting on the woman who was just hit uses words like “death” and “fatality.” Phrases that haven’t been used, however, include legal terms such as “reckless driving,” “vehicular assault,” and “vehicular manslaughter.” Some people are already calling for the backup driver to be charged with something, but that sidesteps the question of what happens in future accidents in which no human is even indirectly involved with the vehicle. Who do you charge? Worse, what exactly is the crime?

The way things are going, it looks like it’s all going to be a matter of product liability. In this context, liability hinges on product defects, and that most certainly involves planning for safety. A 95% target is 5% dangerous by design. Are there any AV system programmers happy with that level of legal exposure?

Designed-in danger
Lawyers will dig into the details. The legal profession is expecting “significant fights over the production of source code in future product liability cases over potential design defects and posed the question whether juries will be able to understand the complexities of a software design case,” according to an attendee’s overview of the Autonomous Vehicle Safety Regulation World Congress held last October.

Toss this in: Some AIs, particularly deep-learning systems, have become so sophisticated that it isn’t possible to tell how they make decisions; their behavior resides in trained parameters rather than in human-readable logic. Would relevant source code even exist? If you can’t understand what went wrong, how can you fix it? Given that, should we even allow AVs to use AI that sophisticated?

And that’s before we get to the question of how AIs ought to make decisions. Sometimes a disaster is inevitable. How should a machine be taught to decide which of multiple bad outcomes is the least desirable? That’s a programming issue, and, therefore, a human decision — which is to say, a decision with legal consequences that perhaps should be made in public before such accidents start occurring.
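
To make that concrete, here is a deliberately crude, hypothetical sketch of what “pick the least-bad outcome” looks like once it is reduced to code. Nothing here reflects any real vendor’s system; the outcome names and weights are invented, and that is precisely the point: a person has to write those numbers down, and every one of them is a value judgment.

    # Hypothetical sketch only: "least-bad outcome" selection as a cost function.
    # Every weight below is an invented human value judgment; that's the point.
    from dataclasses import dataclass

    @dataclass
    class Outcome:
        description: str
        p_pedestrian_harm: float  # estimated probability of harming a pedestrian
        p_occupant_harm: float    # estimated probability of harming an occupant
        property_damage: float    # rough estimate, in dollars

    # Somebody has to pick these weights, i.e., price harms against one another.
    W_PEDESTRIAN = 1_000_000
    W_OCCUPANT = 1_000_000
    W_PROPERTY = 1

    def cost(o: Outcome) -> float:
        return (W_PEDESTRIAN * o.p_pedestrian_harm
                + W_OCCUPANT * o.p_occupant_harm
                + W_PROPERTY * o.property_damage)

    def least_bad(options: list[Outcome]) -> Outcome:
        return min(options, key=cost)

    choice = least_bad([
        Outcome("brake hard, stay in lane", 0.30, 0.05, 2_000),
        Outcome("swerve into guardrail", 0.01, 0.20, 15_000),
    ])
    print(choice.description)  # the "least bad" option under these weights

Nudge W_OCCUPANT up relative to W_PEDESTRIAN and the car starts protecting its passengers at pedestrians’ expense. Choosing those weights is exactly the kind of decision that arguably belongs in public view before the accidents happen, not in a deposition afterward.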

Even with AV tech, human operator error is going to be a factor for a long time. There is no question that different automotive manufacturers will introduce AVs with different levels of self-driving capability — capabilities are likely to differ from model to model within any given maker. Experts are already worried about motorists being confused about what, exactly, their AVs are capable of. Who will be responsible for making sure new AV owners understand the capabilities of their vehicles? The auto industry’s main points of contact for most drivers are dealership salesmen. Salesmen sell, which means that, right out of the gate, driver safety is not their top priority. It might not even make it into the top five for some of them.

Rules and regs
Public opinion is divided on how government should function, and those disagreements have ramifications for how any business endeavor is regulated. That certainly includes AVs.

It all starts with corporate America’s growing aversion to responsibility, as evidenced by the increasingly effective steps that companies have been taking to avoid it. If truth-in-advertising laws applied to legal documents, the end-user license agreements (EULAs) that electronics companies are so fond of would be called comprehensive disavowals of liability.

The only tools available to the general public to compel companies to specific actions or behaviors are government rules and regulations. It’s an article of faith for U.S. companies that every new regulation is a fresh instantiation of evil. The communications industry, for example, insists that it conforms to the principles of network neutrality but had conniptions when the FCC actually codified those principles into legal rules. Those companies didn’t stop whining until the current FCC recently repealed those rules.

Those were rules that the industry said it supported. Imagine the fight over a rule or regulation that companies don’t like.

Consequently, government oversight of a lot of things, including AV safety, is a patchwork: effective rules in some places, rules adopted without any enforcement mechanism in others, and no rules at all elsewhere.

Which is why it was probably not a coincidence that the first AV-pedestrian fatality was in Arizona.

Many autonomous vehicle technology companies are in California, which has adopted some fairly strict rules about testing AV technology on its roads. Nearby Arizona deliberately enticed those companies to conduct their experiments there with promises that it wouldn’t enact any regulations, according to The New York Times. After the accident, Arizona officials told Reuters that they don’t intend to enact any now.

Arizona is overtly assuring companies that they will face minimal, if any, liability in case of accidents. How can that not lead to lax safety? Arizona had already seen a crash between a human-driven car and an autonomous vehicle prior to the recent death. That the human motorist was deemed at fault in that one is weak absolution.

The problem isn’t all on the automotive industry.

Take lousy drivers out from behind the wheel, and the odds are high that some of them will be lousy pedestrians, too. That in no way excuses the industry from taking its responsibilities seriously, nor should it stop the public from pestering the industry and government agencies with questions about AV safety.

Attack vector
All of that is vexing enough, but we also have to consider cybersecurity. Criminal hacking is rampant. Cyber-espionage is rampant. AVs are going to be tempting targets.

Cybersecurity experts are doing absolutely incredible work behind the scenes fending off a ceaseless onslaught of hacking attacks, but otherwise, cybersecurity is a sad joke.

Global corporations are successfully hacked with such regularity that the only possible explanation is their indifference to the problem. They are frivolous with our personal data; now we’re supposed to trust them with our lives?

Cyber-espionage has been escalating. Global security experts have been documenting how hackers — some directly associated with various governments and others tacitly encouraged by those governments — have been wreaking havoc around the world. U.S. national security agencies have testified that Russians hacked the 2016 U.S. presidential election and that those hackers are still at work. Some dismiss those claims. No matter whom you believe, the fact that the argument is ongoing means that the federal government is not fully focused on countering cyber-espionage.

If we agree that there should be any regulatory oversight of autonomous vehicles, part of that oversight should include compelling AV makers, their vendors, and the networking companies that connect those vehicles to demonstrate that they’ve adopted effective security countermeasures.
