
Autonomous vehicle adoption will likely rise or fall on safety and trust

If drivers don’t trust their vehicles, the fate of autonomous cars can go into a tailspin

By Jean-Jacques DeLisle, contributing writer

Autonomous, self-driving, and driverless cars are a hot topic for ride-hailing companies, tech giants, and investors alike. The coming vehicle revolution may hinge largely on how safe and trustworthy the public perceives fully autonomous vehicles to be, because many future applications depend on driverless cars being legal, available, and doing the lion’s share of the work. That future could arrive later than expected and involve far more politics and legislation than most emerging high-tech applications.

The core problem for many driverless-car applications is that most people are still more comfortable getting into a car with a complete stranger than with an intelligent machine designed for the task. In a Gartner survey conducted in the U.S. and Germany, 55% of respondents said that they wouldn’t even consider riding in a fully autonomous vehicle, while 71% said that they would consider riding in a self-driving car with a driver behind the wheel.


Image source: Intel’s YouTube channel.

“Fear of autonomous vehicles getting confused by unexpected situations, safety concerns around equipment and system failures, and vehicle and system security are top concerns around using fully autonomous vehicles,” said Mike Ramsey, research director at Gartner.

However, the survey did indicate that current users of on-demand mobility services such as Uber, Car2Go, ZipCar, and Lyft are more likely to ride in and purchase a semi-autonomous or fully autonomous car. “This signifies that these more evolved users of transportation methods are more open toward the concept of autonomous cars,” said Ramsey.

Tech giants such as Intel and Google, along with associated technology companies like Mobileye, are pioneering methods both to build safe technologies and to convince the public that they are truly safe.

One answer comes from Professor Amnon Shashua, Mobileye CEO and Intel senior vice president, who has proposed the Responsibility-Sensitive Safety (RSS) model. RSS defines specific, measurable parameters for what a human would consider responsible, cautious driving, and it requires a machine-driven vehicle to operate only in states from which it cannot be the cause of an accident, regardless of what other vehicles do. How that definition is applied will, of course, depend on how legislation defines fault and safety.
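To make “measurable parameters” concrete, the sketch below computes a minimum safe following distance in the spirit of RSS’s published longitudinal rule: assume the rear car may accelerate for its full response time and then brakes only gently, while the car ahead brakes as hard as possible. The parameter names and numeric values here are illustrative assumptions, not Mobileye’s actual implementation.

```python
def rss_min_following_distance(v_rear, v_front, response_time=1.0,
                               a_accel_max=3.0, a_brake_min=4.0,
                               a_brake_max=8.0):
    """Minimum safe longitudinal gap (meters), in the spirit of the RSS rule.

    Worst case modeled: the rear car accelerates at a_accel_max during its
    response time, then brakes at only a_brake_min, while the front car
    brakes at a_brake_max. Speeds in m/s, accelerations in m/s^2.
    Parameter values are illustrative, not Mobileye's.
    """
    v_after_response = v_rear + response_time * a_accel_max
    rear_travel = (v_rear * response_time
                   + 0.5 * a_accel_max * response_time ** 2
                   + v_after_response ** 2 / (2 * a_brake_min))
    front_travel = v_front ** 2 / (2 * a_brake_max)
    return max(rear_travel - front_travel, 0.0)

# Example: both cars at roughly highway speed (27 m/s, about 60 mph).
print(round(rss_min_following_distance(27.0, 27.0), 1), "m")
```

A vehicle that always keeps at least this gap can, by construction, stop without striking the car ahead in the worst case the rule models, which is the kind of auditable, rule-based guarantee the RSS approach is after.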

A substantial problem that all of these organizations, governments, and society at large must deal with is the same one that plagued Detective Del Spooner, played by Will Smith, in “I, Robot”: Who has the right to decide which party should suffer if an accident does occur? This problem, commonly known as the “trolley problem,” is a debate over how a machine, faced with a choice among victims, decides which people, parties, or property should suffer the damage.

Researchers at Carnegie Mellon and MIT have proposed an approach sometimes called robot voting: an AI system that leverages crowdsourced ethical judgments to produce decision criteria on demand. During an accident, when there is no time to poll anyone about what the outcome should be, the system would vote on behalf of prior human voters, choosing the outcome most consistent with the preferences it learned from them. “We are not saying that the system is ready for deployment,” said Ariel Procaccia, a professor at Carnegie Mellon. “But it is a proof-of-concept showing that democracy can help address the grand challenge of ethical decision-making in AI.”
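As a toy illustration of the underlying idea, and not the CMU/MIT researchers’ actual system, the sketch below aggregates crowdsourced pairwise judgments (“in this scenario, prefer outcome A over outcome B”) into a single choice using a simple head-to-head majority tally. The outcome labels and sample data are hypothetical.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical crowdsourced judgments: each tuple is (preferred, rejected).
crowd_judgments = [
    ("swerve_to_barrier", "hit_pedestrian"),
    ("swerve_to_barrier", "hit_pedestrian"),
    ("hit_parked_car", "hit_pedestrian"),
    ("swerve_to_barrier", "hit_parked_car"),
    ("swerve_to_barrier", "hit_parked_car"),
    ("hit_parked_car", "swerve_to_barrier"),
]

def choose_outcome(judgments):
    """Pick the outcome that wins the most head-to-head majority matchups."""
    outcomes = {o for pair in judgments for o in pair}
    wins = defaultdict(int)
    for a, b in combinations(outcomes, 2):
        a_votes = sum(1 for pref, rej in judgments if (pref, rej) == (a, b))
        b_votes = sum(1 for pref, rej in judgments if (pref, rej) == (b, a))
        if a_votes > b_votes:
            wins[a] += 1
        elif b_votes > a_votes:
            wins[b] += 1
    return max(outcomes, key=lambda o: wins[o])

print(choose_outcome(crowd_judgments))  # -> "swerve_to_barrier"
```

The published work goes further, learning a preference model per voter so the system can generalize to scenarios nobody was explicitly asked about, but the basic move is the same: let aggregated human judgments, rather than a single engineer, set the decision criteria.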

Another question that further complicates this debate is: What happens if these systems are tampered with or fail to work properly? It may be extremely difficult even to determine whether the systems worked as intended, since recreating the incident to check for a repeatable result may be impossible. Who would then be liable for the failure? It could be the manufacturer, the software provider, the owner, the rider, or even the government organization that required the system. We are also left wondering who gets to decide what counts as an acceptable level of risk, as it’s unlikely that any of these systems would be foolproof. Though we do not yet have to contend with a threatening AI overlord devoid of human morality, the current generation of intelligent machines already raises plenty of questions about morality and societal ethics.
