
AI learns moral decision making from human input

Researchers work together to establish framework for autonomous vehicle decision making


By Heather Hamilton, contributing writer

One of the most talked-about limitations of autonomous vehicles is the "if this, then that" dilemma, a variation on the "trolley problem": a thought experiment in which participants must choose between allowing a runaway trolley to kill five people or diverting it to kill one person who wouldn't otherwise die. If an AI-driven car encounters a situation in which it must hit either a group of joggers or a child, what might it decide?

Last year, MIT researchers launched the Moral Machine, a website that asked visitors to answer moral questions about just such scenarios: people were asked to decide whom the car should kill. The Moral Machine collected 18 million votes from 1.3 million people, and that data is now being explored in the context of autonomous vehicles.

Ariel Procaccia, an assistant professor of computer science at Carnegie Mellon University, partnered with MIT Moral Machine researcher Iyad Rahwan and a team of scientists from both institutions to create an artificial intelligence guided by the Moral Machine's voters.

In a paper, they describe the enormous number of pedestrian combinations that might appear in a crosswalk, which requires the AI to make an educated guess about who should be spared even when that specific combination was never voted on in the Moral Machine. Luckily, such predictive tasks are where machine learning shines.
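The idea can be illustrated, very loosely, with a preference-learning sketch: encode each group of pedestrians as a feature vector, fit a model to the pairwise votes, and use it to estimate which group voters would spare in a combination that was never polled. The categories, toy vote data, and scikit-learn model below are purely hypothetical stand-ins, not the researchers' actual system.

```python
# Hypothetical sketch of generalizing pairwise "who should be spared" votes
# to unseen combinations. Not the paper's actual model.
import numpy as np
from sklearn.linear_model import LogisticRegression

CATEGORIES = ["adult", "child", "jogger", "dog"]  # assumed feature set

def encode(group):
    """Turn a list of pedestrian labels into a count vector over CATEGORIES."""
    return np.array([group.count(c) for c in CATEGORIES], dtype=float)

# Each vote compares two groups; label 1 means voters chose to spare group_a.
votes = [
    (["child"], ["adult"], 1),
    (["adult", "adult"], ["adult"], 0),
    (["jogger", "jogger"], ["child"], 1),
    (["dog"], ["adult"], 0),
]

# Feature for a vote: the difference between the two groups' count vectors.
X = np.array([encode(a) - encode(b) for a, b, _ in votes])
y = np.array([label for _, _, label in votes])

model = LogisticRegression().fit(X, y)

# Estimate the outcome for a combination that was never voted on directly.
unseen = encode(["child", "dog"]) - encode(["adult", "jogger"])
p_spare_first = model.predict_proba(unseen.reshape(1, -1))[0, 1]
print(f"Estimated probability voters would spare the first group: {p_spare_first:.2f}")
```

The design choice here, learning from differences between feature vectors, is one simple way to let a model trained on a limited set of votes extrapolate to crosswalk scenarios it has never seen.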

Procaccia says that the system isn’t yet ready for deployment. “But it is a proof-of-concept, showing that democracy can help address the grand challenge of ethical decision making in AI.”

For many, the continued development and use of AI requires the establishment of a moral framework governing exactly how these kinds of ethical decisions are made. Duke University researchers called for just such a framework, advocating crowd-based methods like the Moral Machine as a promising avenue of research. Germany recently released ethical guidelines for self-driving cars that prioritize human lives and prohibit decisions based on factors such as disability, age, or gender. The Moral Machine, by contrast, trains the AI to make decisions based largely on these sorts of factors.

Others are more critical of crowdsourced morality, arguing that sample bias produces skewed results: the conclusions reached by a self-selecting group of people with internet access and an interest in killer AI may differ vastly from those of people who were never surveyed. Human hypocrisy may also lead to flawed decisions.

The Outline also points out that decisions that aren't matters of life and death can pose dilemmas that were never put to the Moral Machine, such as whether a person should drive more slowly on the highway to conserve fossil fuels. Those decisions have no clear answer, but other, more clearly moral ones can be guided by the voting data.

Procaccia told The Outline that if the choice is between running over one person or two, the system will obviously choose one. More ominously, it would also choose to run over a criminal rather than a person who is not a criminal, and a homeless person rather than someone who does not appear to be homeless, which ultimately reveals a great deal about the survey contributors.

Sources: Moral Machine; A Voting-Based System for Ethical Decision Making; Moral Decision Making Frameworks for Artificial Intelligence; The Outline

Image Source: Pixabay
