
Should a robo-car run over a kid or a grandad? Healthy or ill person? Let's get millions of folks to decide for AI...

Survey results: Bad news for the poor, overweight, old

The question of the infamous trolley problem for self-driving cars has finally been answered – by humans. The people have spoken. Neural networks, take note...

Imagine a robo-ride is about to crash into either a kid or a bunch of elderly people. It cannot brake in time, nor swerve out of the way. Where should it go? Who should it hit, or rather, who should it spare?

Now, imagine the same scenario but this time the choice is between humans or animals, jaywalkers or law-abiding citizens, males or females, fitties or fatties... you get the idea. How should computers deal with these split-second moral decisions?

Millions of human participants from more than 200 countries answered these hypothetical questions for an experiment dubbed the Moral Machine. It was set up by researchers from MIT and Harvard University in the US, University of British Columbia in Canada, and the Université Toulouse Capitole in France.

The aim was to find out what humans expect of their AI-powered computers of the future, perhaps as guidance for programmers and engineers when building tomorrow's neural networks.

Graphic showing which subgroups of people respondents preferred to spare, according to the Moral Machine results. Image credit: Awad et al. and Nature.

Users faced 13 scenarios, each with two possible outcomes, and were asked to click on the option they preferred. The results were published in Nature on Wednesday.
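The published analysis uses more sophisticated conjoint-analysis statistics, but the basic idea of aggregating millions of pairwise clicks into preference shares can be sketched as follows. The data and dilemma names here are entirely hypothetical, chosen only to illustrate the tallying step:

```python
from collections import defaultdict

# Hypothetical pairwise-choice records: (dilemma tested, option the respondent spared).
# The real Moral Machine dataset is far richer than this toy list.
responses = [
    ("young_vs_old", "young"),
    ("young_vs_old", "young"),
    ("young_vs_old", "old"),
    ("humans_vs_pets", "humans"),
    ("humans_vs_pets", "humans"),
]

def preference_shares(records):
    """Return, per dilemma, the fraction of respondents who spared each option."""
    counts = defaultdict(lambda: defaultdict(int))
    for dilemma, spared in records:
        counts[dilemma][spared] += 1
    shares = {}
    for dilemma, tally in counts.items():
        total = sum(tally.values())
        shares[dilemma] = {option: n / total for option, n in tally.items()}
    return shares

print(preference_shares(responses))
```

Splitting the tallies by respondents' country, as the researchers did, would simply mean keying the counts on (country, dilemma) pairs before computing the shares.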

Kill granny, especially if she's fat

It’s not too surprising that most participants favored the young over the old, groups of people over lone pedestrians and humans over animals. They were also more likely to spare rich people over poor people, females over males, and lean people over fat people.

The responses were also sorted by country to examine cultural differences. The researchers found that the likelihood of choosing the young over the elderly dropped in “collectivistic cultures,” such as China, where respect for older people carries more weight. “Individualistic cultures,” such as the US, were more likely to pick the option that saved the highest number of people.


People from poorer, less developed countries sympathized with jaywalkers more than people from richer countries, which probably have more rigid rules against illegal crossing. Nearly all countries valued females over males, apart from Latin America.

“Never in the history of humanity have we allowed a machine to autonomously decide who should live and who should die, in a fraction of a second, without real-time supervision,” the researchers said.

"We are going to cross that bridge any time now, and it will not happen in a distant theatre of military operations; it will happen in that most mundane aspect of our lives, everyday transportation."

They hope that their results will spur “global conversations to express our preferences to the companies that will design moral algorithms, and to the policymakers that will regulate them.”

The trolley problem remains a hypothetical scenario, and no self-driving cars have been tested under these conditions yet. The first pedestrian fatality involving a self-driving car occurred earlier this year, when an Uber vehicle failed to react in time to a woman pushing her bicycle across a street at night in Tempe, Arizona. ®
