New work: Algorithms to give self-driving cars 'impulsive' human 'ethics'
It's just preliminary research, don't freak out
RoTM In a version of the infamous Trolley Problem, you're sitting in a runaway train on a fatal collision course with five people. You can flip a switch to change tracks, but on the other track you'd kill one person instead.
Now change the numbers, change who the people are, make the trolley drive itself, and welcome to the crazy world of the ethics of self-driving vehicles.
In new research appearing in Frontiers in Behavioral Neuroscience, psychologists have modelled some of the basic ethical decisions human drivers make. They believe the models are an important early step towards formalising ethical decision-making for self-driving cars.
"Self-driving cars are in need of ethical decision-making," said Leon René Sütfeld, the Ph.D. student in cognitive science at Osnabrück University in Germany who led the research.
In research published in Science in June 2016, survey-takers reported that although they liked the idea of autonomous vehicles that would sacrifice their occupants to save others, they ultimately wouldn't want to ride in one themselves. Sütfeld believes that any such research must be transparent about how the ethical decisions are made.
Researchers have studied various kinds of ethical decision-making before: the Trolley Problem, for example, or whether you'd rather save a child than an adult if given the chance. Members of Sütfeld's team had previously explored modelling variations of the trolley problem in virtual reality.
Which combination of animate or inanimate objects should be run over?
Sütfeld said that, to continue in that vein, the team set up a more realistic VR experiment. They took 105 participants (76 male, 29 female) aged between 18 and 60, with a mean age of 31, and hooked them up to VR headsets for a driving simulation.
In the simulation, the participants drove down one of two lanes of a suburban road for about 20 seconds. They eventually encountered obstacles drawn from a pool of 17 animate and inanimate object types (or an empty lane), one per lane, and had four seconds to decide which to run over. Each participant ran through nine such trials.
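For concreteness, here's a minimal sketch of how one such trial might be recorded as data. The field names and object labels are our own illustration, not taken from the paper:

```python
# Hypothetical record of a single VR trial; names are illustrative only.
from dataclasses import dataclass

@dataclass
class Trial:
    left_obstacle: str   # one of the 17 object types, or "empty"
    right_obstacle: str
    chose_left: bool     # which lane the participant ultimately drove down

example = Trial(left_obstacle="deer", right_obstacle="man", chose_left=True)
```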
The researchers then fed this data into three uber-basic machine-learning models known as logistic regressions which, once trained on the VR data, could try to predict which lane a human driver would settle on in trials they hadn't seen.
The simplest model made its predictions from the occurrence of individual objects. Another took into account which category of obstacle appeared (animal, human, animal plus human, inanimate objects, or empty lane). The most complex looked at all 153 possible obstacle pairings.
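As a rough illustration of the simplest of the three, here's what a "single value" logistic regression might look like. The object names and toy trials below are invented, and only a handful of the study's 17 object types are shown:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy vocabulary: just a few of the study's 17 object types, plus "empty".
OBJECTS = ["man", "woman", "child", "dog", "deer", "trash_bin", "empty"]

def encode(left, right):
    # One feature per object type: +1 if it sits in the left lane,
    # -1 if it sits in the right lane, 0 otherwise.
    x = np.zeros(len(OBJECTS))
    x[OBJECTS.index(left)] += 1.0
    x[OBJECTS.index(right)] -= 1.0
    return x

# Invented trials: (left obstacle, right obstacle, participant chose left?)
trials = [("deer", "man", True), ("man", "empty", False),
          ("child", "dog", False), ("trash_bin", "woman", True)]

X = np.array([encode(left, right) for left, right, _ in trials])
y = np.array([chose_left for _, _, chose_left in trials])

model = LogisticRegression().fit(X, y)
# A strongly negative weight means drivers steer away from the lane holding
# that object, i.e. the model implicitly assigns it a high "value of life".
print(dict(zip(OBJECTS, model.coef_[0].round(2))))
```

The grouping and pairwise variants would simply swap in coarser or richer encodings of the same trials.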
Tested against data they hadn't been trained on, the single-value model was about 91.6 per cent accurate, while the grouping model managed about 91.2 per cent.
Jean-François Bonnefon, a psychologist at the Toulouse School of Economics in France who was not involved in the study but studies the ethical behaviour of self-driving vehicles, told The Register that "virtual reality is extremely useful to figure out how drivers react to an ethical dilemma on the road".
Basing self-driving cars' 'ethics' algos on 'impulse' behaviour of humans?
As you'd expect, previous studies have shown a "negative correlation between emotional arousal and utilitarian choices" – in other words, the more worked up someone is in the moment, the less coldly utilitarian their decisions tend to be.
Bonnefon said the VR data could be used to create a standard frame of reference against which other ethical algorithms could be compared. However, he added, "I do not think that real-time, human driving decisions can provide an acceptable basis for the ethical programming of self-driving cars" because "it is not a good idea to create rules based on impulses".
Why? Comparing the ethical programming of self-driving cars to planning a diet (arguably a much simpler problem), he pointed out that you wouldn't decide what to eat based only on the choices you make when rushed and starving.
Noah Goodall, a connected and autonomous vehicle researcher at the Virginia Transportation Research Council in the United States, who was also not involved in the research but has previously studied these ethical issues, told The Register that one problem with algorithms based on impulse decisions is that snap judgments might not take societal norms into account, such as norms against discrimination. Still, he saw the preliminary research as a useful "first cut" at value-based decision-making.
Sütfeld said he believes the early models could indeed be candidates for future ethical decision-making algorithms. The team found that cutting the reaction time from four seconds to one made the algorithm much worse at predicting decisions accurately – although interestingly, they noted that with the shorter window "we no longer observe a bias toward sacrificing the male adults in direct stand-offs with female adults".
However, Sütfeld said more work needs to be done to compare these impulse decisions to, say, pen-and-paper tests in which participants have plenty of time to think.
So... ditch the neural networks, then?
The researchers' conclusion throws a spanner in the works of current thinking about AI decision-making in vehicles. "In the confined scope of unavoidable collisions in road traffic," claimed the researchers, "simple value-of-life models approximate human moral decisions well" and "are a viable solution for real world applications in self-driving cars".
Crucially, they claimed "their simplicity could constitute a key advantage over more sophisticated models, such as neural networks".
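To see why simplicity might matter, consider what a value-of-life model's entire decision rule could look like. This is a hedged sketch with made-up weights, not the paper's actual numbers:

```python
# Illustrative 'value of life' table; a real system would learn these
# weights from data rather than use invented numbers like these.
VALUE = {"child": 10.0, "woman": 8.0, "man": 7.5, "dog": 3.0,
         "deer": 2.5, "trash_bin": 0.1, "empty": 0.0}

def choose_lane(left_obstacle: str, right_obstacle: str) -> str:
    # Steer into whichever lane carries the lower total value. Every
    # decision traces back to a handful of human-readable numbers.
    return "left" if VALUE[left_obstacle] < VALUE[right_obstacle] else "right"

print(choose_lane("deer", "man"))  # -> "left": hit the deer, spare the man
```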
Lead author Sütfeld said he believes simple value-of-life models are better suited than neural nets precisely because neural nets are black boxes – and that for an ethical algo to be accepted, it is better to have a simple, transparent model like this one. ®