Maybe there is hope for 2020: AI that 'predicts criminality' from faces with '80% accuracy, no bias' gets in the sea

Springer ditches paper from research tome, boffins rail against junk science

Updated Springer Nature has decided against publishing a paper describing a neural network supposedly capable of detecting criminals from their faces alone. Word of this decision comes as top boffins signed a letter branding the study harmful junk science.

The missive, backed this week by 1,168 researchers, students, and engineers, and addressed to the academic publisher's editorial committee, listed numerous studies rubbishing the suggestion that criminality can be predicted by algorithms from something as trivial as your face.

“Such claims are based on unsound scientific premises, research, and methods, which numerous studies spanning our respective disciplines have debunked over the years,” the letter read. “Nevertheless, these discredited claims continue to resurface, often under the veneer of new and purportedly neutral statistical methods such as machine learning, the primary method of the publication in question.”

The neural net in question – developed by a PhD student and two assistant professors at Harrisburg University in Pennsylvania – was said to be “80 percent [accurate] and with no racial bias.”

The experts argued in the letter, however, that such a feat is impossible. “Let’s be clear: there is no way to develop a system that can predict or identify 'criminality' that is not racially biased — because the category of 'criminality' itself is racially biased,” they said. By that they mean, as discussed in footnote 12 of their letter, that the color of your skin may influence whether or not you are arrested and labeled a criminal. For example, a Black person in a poor neighborhood could get a criminal record for drinking in public where that is forbidden, while a White person in a park in an affluent neighborhood gets away with a verbal warning.

Now, Springer Nature says it decided on June 16 to throw out the paper, titled A Deep Neural Network Model to Predict Criminality Using Image Processing, and will not run it, as planned, in its research book series, Transactions on Computational Science and Computational Intelligence.

“The paper was submitted to a forthcoming conference for which Springer had planned to publish the proceedings in the book series Transactions on Computational Science and Computational Intelligence,” a spokesperson for the publisher told The Register on Tuesday. “After a thorough peer review process the paper was rejected and therefore will not be published by us.”

We note that Harrisburg University removed from its website a press release heralding the study, and said its authors were working to update the paper to address ethical concerns. “Academic freedom is a universally acknowledged principle that has contributed to many of the world’s most profound discoveries,” the American college said in a statement last month.

“This university supports the right and responsibility of university faculty to conduct research and engage in intellectual discourse, including those ideas that can be viewed from different ethical perspectives. All research conducted at the University does not necessarily reflect the views and goals of this University.”

The controversial study was written by PhD student and New York City police veteran Jonathan Korn, along with Nathaniel Ashby, an assistant professor of cognitive analytics, and Roozbeh Sadeghian, an assistant professor of data analytics. Sadeghian told El Reg he has since removed his name from the latest version of the paper due to “some disagreements among the authors about the applications.”

Korn and Ashby did not immediately respond to The Register's request for comments. ®

Updated to add

Springer Nature has been in touch to outline the timing of its decision not to include the paper in its book series.

"We acknowledge the concern regarding this paper and would like to clarify at no time was this accepted for publication," a spokesperson said.

"It was submitted to a forthcoming conference for which Springer will publish the proceedings of in the book series Transactions on Computational Science and Computational Intelligence and went through a thorough peer review process. The series editor’s decision to reject the final paper was made on Tuesday, 16th June and was officially communicated to the authors on Monday, 22nd June.

"The details of the review process and conclusions drawn remain confidential between the editor, peer reviewers and authors."
