AI can now tell if you're a criminal or not
Have a small mouth, fat lips, close-set eyes? Oh dear
Through machine learning, researchers have revived the age-old criminology exercise of trying to tell criminals apart from law-abiding citizens from their faces alone.
Physiognomy, the practice of judging a person’s character from appearance alone, has been around since ancient Greece and was once widely accepted by philosophers. Although the theory has long since been discredited and discarded, studies still crop up now and again.
Xiaolin Wu and Xi Zhang, researchers at Shanghai Jiao Tong University in China, released a controversial paper on arXiv, an online pre-print server – it has not been peer reviewed or formally published.
They singled out three facial features that can supposedly indicate whether a person is more likely to be a criminal: upper lip curvature, the distance between the inner corners of the eyes, and the angle from the nose tip to the two mouth corners (the nose-mouth angle).
It’s bad news for those with smaller mouths, curvier upper lips and closer-set eyes, as you apparently look more like a crook. On average, criminals had a 19.6 per cent smaller nose-mouth angle, a 23.4 per cent larger upper lip curvature, and a 5.6 per cent shorter distance between the inner corners of the eyes.
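The paper does not publish its measurement code, but the three quantities are simple geometry over facial landmark coordinates. A minimal sketch of how such features could be computed – with hypothetical landmark points supplied by the caller, not the authors' actual pipeline – might look like this:

```python
import numpy as np

def nose_mouth_angle(nose_tip, mouth_left, mouth_right):
    """Angle at the nose tip subtended by the two mouth corners, in degrees."""
    v1 = np.asarray(mouth_left, dtype=float) - np.asarray(nose_tip, dtype=float)
    v2 = np.asarray(mouth_right, dtype=float) - np.asarray(nose_tip, dtype=float)
    cos_theta = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

def eye_inner_distance(inner_left, inner_right):
    """Euclidean distance between the inner corners of the eyes."""
    return float(np.linalg.norm(np.asarray(inner_left, dtype=float)
                                - np.asarray(inner_right, dtype=float)))

def upper_lip_curvature(lip_left, lip_top, lip_right):
    """Height of the upper-lip midpoint above the corner-to-corner chord,
    normalised by mouth width -- a simple proxy for lip curvature."""
    left, top, right = (np.asarray(p, dtype=float)
                        for p in (lip_left, lip_top, lip_right))
    width = np.linalg.norm(right - left)
    chord = (right - left) / width          # unit vector along the mouth
    rel = top - left
    perp = rel - (rel @ chord) * chord      # component perpendicular to the chord
    return float(np.linalg.norm(perp) / width)
```

For example, a nose tip at the origin with mouth corners at (-1, -1) and (1, -1) gives a 90-degree nose-mouth angle. How exactly the paper defines and normalises these measurements is an assumption here.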
“Unlike a human examiner/judge, a computer vision algorithm or classifier has absolutely no subjective baggages, having no emotions, no biases whatsoever due to past experience, race, religion, political doctrine, gender, age, etc, no mental fatigue, no preconditioning of a bad sleep or meal,” the paper said.
It's true that machines have no emotions or consciousness that could make them subjective, but that doesn't mean the data they learn from can't be biased.
A dataset of 1,856 facial images was “controlled” to account for “race, gender, age and facial expressions,” and nearly half were pictures of convicted criminals. Each image was fed into four classifiers – a mixture of standard classification algorithms and a convolutional neural network – to analyze the relationship between facial features and criminality.
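The comparison the paper describes – several off-the-shelf classifiers trained on feature vectors labelled by conviction status – can be sketched as follows. This is not the authors' code; the features and labels below are synthetic placeholders, and the classifier choices (logistic regression, k-nearest neighbours, an SVM) are standard examples, not a claim about the paper's exact models:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
# Hypothetical 3-D feature vectors:
# [nose-mouth angle, upper lip curvature, eye inner-corner distance]
X = rng.normal(size=(n, 3))
y = rng.integers(0, 2, size=n)  # random labels standing in for conviction status

results = {}
for name, clf in [("logistic regression", LogisticRegression()),
                  ("k-nearest neighbours", KNeighborsClassifier()),
                  ("SVM (RBF kernel)", SVC())]:
    # Mean 5-fold cross-validated accuracy for each classifier.
    results[name] = cross_val_score(clf, X, y, cv=5).mean()

for name, acc in results.items():
    print(f"{name}: {acc:.2f} mean CV accuracy")
```

On purely random labels like these, accuracies hover near chance – which is also the sanity check a sceptic would want to see run against the paper's own data.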
It’s not the first time an AI application has been dubious. Beauty.ai, which markets itself as the first international beauty contest judged by AI, came under fire this year for effectively letting skin color and ethnicity act as judging parameters in the competition.
Results showed the machine had a preference for lighter skin, and the crowned beauty kings and queens were nearly all white. ®