Detroit Police make second wrongful facial-recog arrest when another man is misidentified by software
Plus: IB students kick up a stink over algorithm grading, and more
In brief A man was charged after he was mistakenly identified as a thief by facial-recognition software used by the Detroit Police Department.
Michael Oliver, a resident of the Motor City, was arrested last year for a crime he didn't commit. He was accused of reaching into a car window to snatch and destroy a cellphone.
But the images of the suspect captured during the incident do not show Oliver. The facial-recognition software couldn't tell the difference, however, and he was wrongfully arrested.
It should have been a giveaway – the suspect had no tattoos, while Oliver's arms are covered with ink. A judge dismissed the case after prosecutors were convinced that there had been a misidentification, the Detroit Free Press reported.
This is the second wrongful arrest linked to the Detroit Police Department's facial-recognition software: earlier this year, officers arrested a Michigan dad on his front lawn in front of his wife and kids over another botched match.
"We warned Robert Williams would not be the only person to be wrongfully accused of a crime they did not commit because of a flawed technology law enforcement should not be using," the American Civil Liberties Union said. Detroit's police chief has already admitted the software misidentifies people 96 per cent of the time.
Didn't get into uni? Blame the algorithms
Students in the International Baccalaureate (IB) programme have been graded by algorithm after their exams were cancelled during the COVID-19 pandemic.
The software deployed by the IB educational foundation frequently marked students down, causing many to lose scholarships or places at university. More than 15,000 students, teachers, and parents have signed an online petition demanding fairer grading, according to Wired.
IB said the software takes into account test scores from previous exams, but has provided little transparency into how it actually works. It released statistics showing that average scores were actually higher than last year's, and that the distribution of grades was similar too.
Fake journos with AI-generated profile photos
Several right-wing media sites have been tricked into spreading political propaganda by publishing articles written by people pretending to be journalists. These hacks operate behind sham online personas, often fronted by fake profile pictures created by computer-vision models.
It's not difficult to pretend to be someone else on the internet. Sites like thispersondoesnotexist.com let anyone pick out a convincing-looking avatar: the images are produced by Nvidia's StyleGAN, and the people depicted don't actually exist.
It's a perfect tool for spinning up fake social media accounts. Some of these forged online personas have masqueraded as journalists to peddle propaganda. A whole network of these dodgy hacks, discovered by The Daily Beast, was created to spout disinformation and opinions about the Middle East and China.
Twitter has suspended 16 of those accounts for breaching "policies on platform manipulation and spam".
AI app to grade tuna
A popular Japanese sushi restaurant chain has rolled out a smartphone app to help its fish buyers assess the quality of fresh tuna with the help of machine-learning algorithms.
Kura Sushi developed the Tuna Scope app so its buyers can inspect the meat remotely from photos rather than travel to the fish markets during the coronavirus pandemic. It processes images of tuna and attempts to estimate the meat's firmness and fat content by appearance alone.
The app then judges the quality of the fish on a "three-point scale", according to Japanese newspaper The Asahi Shimbun. The software uses a computer-vision model trained on more than 4,000 images of pieces of tuna that have been graded by humans. Kura Sushi said that, as of this week, some of the tuna served in its restaurants will have been bought using the app. ®
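To make the idea concrete, here is a minimal sketch of how image-derived features might be mapped onto a three-point grade. The real Tuna Scope model is proprietary and learns its thresholds from the 4,000-plus human-graded images; the feature names (`fat_ratio`, `firmness`), weights, and cut-offs below are invented purely for illustration.

```python
# Illustrative sketch only: a toy stand-in for a learned grading model.
# In practice the features would come from a vision model analysing a
# photo of the cut; here they are assumed inputs in the range [0, 1].

def grade_tuna(fat_ratio: float, firmness: float) -> int:
    """Map hypothetical image-derived features onto a 1-3 grade.

    The weights and thresholds are made up for this example; a real
    system would learn them from human-graded training images.
    """
    score = 0.6 * fat_ratio + 0.4 * firmness  # weighted quality score
    if score >= 0.7:
        return 3  # top grade
    if score >= 0.4:
        return 2  # middle grade
    return 1      # lowest grade

print(grade_tuna(0.8, 0.9))  # well-marbled, firm cut -> 3
print(grade_tuna(0.2, 0.3))  # lean, soft cut -> 1
```

The three return values mirror the app's reported "three-point scale"; everything else is a stand-in for the learned model.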