Facebook apologises after its AI system branded Black people as primates
Plus: Google Health app for NHS hospitals discontinued, and Tesla under the spotlight
In Brief Facebook has apologized for an "unacceptable error" after its AI systems asked folks who watched a British video about a Black man if they wanted to view more content on "primates."
A former Facebook employee spotted the prompt and reported it, and the biz said it was "looking into the root cause."
"As we have said, while we have made improvements to our AI, we know it's not perfect, and we have more progress to make," a spokesperson told The New York Times. "We apologize to anyone who may have seen these offensive recommendations."
This is not the first time Facebook's algorithms have come under the spotlight – in April its job search system was shown to be less than even-handed. There's plenty of work still to do, it seems.
Tesla investigation deepens
The National Highway Traffic Safety Administration (NHTSA) has asked Tesla to provide data detailing how its cars behave around parked emergency vehicles in Autopilot mode.
The request comes days after a woman's 2019 Tesla Model 3 ploughed into a police car and a Mercedes SUV parked on the side of a road in Orlando, Florida. No one was hurt in the accident. She said the car was operating in Autopilot at the time, according to CNBC.
"This office is aware of twelve incidents where a Tesla vehicle operating in either Autopilot or Traffic Aware Cruise Control struck first responder vehicles/scenes, leading to injuries and vehicle damage," Gregory Magno, chief at the agency's Office of Defects Investigation, wrote in a letter [PDF] directed to Eddie Gates, director of Field Quality at Tesla.
Magno also informed Gates that the NHTSA had opened a preliminary evaluation to closely examine all twelve incidents. The agency will be scrutinising all sorts of details, from the software and firmware installed in the car and the vehicle's mileage at the time of the incident to the date and time Autopilot or "full self-driving" mode was activated.
Are LinkedIn algorithms biased against posts discussing diversity and inclusion at work?
Officials at LinkedIn claim its content moderation algorithms aren't biased, nor to blame for why posts discussing racism faced by Black people in the advertising industry are sometimes taken down.
Content on LinkedIn discussing diversity, equity, and inclusion (DE&I) issues faced by Black people in the workplace has mysteriously been removed from people's accounts, according to Fast Company. Users fear the company's automated content-filtering algorithms may be flawed.
But LinkedIn denied its software exhibited this type of systematic bias. Instead, it said, the problem is simply down to buggy code or mistakes in its efforts to moderate posts.
"We have a series of complex algorithms that basically look at a variety of factors to help decide for every post and every comment what virality and distribution it gets," said LinkedIn's head of trust product, Tanya Staples. "And it's largely based on, in a lot of cases, the engagement of other members and who's in somebody's network."
Still, Staples insists that these algorithms don't have racial biases baked into them. But people aren't completely convinced. These systems are proprietary and their inner workings are secret, making it difficult to know how the algorithms decide whether to remove a post or not.
Controversial AI medical app Google Streams is shutting down
Google Streams, the AI app used by NHS hospitals to monitor patients for acute kidney injury, originally developed by Alphabet stablemate DeepMind, is being canned.
Exactly why, however, is murky. As TechCrunch noted, a number of factors contributed to its demise. First, the health unit at DeepMind was swallowed by Google to create a new Google Health department in 2018.
Legal experts said transferring sensitive medical data from millions of patients to another company was unlawful. Second, the five-year contracts working with the NHS hospitals are coming to an end anyway.
There's also one more slight problem: Google Health disbanded last month after its head, David Feinberg, left the company. Staff will continue working on some projects, such as Fitbit, according to Google's AI lead, Jeff Dean. ®
"@GoogleHealth is no longer just a single team, but a significant company-wide effort that touches many of our products. Moving forward the @GoogleHealth name will encompass all our health initiatives." — Jeff Dean (@JeffDean), August 23, 2021