
Twitter's AI image-crop algo is biased towards people who look younger, skinnier, and whiter, bounty challenge at DEF CON reveals

Oh, it's also more likely to leave people in wheelchairs out of the picture

Updated Twitter’s image-cropping AI algorithm favors people who appear younger, thinner, and fairer-skinned, as well as those who are able-bodied.

The saliency algorithm is used to automatically crop some pictures posted on the social media platform. It focuses on the most interesting parts of the image to catch people’s attention as they scroll through their Twitter feeds. Last year, netizens discovered the tool preferred photographs of women over men, and those with lighter skin over darker skin.
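As a rough illustration only (not Twitter's actual implementation, which has not been described here), saliency-based cropping can be sketched as: score every pixel for "interestingness", then centre a fixed-size crop window on the highest-scoring point, clamped so the window stays inside the image:

```python
def crop_from_saliency(saliency, crop_h, crop_w):
    """Return (top, left) of a crop_h x crop_w window centred on the
    most salient point. `saliency` is a 2D list of per-pixel scores."""
    h, w = len(saliency), len(saliency[0])
    # Locate the coordinates of the maximum saliency score.
    best_y, best_x, best = 0, 0, float("-inf")
    for y in range(h):
        for x in range(w):
            if saliency[y][x] > best:
                best, best_y, best_x = saliency[y][x], y, x
    # Centre the window on that point, then clamp it to the image edges.
    top = min(max(best_y - crop_h // 2, 0), h - crop_h)
    left = min(max(best_x - crop_w // 2, 0), w - crop_w)
    return top, left

# Toy 4x6 saliency map with a hotspot near the top-right corner.
sal = [[0, 0, 0, 0, 9, 0],
       [0, 1, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0],
       [0, 0, 2, 0, 0, 0]]
print(crop_from_saliency(sal, 2, 3))  # → (0, 3): the window hugs the hotspot
```

The bias findings below all concern the scoring step: whatever the model rates as most salient determines who stays in the preview and who gets cropped out.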

Engineers at Twitter’s ML Ethics, Transparency and Accountability (META) team later confirmed these biases. In an attempt to discover other potential flaws in its image-cropping algorithm, the group sponsored an algorithmic bias bounty competition hosted at this year’s DEF CON hacking conference in Las Vegas and organised by AI Village, a community of hackers and data scientists working at the intersection of machine learning and security.

The top three results, announced this week, revealed Twitter’s saliency algorithm preferred people who appeared more conventionally attractive, favored English text over Arabic, and was more likely to crop out people in wheelchairs.

The winner, Bogdan Kulynych, a graduate student at the École polytechnique fédérale de Lausanne in Switzerland, was awarded $3,500. He generated a series of fake faces and tweaked their appearances to test which ones were ranked highest in saliency scores by Twitter’s algorithm.

“The target model is biased towards...depictions of people that appear slim, young, of light or warm skin color and smooth skin texture, and with stereotypically feminine facial traits,” he said.
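In outline, Kulynych's approach treats the saliency model as a black-box scorer: perturb a generated face along some attribute and keep any change that raises the score, then inspect which attributes the search pushed toward. A minimal hill-climbing sketch of that idea, with a hypothetical `toy_score` standing in for the real model (everything here is illustrative, not his actual code):

```python
import random

def maximise_saliency(score, attrs, step=0.1, iters=200, seed=0):
    """Greedy black-box search: nudge one attribute at a time and keep
    the change whenever the (opaque) saliency score improves."""
    rng = random.Random(seed)
    attrs = dict(attrs)
    best = score(attrs)
    for _ in range(iters):
        key = rng.choice(sorted(attrs))
        trial = dict(attrs)
        trial[key] += rng.choice([-step, step])
        s = score(trial)
        if s > best:
            attrs, best = trial, s
    return attrs, best

# Hypothetical stand-in for Twitter's model: a toy scorer that, like the
# contest finding, rewards lighter skin tones and younger-looking faces
# (both encoded here as values closer to 0).
toy_score = lambda a: -(a["skin_tone"] ** 2) - (a["age"] ** 2)

final, s = maximise_saliency(toy_score, {"skin_tone": 0.8, "age": 0.6})
# The search should drift both attributes toward the toy scorer's optimum,
# revealing the direction of the model's preference.
```

The real contest entry worked on generated face images rather than attribute vectors, but the logic is the same: whichever direction of change reliably raises the saliency score exposes what the model prefers.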

Second place went to Halt AI, a Canadian startup, which found the problematic tool had spatial biases: it was more likely to crop out people sitting in wheelchairs, who tend to appear lower in photographs, compared to those standing upright. Elderly people with white or gray hair were also more likely to be left out of the image previews created by the algorithm. Halt AI won $1,000 in funds.

Roya Pakzad, who founded Taraaz Research, a non-profit org focused on technology and human rights, came in third place. She translated the text in memes from English to Arabic and fed both versions to the saliency algorithm; the software favored the English ones. In other words, Twitter's algorithm favors English text, and thus users more likely to be in Western countries, over other languages. She was awarded $500.

Twitter said it stopped using the auto image-cropping tool on its mobile app in March. Rich Harang, a machine-learning researcher volunteering for AI Village, applauded Twitter for sponsoring the challenge. “Twitter has the same problem now that anyone who runs a bug bounty contest runs: how to fix what the participants found,” he told The Register.

“The results have given Twitter several new examples of representational harm that their saliency model caused, but maybe more importantly, the new tools and approaches that the contest participants developed can help Twitter in continuing to look for additional bias issues beyond representational harms. Twitter's next step is to figure out how to take these new tools and mitigate the new harms that have been discovered.”

The Register has contacted Twitter for comment. ®

Updated to add

The Register understands that rather than use the results of the bounty program it sponsored to harden its image-cropping software, Twitter will instead stop using the code entirely.

The social network has been slowly winding down its use of the software: at present, images are automatically cropped by the algorithm only when they're included in links or when users upload multiple photos at once. Eventually that will be stopped, too.
