Aid groups use AI-generated ‘poverty porn’ to juice fundraising efforts

Researchers accuse tech firms of profiting from exploitative AI imagery

The starving child whose picture broke your heart when you saw it on a charity website may not be real. Global health researchers say that stock image companies like Adobe are profiting from AI-generated "poverty porn" that non-profits are using to drum up donations.

In an article published in Lancet Global Health, Arsenii Alenichev, a researcher at the Institute of Tropical Medicine in Antwerp, Belgium, reports that, despite years of pushback in the global health community to discourage the exploitative use of images of suffering, generative AI has compounded the problem by making image generation easily accessible and affordable.

Alenichev and co-authors Sonya de Laat, Mark Hann, Patricia Kingori, and Koen Peeters Grietens recently collected more than 100 AI-generated images from various social media sites such as LinkedIn and X. Many of these images, they say, "replicate the emotional intensity and visual grammar of poverty porn and dated fundraising imagery." The term "poverty porn" isn't precisely defined, but generally refers to images or videos that exaggerate poverty or suffering to evoke guilt and drive donations.

Concerns about the exploitative use of real imagery go back many years, and have tripped up respected organizations like Doctors Without Borders/Médecins Sans Frontières (MSF), which issued an apology about photo ethics concerns in 2022.

In 2023, academics called out the exploitative use of images of malnourished children for fundraising by non-profits and highlighted the bias embedded in AI images, "despite the AI developers' stated commitment to ensure non-abusive depictions of people, their cultures, and communities."

That same year, Amnesty International removed an AI-generated image of a protester in response to criticism. One of the defenses offered for showing a fake protester is that AI-generated people can't be targeted for retaliation.

Meanwhile, AI firms have made voluntary commitments to disallow image-based sexual abuse but have content policies that don't really address "poverty porn," not to mention other problems like disinformation.

Alenichev et al. observe that, for smaller organizations, AI-generated images of suffering children, farmers, or patients, framed with moralizing text, can drive an entire ad campaign. And these groups, they say, appear to believe that, by not showing real people, they're exempt from common ethical concerns that come up when presenting suffering.

"A troubling epitome of this trend might be observed in the fact that globally influential tech companies, such as Adobe, are profiteering from AI-generated images depicting stereotypical and exploitative imagery of Black and Brown bodies experiencing vulnerability and poverty," the authors state.

Asked to comment, Adobe responded by providing background information about how Adobe Stock is a marketplace through which creators can upload and license content. The company allows generative AI subject to submission standards.

Freepik, a creative content platform service based in Málaga, Spain, currently provides a way to filter for AI images – to exclude them or see nothing but AI images. Those searching AI-generated images with the keyword "poverty" will see mostly people with black and brown skin, which may or may not be seen as problematic.

Jose Florido, chief market development officer at Freepik, explained in an interview that the company provides a platform that connects content makers and content buyers.

"We review everything that is uploaded to the current regulation," Florido explained. "And if it's okay with the current regulation and if there is demand for it, we try to kind of stay away from what is the potential use because, for every type of image, even for very biased images, there are potentially good and totally okay use cases. And there are obviously use cases that can be problematic."

Freepik, in other words, polices images as required by law, but concerns like "poverty porn" – which isn't clearly defined – are left to those who source their images from the platform.

To the extent that lawmakers choose to address this issue, Florido said that the company would welcome clear rules, especially if they're part of an internationally accepted framework. 

Florido said Freepik has been trying to bring diversity to its images and not just with regard to AI. "There's a lot of bias in development photography," he said. "In the past, for example, if you searched for 'CEO,' you'd only see men. So we balance it to show more diversity."

But it's a difficult issue that's never quite done, said Florido, pointing to how Google's efforts to make its AI images more diverse resulted in depictions of specific historical figures being implausibly diverse.

Fairpicture, a digital content business based in Bern, Switzerland, claims to provide photo and video production services that are "dignified, authentic, and free of stereotypes." The company's site says that it "does not serve cliches" and that all the subjects of its photos are fairly paid for their appearances in the pictures.

While generative AI may address the communication and marketing needs of budget-constrained organizations, it may also end up eroding public trust and harming global health, the researchers say. 

What's more, generative AI appears to be ill-suited for fundraising. In 2022, a separate group of academics looked at the impact of synthetic (AI) content on charitable giving. They found that when people are aware an image is fake or generated by AI, it "has a negative impact on donation intentions."

Alenichev et al. argue that the rapid development and adoption of generative AI requires accountability and transparency. 

"The use of AI and the prompts that underlie the images should be disclosed, these prompts especially could curtail the amplification of poverty porn via AI imagery," they wrote. "It is now pivotally important to support local photographers and artists and their attempts to create global health representation beyond the established norms." ®
