Meta algorithms push Black people more toward expensive universities, study finds

Just as we saw with housing, Facebook giant's advertising system seems to treat Whites and POC differently


Special report Meta's algorithms for presenting educational ads show signs of racial bias, according to researchers from Princeton University and the University of Southern California.

This finding is described in a paper titled "Auditing for Racial Discrimination in the Delivery of Education Ads," by Basileal Imana, a postdoctoral researcher at Princeton University; Aleksandra Korolova, assistant professor of computer science and public affairs at Princeton University; and John Heidemann, research professor of computer science at the University of Southern California.

The paper, provided to The Register, is scheduled to be presented this week at the ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT) in Rio de Janeiro, Brazil.

"We find that Meta’s algorithms steer ads to for-profit universities and universities with historically predatory marketing practices to relatively more Black users than the ads for public universities," Korolova told The Register.

"Our work thus demonstrates yet another domain for which Meta shapes access to important life opportunities in a racially biased manner, and shows concerns of discrimination extend beyond the current scope of solutions, which have been limited to housing, employment and credit."

Meta, when it was known as Facebook, was sued in 2017 by the US Department of Housing and Urban Development (HUD) following a 2016 report by non-profit ProPublica about how the internet goliath's ad platform allowed housing advertisers to avoid showing ads to people of a particular race. The US Fair Housing Act prohibits housing discrimination based on race, among other characteristics.

Five years later, in June 2022, Meta settled HUD's discrimination charges with the Justice Department, promising to stop using its allegedly discriminatory algorithm for housing ads. The social ad biz also committed to develop "a new system to address racial and other disparities caused by its use of personalization algorithms in its ad delivery system for housing ads," according to the Justice Department.

The authors observe that while there's been progress toward making housing, employment, and credit ads less biased, concerns remain about ad delivery discrimination in domains such as insurance, education, healthcare, and other public accommodations.

As if to demonstrate that point, the American Civil Liberties Union last week asked the Federal Trade Commission to investigate hiring technology firm Aon over allegations "that Aon is deceptively marketing widely used online hiring tests as 'bias-free' even though the tests discriminate against job seekers based on traits like their race or disability."

To assess algorithmic bias in education ads, the researchers looked at the differences in the presentation of ads for for-profit colleges and for public colleges on the basis of race.

"The insight in our method is to employ a pair of education ads for seemingly equivalent opportunities to users, but with one of the opportunities, in fact, tied to a historical racial disparity that ad delivery algorithms may propagate," the paper explains.

"If the ad for the for-profit college is shown to relatively more Black users than the ad for the public college, we can conclude that the algorithmic choices of the platform are racially discriminatory."
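The paired-ad comparison the researchers describe boils down to asking whether the fraction of Black users reached by the for-profit college's ad is significantly higher than the fraction reached by the paired public college's ad. A minimal sketch of that logic, using a standard two-proportion z-test with hypothetical delivery numbers (these figures and function names are illustrative, not the study's actual data or code):

```python
# Illustrative sketch of the paired-ad audit logic: compare the share of
# Black users reached by a for-profit college ad vs a paired public
# college ad. All counts below are hypothetical.
from math import sqrt, erf


def two_proportion_z(k1: int, n1: int, k2: int, n2: int) -> float:
    """Z statistic for the difference between two proportions
    (e.g. fraction of Black users reached by ad 1 vs ad 2)."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se


def one_sided_p_value(z: float) -> float:
    """P(Z >= z) under the standard normal distribution."""
    return 1 - 0.5 * (1 + erf(z / sqrt(2)))


# Hypothetical delivery counts: (Black users reached, total users reached)
for_profit = (620, 1000)  # ad for a for-profit college
public = (500, 1000)      # paired ad for a public college

z = two_proportion_z(*for_profit, *public)
p = one_sided_p_value(z)
print(f"z = {z:.2f}, one-sided p = {p:.6f}")
```

With these made-up counts the gap is large and the p-value is tiny, which is the kind of signal the paper interprets as racially skewed delivery; the study's real analysis and statistics are of course those in the paper itself.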

The findings, the authors claim, "provide strong evidence that Meta's algorithms shape the racial demographics of who sees which education opportunities, providing new evidence of Meta's potential long-term impact on careers and the financial well-being of individuals."

Predatory

Among other things, the data suggests that Meta disproportionately shows ads for schools that have a history of predatory practices [PDF] to Black individuals.

What's more, the academics argue that their findings could have legal consequences for Meta under the legal doctrine of disparate impact.

Korolova conceded that only Meta knows how its ad algorithms work. "We suspect that Meta's algorithms are not using race directly and the algorithmic effects we observe are driven by proxies and other historical data Meta may rely upon," she explained.

Asked whether Meta's prior legal settlement to resolve housing discrimination claims had been insufficient to motivate the company to tackle algorithmic fairness more broadly, Korolova agreed that Meta has taken a narrow view of compliance.

"The DoJ settlement with Meta was focused entirely on discrimination in Meta’s housing advertising system," said Korolova. "In addition to housing, Meta committed to apply the Variance Reduction System (VRS) also to employment and credit ads, perhaps in order to preempt lawsuits based on the work that showed discrimination in employment ad delivery."

"Our work on discrimination in ad delivery in the education domain is the first work (as far as we know) to provide evidence in a domain outside of housing, employment and credit; and so up till now there may have not been sufficient external pressure for Meta to address algorithmic bias in its ad delivery more broadly."

At the same time, Korolova argued that prior to the DoJ settlement, it appears Meta used the same ad delivery algorithms across all ad categories. So Meta could have made a greater effort to understand and address algorithmic bias in ad delivery more broadly when it announced it would try to make its algorithm more fair.

"Exactly what advertising domains should be subject to fairness is a legal question," observed Korolova. "However, if Meta is sincere in wanting to broadly address bias, one would expect them to apply their efforts broadly. Their choice to apply VRS to only housing, employment, and credit, thus far, suggests their goals are more limited. As advocates for fairness, we would encourage a broader interpretation."

Switch it off

Asked about what remedy she and her colleagues would prefer to address algorithmic discrimination, Korolova told us: "We’d like Meta to turn off its algorithmic ad delivery optimization in all advertising domains that relate to life opportunities, civil rights, and societally important topics (such as education, insurance, healthcare, etc.). Identification of these topics could perhaps be done in consultation with the community.

"In lieu of or in addition to that, we would like Meta to enable individuals to meaningfully specify what information about them can be or cannot be used by Meta for ad personalization in these domains.

"Finally, we would love to see greater transparency in Meta's algorithms and their impacts via capabilities for independent auditing for public-interest researchers (e.g., along the lines specified in our other work that also protects privacy and company proprietary information)."

In a statement to The Register, Meta spokesperson Daniel Roberts said, "Addressing fairness in ads is an industry-wide challenge and we've been collaborating with civil rights groups, academics, and regulators to advance fairness in our ads system.

"Our advertising standards do not allow advertisers to run ads that discriminate against individuals or groups of individuals based on personal attributes such as race and we are actively building technology designed to make additional progress in this area." ®
