Civil-rights probe: Facebook has completely failed to… Zuck: Look over here! We’ve banned four groups! Go me!

Report slams antisocial network's 'vexing and heartbreaking decisions'

Facebook on Wednesday published an independent-ish report by civil-rights experts into how it deals with misinformation and hate speech on its platform. The dossier wasn't exactly flattering, and the antisocial network immediately tried to undercut it with an announcement about how it had banned four groups from its site.

The civil-rights audit was damning [PDF]. It said decisions being made by the social media giant over controversial content remain “too reactive and piecemeal,” and even after two years of auditing work, it still needs “a more coherent and positive plan of action.”

The report lists a series of changes that Facebook has made following significant pressure from civil-rights groups and lawmakers, but then immediately notes that they “could be obscured by the vexing and heartbreaking decisions Facebook has made that represent significant setbacks for civil rights.”

Most notable, of course, is the decision not to delete, edit, censor, or in any way touch a series of posts by President Trump that were widely condemned, including by other social media networks. In one, the President lied about the risks of mail-in voting; in another he used the racially charged phrase “when the looting starts, the shooting starts,” and later claimed not to know its white supremacist origins.

“Allowing the Trump posts to remain establishes a terrible precedent that may lead other politicians and non-politicians to spread false information about legal voting methods, which would effectively allow the platform to be weaponized to suppress voting,” the report says about the first set of comments.

And the audit says it was “deeply troubled” by Facebook’s decision not to touch the second. It makes plain that the statement crossed a clear and identifiable line: statements that, “especially when made by those in power and targeted toward an identifiable, minority community, condone vigilantism and legitimize violence against that community.”


The report picks apart the justification given by Facebook and by its CEO Mark Zuckerberg, concluding that they “could not be squared with Mark Zuckerberg’s prior assurances that it would take down statements that could lead to ‘real world violence’ even if made by politicians.”

The report goes on to note that: “As the final report is being issued, the frustration directed at Facebook from some quarters is at the highest level seen since the company was founded, and certainly since the Civil Rights Audit started in 2018.”

These bold criticisms are all the more extraordinary given that Facebook poured significant time and effort into ensuring the report would be a positive one. It is to the authors’ significant credit that they did not succumb to corporate pressure and paint, if not a glowing picture, then a numbingly neutral one.

The report provides a series of recommendations for improving the situation and warns that without such changes the platform could become an “echo chamber” of extremism. “The company must recognize that failure to do so can have dangerous (and life-threatening) real-world consequences.”

Faced with such an extraordinary condemnation of its policies, and dire warnings about how it makes decisions and the consequences of them, Facebook naturally enough did what it always does: ignored the report and attempted to undercut it.

Enter Sheryl

“Today, Facebook’s third civil rights audit report is being published, bringing to a close an independent two-year review of our policies and practices by noted civil liberties and civil rights expert Laura Murphy and Megan Cacace, partner in the civil rights law firm Relman Colfax, PLLC. This two-year journey has had a profound effect on the way we think about our impact on the world,” began a blog post by Facebook COO Sheryl Sandberg.

Ignoring the fact that it isn’t a third report but the final version of the same audit, and the fact that the report makes plain the process has not in fact “had a profound effect” on the way Facebook’s executives think, this is at least a promising first sentence.


From there, the post is yet another display of Facebook’s mealy-mouthed approach to any form of criticism, full of platitudes and half-promises with no concrete guarantees or even recognition that there is a problem. Instead, Facebook applauds itself for having an audit in the first place.

“When we agreed to become the first social media company to undertake an audit of this kind, at the encouragement of the civil rights community, no one knew that the final report would be published at a time when racial injustice and police brutality is bringing millions of people to the streets — both at home and abroad — to campaign for change,” Sandberg continues.

“We also had no idea that it would be published at a time when Facebook itself has faced heavy criticism from many in the civil rights community about hateful content on our platform and is subject to a boycott by a number of advertisers.”

As a sidenote, earlier this week Facebook execs held two conference calls with civil-rights groups, who left the meetings castigating the mega-corp for, among other things, making it “abundantly clear that they are not yet ready to address the vitriolic hate on their platform.”

Same old snit

In place of self-reflection, Sandberg simply repeated Facebook’s well-rehearsed PR lines that it “stands firmly against hate” and has “clear policies against hate,” while failing to note that the report specifically highlights how those policies are not applied consistently or effectively.

Instead, Sandberg repeated the same lines Facebook has offered for the past five years, ever since concerns over the content on its platforms first took off. Facebook will “strive constantly to get better and faster” at enforcing its policies, she said.

The web goliath has, yet again, made “real progress” but feels the need to stress how difficult it is: “This work is never finished and we know what a big responsibility Facebook has to get better at finding and removing hateful content.” It is, yet again, “the beginning of the journey, not the end.” And Facebook has found itself, yet again, with “a long way to go.”

The actual contents of the report are boiled down to a series of bullet points near the bottom of the post, while Sandberg uses the top half to praise Facebook for having “made significant progress in a number of critical areas.”

There is a series of “commitments,” again, that talks about “enhancing” and “going further” with policies that already exist, and an equal amount of space is given to highlighting the small areas of improvement that Facebook has made.

But such is Facebook’s absolute unwillingness to look itself in the face or accept criticism that, almost immediately after it published the report and its heavily slanted blog post, the company also published another post highlighting how it had banned four groups it accused of “coordinated inauthentic behavior,” which is the social network’s code for misinformation and propaganda.

“Today, we removed four separate networks for violating our policy against foreign interference and coordinated inauthentic behavior (CIB). These networks originated in Canada and Ecuador, Brazil, Ukraine, and the United States,” began the post, before providing a lengthy and detailed rundown and explanation of each, complete with tantalizing pictures.

So that's all right then. ®
