Facebook admits it was 'too slow' to ban Myanmar regime

But, hey, it's not like it had been warned hundreds of times over several years...


Analysis Facebook has banned 20 organizations and individuals in Myanmar, including the country's commander-in-chief, following a United Nations report formally accusing the military regime of serious human rights abuses.

Despite having received years of complaints about how the authorities were using Facebook to spread hateful rhetoric about the country's Rohingya Muslim minority, the social media giant had failed to act effectively.

It finally did so in the wake of the UN report detailing allegations of murder, imprisonment and sexual violence against the Rohingyans, and issued yet another apology in which it said it had been "too slow" to act against the "hate and misinformation" that was pushed through its service.

In a statement on Monday, Facebook cited the UN report as justification for banning the organizations as well as General Min Aung Hlaing, and noted it was the first time it had banned a state actor from its platform.

"We want to prevent them from using our service to further inflame ethnic and religious tensions," the company said, promising to keep an eye on things in future. But the company has long been warned about how its service was being used as a weapon and critics note is has done little to effectively tackle the problem.

Facebook was first warned back in 2013 that its service was being used to spread dangerously false and hateful messages in Myanmar - and did nothing. Then, in the middle of 2014, its service was used to spread false rumors that a Muslim man had raped a Buddhist woman. That sparked a series of riots that killed two people, injured many more, and caused the Myanmar government to call for a meeting with Facebook execs.

Yeah, could you email me a link?

Facebook reportedly told government representatives to email any future examples of dangerous information, promising to review them. That system was woefully inadequate, however, and may even have encouraged the authorities to push or post their own divisive messages.

One year later, in 2015, civil society groups started complaining loudly that the issue was getting worse and pointed to the fact that Facebook only had two Burmese-speaking moderators as evidence that it was not taking the issue seriously.

But it was only three years after that, when the US Congress raised concerns at a meeting with Facebook CEO Mark Zuckerberg, that the company started making serious efforts and blocked some of the many thousands of hate-filled messages on its site.

The Congressional comments also set off an investigation by Reuters, which issued a report earlier this month detailing over 1,000 posts and videos that viciously attacked Rohingyans, accusing them of being maggots, dogs, pigs, rapists and so on, often with explicit threats of physical violence.

Meanwhile, Facebook refused to say how many Burmese-speaking moderators it had, claiming that such a figure would be "misleading" because moderators don't need to speak a language to tackle issues like nudity.

Of course, what campaigners were concerned about was not nudity but posts like the review of a Rohingyan restaurant that read: "We must fight them the way Hitler did the Jews, damn kalars!"

After the Reuters report was published, Facebook finally admitted in a blog post that it had 60 Burmese-speaking moderators and was planning to hire another 40. But in the same paragraph it stressed that it was working on "building artificial intelligence tools that help us identify abusive posts" and that its team was focused on "working with civil society and building digital literacy programs for people in Myanmar."

Palming the problem off

Previously, Facebook has suggested that the solution to hate speech lies in the hands of civil society groups reporting incidents to Facebook, rather than, say, active moderation on Facebook's part.

Those same civil society groups have complained that Facebook's AI approach is largely worthless because it works by searching for specific Burmese keywords rather than applying any genuinely intelligent reading of posts, including their context.
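To illustrate the criticism, here is a minimal, hypothetical sketch of a keyword filter of the kind the groups describe; it is not Facebook's actual system, and the blocklist and example posts are invented. Such a filter fails in both directions: it flags posts that merely mention a blocklisted slur, and it waves through explicit threats that avoid the keywords entirely.

```python
# Illustrative sketch only -- not Facebook's actual moderation system.
# Shows why naive keyword matching misses context.

HATE_KEYWORDS = {"kalar"}  # hypothetical blocklist of slurs

def keyword_filter(post: str) -> bool:
    """Flag a post if any word matches the blocklist."""
    words = post.lower().split()
    return any(word.strip(",.!?") in HATE_KEYWORDS for word in words)

# A post merely reporting on the slur gets flagged...
print(keyword_filter("Activists condemn use of the word kalar"))  # True
# ...while a threatening post that avoids the keyword sails through.
print(keyword_filter("We must drive them out of the village"))    # False
```

Catching the second kind of post requires reading intent and context, which is exactly what the civil society groups say the keyword-driven approach lacks.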

Facebook has also not deleted posts that have been flagged as hateful, deciding only to "de-rank" them; only posts with clear, explicit threats of violence are deleted. Questions over the efficacy of its approach have been repeatedly met with Facebook PR-speak about identifying abuses, rounded off with touchy-feely nonsense about people enjoying "the benefits of connectivity."

UN reports are not known for their speediness, which makes it all the more worrying that a company working at internet speed would wait until the report's publication to take significant action.

In fact, Facebook's persistent failure to get ahead of the abuse on its platform, relying only on small, specific actions saturated in public relations spiel once it has been thoroughly embarrassed, has drawn renewed attention to the company's shortcomings.

Cultural failings

As an engineer-led company with a highly controlling central figure in CEO Zuckerberg, Facebook has many of the same characteristics as authoritarian regimes and totalitarian systems, academics have noted.

Facebook certainly has the money to fix many of its problems but stubbornly refuses to do so, relying instead on fake apologies, PR pushes around specific actions, misleading options, and an insistence that you can code your way around any human problem.

In short, Facebook is a company that is completely incapable of handling any problems larger than a few thousand well-educated American students. While its platform's popularity has grown beyond any reasonable expectation, its founder – and the central figure in its evolution – has barely left the dorm room. ®
