Our amazing industry-leading AI was too dumb to detect the New Zealand massacre live vid, Facebook shrugs

Even when it had a copy, it still couldn't stop 300,000 copies from appearing on its site


Facebook admitted, at best nonchalantly, on Thursday that its super-soaraway AI algorithms failed to automatically detect the live-streamed video of last week's Christchurch mass murders.

The antisocial giant has repeatedly touted its fancy artificial intelligence and machine learning techniques as the way forward for tackling the spread of harmful content on its platform. Image-recognition software can’t catch everything, however, not even with Silicon Valley's finest and most highly paid engineers working on the problem, so Facebook continues to rely on, surprise surprise, humans to pick up the slack in moderation.

There’s a team of about 15,000 content moderators who review, and allow or delete, piles and piles of psychologically damaging images and videos submitted to Facebook on an hourly if not minute-by-minute basis. The job can be extremely mentally distressing, so the ultimate goal is to eventually hand that work over to algorithms. But there’s just not enough intelligence in today’s AI technology to match cube farms of relatively poorly paid contractors.

Last Friday, a gunman used Facebook Live to broadcast the first 17 minutes of the Al Noor Mosque slayings in Christchurch, New Zealand, an attack that left 50 people dead and many injured. “This particular video did not trigger our automatic detection systems,” Facebook admitted this week. The Silicon Valley giant said it later removed 1.5 million copies of the footage from its website as trolls, racists, and sickos shared the vid across the web.

Facebook blamed a lack of training data for its AI software's failure to spot the video, both as it was broadcast and soon after, when copies were shared across its platform. Today's neural networks need to inspect thousands or millions of examples to learn the patterns that let them identify things like pornographic or violent content.

“This approach has worked very well for areas such as nudity, terrorist propaganda and also graphic violence where there is a large number of examples we can use to train our systems,” Facebook said. “However, this particular video did not trigger our automatic detection systems. To achieve that we will need to provide our systems with large volumes of data of this specific kind of content, something which is difficult as these events are thankfully rare.”
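
To make the training-data point concrete, here is a minimal, purely illustrative sketch of the kind of binary frame classifier being described. Nothing here is Facebook's actual code: the tiny network, the labels, and the random stand-in tensors are all invented for the example, which only shows why a model like this is useless without huge volumes of labelled footage to learn from.

```python
# Illustrative sketch only: a toy "harmful / not harmful" frame classifier.
# The architecture, labels, and random stand-in data are invented for this example.
import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classify = nn.Linear(32, 1)  # single logit: probability the frame is harmful

    def forward(self, x):
        return self.classify(self.features(x).flatten(1))

# Stand-in data: random tensors in place of the millions of labelled video frames
# a production system would actually need.
frames = torch.randn(64, 3, 224, 224)          # batch of RGB frames
labels = torch.randint(0, 2, (64, 1)).float()  # 1 = harmful, 0 = benign

model = FrameClassifier()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(3):  # real training runs over vastly more data, for far longer
    optimiser.zero_grad()
    loss = loss_fn(model(frames), labels)
    loss.backward()
    optimiser.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

Real systems train on millions of labelled frames; with only a handful of genuine examples of an event like this, there is simply not enough signal for a model to learn from.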

Computer vision systems are also prone to false positives: it is difficult for them to tell whether gunfire comes from a real terrorist attack, an action movie, or a first-person shooter game. So human moderators who can tell the difference will remain necessary until Facebook can train its AI better.
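
That trade-off is easy to demonstrate with synthetic numbers. The sketch below is not Facebook's data; the score distributions are entirely made up. It simply shows how loosening a classifier's decision threshold to catch more genuine attack footage also buries human reviewers in falsely flagged game and movie clips, while tightening it lets real incidents slip through.

```python
# Purely illustrative: synthetic "violence" scores for benign clips (games, films)
# and a handful of genuinely violent ones, showing the precision/recall trade-off
# a moderation classifier faces as its decision threshold moves.
import numpy as np

rng = np.random.default_rng(1)
benign_scores = rng.beta(2, 5, 100_000)   # most benign clips score low, but some score high
violent_scores = rng.beta(5, 2, 50)       # real incidents are rare and score higher on average

for threshold in (0.5, 0.7, 0.9):
    false_positives = int((benign_scores >= threshold).sum())
    caught = int((violent_scores >= threshold).sum())
    print(f"threshold {threshold}: caught {caught}/50 real clips, "
          f"flagged {false_positives} benign clips for human review")
```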

But even with thousands of people combing through flagged content, the video wasn’t removed until police in New Zealand directly reached out to Facebook to take it down, despite a user reporting the footage as inappropriate 12 minutes after the live broadcast ended. That lone netizen flagged the footage as undesirable for “reasons other than suicide,” and as a result it appears to have been bumped down the moderators' list of priorities.

“As a learning from this, we are re-examining our reporting logic and experiences for both live and recently live videos in order to expand the categories that would get to accelerated review,” Facebook's Guy Rosen, veep of product management, explained.

Copies of the video are hard to detect, visually and aurally

After the broadcast ended, the video was viewed another 4,000 times, and one or more users captured the footage and smeared it across other sites, including the imageboard 8chan. Australian and New Zealand telcos blocked access to that website and others after the attack to curb the spread of the material.

In the first 24 hours, Facebook's systems, once aware of the footage, managed to remove 1.2 million copies of the video as they were being uploaded to its platform. Even so, a further 300,000 copies made it past the filters and were removed only after they had been posted.


This is because there are ways to skirt around the algorithms. Footage can be recut, trimmed by a few frames, or re-encoded at varying quality levels to evade detection. Facebook tried to identify the video by matching audio content, too, but that is probably not all that effective either, as it’s easy to strip out the original sound and replace it with something innocuous.
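
Facebook has not detailed how its matching technology works, but the general approach to catching near-duplicates is perceptual fingerprinting rather than exact byte comparison. The sketch below uses a simple average hash (a standard perceptual-hashing trick, not necessarily what Facebook uses) over made-up frames, and is only meant to illustrate the arms race described above: a mild re-encode still matches the original's fingerprint, while a trivial edit such as mirroring the picture pushes it far outside any sensible matching threshold.

```python
# Illustrative sketch only: an "average hash" comparison over synthetic frames.
# The frames, the edits, and the hash size are invented for this example.
import numpy as np

def average_hash(frame: np.ndarray, size: int = 8) -> np.ndarray:
    """Reduce a greyscale frame to size x size block means, threshold against the mean."""
    h, w = frame.shape
    frame = frame[: h - h % size, : w - w % size]  # crop so the blocks divide evenly
    blocks = frame.reshape(size, (h - h % size) // size,
                           size, (w - w % size) // size).mean(axis=(1, 3))
    return (blocks > blocks.mean()).flatten()      # 64-bit fingerprint

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(0)
y, x = np.mgrid[0:720, 0:1280].astype(float)
original = x / 1280 * 200 + y / 720 * 55                                # stand-in frame
recompressed = original * 0.9 + 10 + rng.normal(0, 3, original.shape)   # dimmer, noisier re-encode
mirrored = original[:, ::-1]                                            # trivially edited variant

print("bytes identical:   ", np.array_equal(original, recompressed))    # False
print("re-encode distance:", hamming(average_hash(original), average_hash(recompressed)))  # 0: still matches
print("mirrored distance: ", hamming(average_hash(original), average_hash(mirrored)))      # large: evades the match
```

Production systems use far more robust video and audio fingerprints than this, but the same principle, and the same cat-and-mouse dynamic, applies.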

“In total, we found and blocked over 800 visually-distinct variants of the video that were circulating,” Rosen said.

"This is different from official terrorist propaganda from organizations such as ISIS – which while distributed to a hard core set of followers, is not rebroadcast by mainstream media organizations and is not re-shared widely by individuals."

Facebook isn't giving up, however, and hopes to use some of that $22bn profit it banked last year to improve its “matching technology” so that it can “stop the spread of viral videos of this nature,” and react faster to live videos being flagged.

“What happened in New Zealand was horrific. Our hearts are with the victims, families and communities affected by this horrible attack,” Rosen concluded. ®

