Our amazing industry-leading AI was too dumb to detect the New Zealand massacre live vid, Facebook shrugs
Even when it had a copy, it still couldn't stop 300,000 copies from appearing on its site
Facebook admitted, somewhat nonchalantly, on Thursday that its super-soaraway AI algorithms failed to automatically detect the live-streamed video of last week's Christchurch mass murders.
The antisocial giant has repeatedly touted its fancy artificial intelligence and machine learning techniques as the way forward for tackling the spread of harmful content on its platform. Image-recognition software can’t catch everything, however, not even with Silicon Valley's finest and highly paid engineers working on the problem, so Facebook continues to rely on, surprise surprise, humans to pick up the slack in moderation.
There’s a team of about 15,000 content moderators who review, and allow or delete, piles and piles of psychologically damaging images and videos submitted to Facebook on an hourly if not minute-by-minute basis. The job can be extremely mentally distressing, so the ultimate goal is to eventually hand that work over to algorithms. But there’s just not enough intelligence in today’s AI technology to match cube farms of relatively poorly paid contractors.
Last Friday, a gunman live-streamed on Facebook Live the first 17 minutes of the Al Noor Mosque slayings in Christchurch, New Zealand, an attack that left 50 people dead and many injured. “This particular video did not trigger our automatic detection systems,” Facebook admitted this week. The Silicon Valley giant said it later removed 1.5 million copies of the footage from its website as trolls, racists, and sickos shared the vid across the web.
Facebook blamed its AI software's failure to spot the video, both during the broadcast and soon after as it was shared across its platform, on a lack of training data. Today's neural networks need to inspect thousands, if not millions, of examples to learn the patterns that identify things like pornographic or violent content.
“This approach has worked very well for areas such as nudity, terrorist propaganda and also graphic violence where there is a large number of examples we can use to train our systems,” Facebook said. “However, this particular video did not trigger our automatic detection systems. To achieve that we will need to provide our systems with large volumes of data of this specific kind of content, something which is difficult as these events are thankfully rare.”
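For the non-boffins, here is a toy sketch, in Python with PyTorch, of why a classifier starved of positive examples learns to shrug. Every number, feature size, and label below is invented for illustration; this is emphatically not Facebook's code.

```python
# Toy sketch: a binary "violent content" classifier needs labeled
# examples to learn from. All data and dimensions here are made up.
import torch
from torch import nn

# Pretend each video frame has been reduced to a 512-dim feature vector.
features = torch.randn(1000, 512)          # 1,000 labeled frames
labels = torch.zeros(1000)                 # almost all benign...
labels[:5] = 1.0                           # ...only 5 violent examples

model = nn.Sequential(nn.Linear(512, 64), nn.ReLU(), nn.Linear(64, 1))
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(10):
    opt.zero_grad()
    logits = model(features).squeeze(1)
    loss = loss_fn(logits, labels)
    loss.backward()
    opt.step()

# With 5 positives out of 1,000, the model can drive the loss down by
# predicting "benign" for everything -- exactly the failure mode you get
# when, as Facebook puts it, "these events are thankfully rare".
```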
Computer-vision systems are also prone to false positives: it's difficult for software to discern whether gunfire comes from a real terrorist attack, an action movie, or a first-person shooter. So it will always be necessary to fall back on human moderators who can tell the difference, at least until Facebook can train its AI better.
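That human fallback usually takes the form of score-based triage: trust the classifier only at the extremes, and route the ambiguous middle to a person. The sketch below is hypothetical; the thresholds and category names are ours, not Facebook's.

```python
# Hypothetical triage logic: classifier scores are routed rather than
# trusted outright, since "real attack" and "action movie" gunfire
# overlap heavily. The thresholds below are invented.
def triage(violence_score: float) -> str:
    if violence_score > 0.98:      # near-certain: block immediately
        return "auto_remove"
    if violence_score > 0.60:      # ambiguous: a human decides
        return "human_review"
    return "allow"                 # low score: leave it up

assert triage(0.99) == "auto_remove"
assert triage(0.75) == "human_review"   # e.g. a first-person shooter clip
assert triage(0.10) == "allow"
```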
But even with thousands of people combing through flagged content, the video wasn’t removed until police in New Zealand directly reached out to Facebook to take it down, despite a user reporting the footage as inappropriate twelve minutes after the live broadcast ended. The footage was flagged up by the lone netizen as undesirable for “reasons other than suicide,” and for that reason, it appears to have been bumped down the moderators' list of priorities.
“As a learning from this, we are re-examining our reporting logic and experiences for both live and recently live videos in order to expand the categories that would get to accelerated review,” Facebook's Guy Rosen, veep of product management, explained.
Doctored videos are difficult to detect, visually and aurally
After the broadcast ended, the video was viewed another 4,000 times, and one or more users captured the footage and spread it to other sites, such as the imageboard 8chan. Australian and New Zealand telcos blocked access to that website and others after the attack to curb the spread of the material.
In the first 24 hours, Facebook's systems, once aware of the footage, managed to remove 1.2 million copies of the video as they were being uploaded to its platform. Even then, a further 300,000 videos made it past the filters, and were removed after they were posted.
This is because there are ways to skirt the algorithms: footage can be recut with slightly different frames, or re-encoded at varying quality levels, to evade detection. Facebook tried to identify the video by matching its audio, too, but that approach is probably not much more effective, since it's easy to strip out the original sound and replace it with something innocuous.
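Matching copies of a known video is typically attempted with perceptual hashes. Here is a toy average-hash in Python, using the Pillow imaging library, that shows the idea, and why a re-encoded copy can drift past any fixed matching threshold. The file names and threshold are placeholders; this is a sketch of the general technique, not Facebook's actual matching system.

```python
# Toy perceptual hash: shrink a frame to 8x8 grayscale, then set one
# bit per pixel depending on whether it is above the mean brightness.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# A recut, re-filtered, or re-cropped frame can push the distance past
# any fixed threshold, so the altered copy sails through the filter.
known = average_hash("original_frame.png")      # hash of the known video
upload = average_hash("reencoded_frame.png")    # hash of an altered copy
if hamming(known, upload) <= 10:                # threshold is arbitrary
    print("match: block upload")
else:
    print("no match: evades detection")
```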
“In total, we found and blocked over 800 visually-distinct variants of the video that were circulating,” Rosen said.
"This is different from official terrorist propaganda from organizations such as ISIS – which while distributed to a hard core set of followers, is not rebroadcast by mainstream media organizations and is not re-shared widely by individuals."
Facebook isn't giving up, however, and hopes to use some of that $22bn profit it banked last year to improve its “matching technology” so that it can “stop the spread of viral videos of this nature,” and react faster to live videos being flagged.
“What happened in New Zealand was horrific. Our hearts are with the victims, families and communities affected by this horrible attack,” Rosen concluded. ®