Google to annihilate online trolling with ... tra-la-la! Machine! Learning!
God, is there nothing artificial intelligence can't master? We all know you love it so much
Google and Jigsaw, an Alphabet incubee, hope to tackle online trolling with the launch of Perspective: a new online abuse-detecting service that uses machine learning to highlight “toxic comments.”
Late last year, a study by the US Center for Innovative Public Health Research found that 72 per cent of Americans over the age of 15 had witnessed online harassment, and 47 per cent had directly experienced it. It’s a big problem affecting 140 million people in the US, and many others across the globe. And Google thinks it can tackle it.
Perspective is aimed at publishers. It can be tedious for journalists and social media interns, er, managers to moderate comments (present company excepted, Reg commentards). So, with that in mind, the software takes care of the filthy job of reviewing comments, and rates them based on how harmful the language is.
Working together with The New York Times, Google and Jigsaw sifted through “hundreds of thousands” of comments to train the system to correlate negative words with a score of how unpleasant they are.
Type in something like “You’re a stupid idiot!” and the system will flash “98% similar to comments people said were ‘toxic’.” Apparently, Perspective is easy to defeat.
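For the curious, the scoring flow described above can be sketched roughly as follows. Perspective exposes a Comment Analyzer web API that takes a comment and returns a 0-to-1 toxicity probability; the request and response shapes below match its documented JSON format, but the API key, sample values, and the 0.9 moderation threshold are our own assumptions for illustration, not anything Google prescribes.

```python
import json

# Endpoint for Jigsaw's Comment Analyzer API; "YOUR_API_KEY" is a placeholder.
API_URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           "comments:analyze?key=YOUR_API_KEY")

def build_request(comment_text):
    """Build the JSON body asking Perspective to score TOXICITY."""
    return {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def extract_score(response_json):
    """Pull the 0-1 toxicity probability out of an API response."""
    return (response_json["attributeScores"]["TOXICITY"]
            ["summaryScore"]["value"])

def is_toxic(score, threshold=0.9):
    """Hypothetical moderation rule: hide anything above the threshold."""
    return score >= threshold

# A response shaped like the API's output, with an invented score that
# mirrors the article's "98% similar to toxic comments" example:
sample = {"attributeScores": {"TOXICITY": {"summaryScore": {"value": 0.98}}}}
print(is_toxic(extract_score(sample)))  # a 0.98 score clears our threshold
```

A publisher's moderation queue would POST `build_request(...)` to the endpoint and feed the returned score into whatever hide-or-flag policy it chooses; the threshold is entirely up to the site.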
Jared Cohen, Jigsaw’s president, said he’s hoping to extend the software to spot comments that are irrelevant to an article, such as spam ads.
“Our first model is designed to spot toxic language, but over the next year we’re keen to partner and deliver new models that work in languages other than English as well as models that can identify other perspectives, such as when comments are unsubstantial or off-topic,” he wrote in a blog post.
Interest in using machine learning and AI to help news publishers has risen. People were quick to point the finger at Mark Zuckerberg, Facebook’s CEO, for the spread of fake news. Zuckerberg reckons his network isn't to blame, but did say in his recent 6,000-word rant that his team is working on the problem and hopes to use AI to identify terrorist propaganda. Top tip, Mark, you need to invest more in machine learning.
Hundreds of developers from the AI community have also signed up to the Fake News Challenge, an open project that aims to award cash to the team able to write the best software for judging an article’s accuracy. ®