Online Q&A site Stack Overflow aspires to be "a welcoming and friendly place" and to make that so, the biz has deployed sentiment-sniffing code to catch unkind commentary lest it drive members of its online community away.
Founded in 2008, the Stack Overflow site depends on people posting questions and answering them, something they may not be inclined to do in the face of textual hostility. By April 2009, the site had introduced a system letting users flag comments to bring them to the attention of moderators, who review flagged comments and vote on further action, such as deletion.
In this, Stack Overflow is like a lot of websites that implement some form of content moderation.
But Stack Overflow has taken comment policing further still, with the introduction in July 2019 of Unfriendly Robot V1 (UR-V1), a jumble of code that implements a natural language processing (NLP) training method called Universal Language Model Fine-tuning (ULMFiT) to build a text classification model with the fastai library.
As described in a post by Stack Overflow developers Jason Punyon and Kevin Montrose, the text checking bot has been trained to recognize "unfriendly comments" using a dataset of comments flagged by site users rather than a specific definition.
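Stack Overflow hasn't published the bot's internals beyond the ULMFiT/fastai description, but the general shape of such a flag-then-review pipeline can be sketched in a few lines of Python. Everything below — the class names, the threshold, and the stand-in classifier — is an illustrative assumption, not the company's actual code:

```python
# Illustrative sketch of a bot-flagging pipeline like UR-V1's.
# In production the classifier would be a ULMFiT model trained with
# fastai on user-flagged comments; here a toy keyword heuristic
# stands in so the flow is runnable.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Comment:
    text: str
    score: float = 0.0          # model's unfriendliness probability
    bot_flagged: bool = False   # queued for moderator review?


def flag_comments(comments: List[Comment],
                  classify: Callable[[str], float],
                  threshold: float = 0.9) -> List[Comment]:
    """Score each comment; flag those at or above the threshold."""
    flagged = []
    for c in comments:
        c.score = classify(c.text)
        if c.score >= threshold:
            c.bot_flagged = True
            flagged.append(c)
    return flagged


# Stand-in for the real model: crude keyword matching.
def toy_classifier(text: str) -> float:
    rude = {"stupid", "useless", "idiot"}
    return 0.95 if any(w in text.lower() for w in rude) else 0.05


comments = [Comment("Great answer, thanks!"),
            Comment("This is a stupid question.")]
flagged = flag_comments(comments, toy_classifier)
print(len(flagged))  # 1: only the second comment is queued for review
```

Crucially, the bot's flags are only proposals — as the article notes, human moderators still review each one and decide whether to accept it.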
Feedback from the Stack Overflow community has found about 7 per cent of comments unfriendly or unwelcoming, a designation distinct from abuse or harassment.
"Our mission is to be a resource for all developers and technologists," explained Desiree Darilek, product manager at Stack Overflow, in an email to The Register. "Expanding our community so that Stack Overflow is accessible and inclusive of everyone is important to us, and reducing unfriendliness helps ensure that new and existing users have a positive experience."
UR-V1 and its successor, UR-V2, appear to be making headway in the effort to suppress bad vibes.
During its deployment, from July 11, 2019 until September 13, 2019, UR-V1 flagged 15,564 comments out of 1,715,693, about 0.9 per cent. Of these suspect comments, human moderators accepted 6,833 bot-proposed flags, a rate of about 43.9 per cent.
Compare that to the 4,870 comments flagged for unfriendliness by people, about 0.3 per cent. Of these, 2,942 were accepted by moderators, a rate of about 60.4 per cent: UR-V1 helped stop more than twice as many unkind comments as the human nice police, after moderator review, though it was significantly less accurate.
Its successor, UR-V2, entered service on September 13, 2019. Since then, there have been 4,251,723 comments posted to Stack Overflow and UR-V2 has flagged 35,341, about 0.8 per cent. Human moderators accepted 25,695 of those programmatic flags, or 72.7 per cent – an improvement in accuracy of almost 30 percentage points.
Stack Overflow users since then have flagged 11,810 comments as unfriendly, with moderators accepting 7,523 of those flags.
"UR-V2’s flags were accepted 14 per cent more often than human flags during this period, and it helped moderators remove 4.4× as many comments as human flaggers alone," explained Punyon and Montrose in their post.
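The headline figures above follow directly from the raw counts quoted in the Punyon/Montrose post. A quick arithmetic check, using only the numbers as reported:

```python
# Reproduce the acceptance-rate arithmetic from the article's figures.

def pct(accepted: int, flagged: int) -> float:
    """Acceptance rate as a percentage, rounded to one decimal place."""
    return round(100 * accepted / flagged, 1)

# UR-V1 era (July 11 - September 13, 2019)
assert pct(6833, 15564) == 43.9    # bot flags accepted by moderators
assert pct(2942, 4870) == 60.4     # human flags accepted by moderators

# UR-V2 era (since September 13, 2019)
bot_rate = pct(25695, 35341)       # 72.7
human_rate = pct(7523, 11810)      # 63.7

# "accepted 14 per cent more often" reads as a relative comparison:
relative_gain = round(100 * (bot_rate / human_rate - 1))

# "4.4x as many comments as human flaggers alone":
removal_multiple = round((25695 + 7523) / 7523, 1)

print(bot_rate, human_rate, relative_gain, removal_multiple)
```

This confirms the post's framing: the 14 per cent figure is relative (72.7 vs 63.7 per cent acceptance), and the 4.4× multiple counts bot-plus-human removals against human removals alone.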
Darilek said unfriendly comments showed up as a top concern in the company's Site Satisfaction survey. And she said the biz intends to look at its bot's impact on user satisfaction at a later date.
It should be noted that Stack Overflow wants to stop not only deliberate unkindness, but also unintended slights. And not everyone agrees with the biz's approach.
In a pair of comments on the Stack Overflow post, Mark Amery, a developer based in London, expressed skepticism that there's a way to encode polite speech and suggested doing so would punish people of different backgrounds.
"It's one thing to try to codify or automate detection of comments so bad that they should be deleted on sight, but it's quite another to try to codify the fuzzier and more contentious question of what's respectful, and it's another yet again to ask an authority with the power to censor and punish to issue such a codification by decree," he said. "We've gone over this ground again and again and it's caused nothing but conflict and bitterness; surely we should've learned by now?"
Last year, Stack Overflow faced a minor moderator rebellion over its removal of a popular moderator and controversial Code of Conduct changes, an affair that led to threatened litigation and multiple apologies.
Asked about those who take issue with its approach to moderation, Darilek said there has always been a wide range of perspectives in the community.
"Our policy for 10 years was simply 'Be nice,' and inappropriate comments were flagged by our users and handled by our moderators," said Darilek. "In August 2018, in collaboration with veteran and new users we expanded that 'Be nice' policy into our official Code Of Conduct which has been in effect ever since."
"Our official stance is that subtle put-downs and unfriendly language are unacceptable. That’s our standard and we’re going to hold people to it as best we can. In the past that’s been with human generated flags and now we’ve added machine learning to our toolset."
Asked whether anything is lost by flagging unfriendly comments and whether any effort has been made to look into whether optimizing for friendliness turns anyone away from the site, Darilek dismissed the idea.
"We have looked at content quality and user behavior, and have not seen any negative impact from comment flags," she said, noting that the company collects false positives, as determined by moderators, to improve its flagging system.
"Unfriendliness turns people away," said Darilek. "Our data shows that people who receive the comments flagged by the robot disengage at higher rates and take longer to come back and post again. Qualitatively, unfriendliness is a consistent top pain point we hear from the community." ®