Oxford profs tell Twitter, Facebook to take action against political bots
It's just the future of democracy at stake, no biggie
The use of algorithms and bots to spread political propaganda is "one of the most powerful tools against democracy", top academics have warned.
A team led by professors at the Oxford Internet Institute analysed tens of millions of posts on seven social media platforms in nine countries, including the US, Russia and Germany, during elections and political crises.
They were looking at computational propaganda, which is defined as the way algorithms, automation and human curation are used to purposefully distribute misinformation on social media networks.
The work concludes that the problem is real and widespread, with both governments and activists being responsible.
However, the report's authors add that, although social media firms may not be producing the content, they need to take action against it.
"Computational propaganda is now one of the most powerful tools against democracy," lead authors Samuel Woolley and Phil Howard write. "Social media firms may not be creating this nasty content, but they are the platform for it. They need to significantly redesign themselves if democracy is going to survive social media."
The analyses point out that how such propaganda is used differs depending on the system of government.
In authoritarian countries, the report said, social media platforms are a primary means of social control, while in democracies the platforms are used by a variety of actors to try to influence public opinion.
"Regimes use political bots, built to look and act like real citizens, in an effort to silence opponents and push official state messaging," the report said.
Meanwhile, "political campaigns, and their supporters, deploy political bots – and computational propaganda more broadly – during elections in attempts to sway the vote or defame critics" and run coordinated disinformation campaigns and troll opponents.
The techniques appear to be working – the network analysis of US social platforms showed that bots "reached positions of measurable influence during the 2016 US election".
By infiltrating cores and the "upper echelons of influence", the computational propaganda – and bots – had a "significant influence on digital communication" during the election.
According to the researchers, many social media platforms are "fully controlled by or dominated by governments and disinformation campaigns" – the Russia report found that 45 per cent of Twitter activity in the country is managed by highly automated accounts.
But Ukraine has "perhaps the most globally advanced case of computational propaganda", the academics said, with numerous campaigns having been waged on Facebook, Twitter and the Russian network VKontakte since the early 2000s.
There are also cases of authoritarian governments trying to influence the political agenda in other countries: Chinese-directed campaigns have targeted Taiwan, while Russian-directed ones have targeted Poland and Ukraine.
In contrast to these countries, Germany is taking an overtly cautious approach to dealing with computational propaganda.
"All of the major German parties have positioned themselves in the debate surrounding bots and committed to refrain from using them in campaigning," writes Lisa-Maria Neudert, author of the German report.
The country has taken regulatory measures within existing legal frameworks, while a new law that would hold social networks liable for computational propaganda on their platforms has been proposed.
However, she said that the debate "lacks conceptual clarity", with prevailing misconceptions and confusion about the terminologies used during discussions.
On top of analysing individual cases, the research looked at the broader challenges facing further investigation of computational propaganda.
These included "sleeper bots" – bot networks that post infrequently enough to fall below the formal activity threshold used to classify an account as a bot – and the fact that the people aiming to influence the agenda are getting wise to such analyses.
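The threshold the researchers describe is essentially a posts-per-day cutoff: accounts above it get flagged as highly automated, and "sleeper" accounts evade detection by staying just below it. A minimal sketch of such a heuristic – the threshold value, function name and data shape here are illustrative assumptions, not the report's actual methodology:

```python
from collections import Counter

# Assumed heuristic: an account posting at or above the threshold on a given
# day is flagged as "highly automated". Anything below slips past the filter,
# which is how low-activity "sleeper" accounts evade this kind of detection.
POSTS_PER_DAY_THRESHOLD = 50  # illustrative value only

def flag_highly_automated(posts, threshold=POSTS_PER_DAY_THRESHOLD):
    """posts: iterable of (account_id, date_string) tuples.
    Returns the set of account ids flagged as highly automated."""
    per_account_day = Counter((acct, day) for acct, day in posts)
    return {acct for (acct, day), n in per_account_day.items() if n >= threshold}

# Example: one noisy bot, one "sleeper" posting below the threshold
posts = [("bot_a", "2017-06-01")] * 60 + [("sleeper_b", "2017-06-01")] * 10
print(flag_highly_automated(posts))  # only bot_a is flagged
```

The obvious weakness – and the one the sleeper-bot finding exploits – is that any fixed frequency cutoff can be gamed by simply posting less often.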
"We have found that political actors are adapting their automation in response to our research," the report reads. "This suggests that the campaigners behind fake accounts and the people doing their 'patriotic programming' are aware of the negative coverage that this gets in the news media."
There has been much debate about the influence of fake news and political bots in recent campaigns, both at home and overseas, as well as the related issue of data protection.
Last month, the Information Commissioner's Office announced it was launching a probe into the way political parties used voters' personal information to run targeted campaigns. ®