Hillary Clinton may have the most human supporters among those running for the US presidency, but Donald Trump has an edge among automatons.
Pro-Trump Twitter hashtags from September 26 – the date of the first US presidential debate – through September 29 outnumbered pro-Clinton hashtags by about two to one, according to a study conducted by researchers from Corvinus University of Budapest, Oxford University, and the University of Washington.
About a third of the pro-Trump traffic originated from bots and automated accounts, compared to about a fifth of pro-Clinton traffic.
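Studies of this kind typically separate bots from humans with behavioral heuristics, the crudest being posting frequency: accounts that tweet far faster than a person plausibly could get flagged as automated. The sketch below illustrates that idea only – the threshold and sample accounts are invented here, not taken from the paper.

```python
from datetime import datetime, timedelta

# An assumed cutoff: accounts averaging more than this many tweets per
# day are flagged as likely automated. Illustrative, not the paper's value.
HIGH_FREQUENCY_THRESHOLD = 50

def looks_automated(timestamps, threshold=HIGH_FREQUENCY_THRESHOLD):
    """Flag an account whose average tweet rate exceeds the threshold."""
    if len(timestamps) < 2:
        return False
    span = max(timestamps) - min(timestamps)
    # Convert the observation window to days, avoiding division by zero
    days = max(span.total_seconds() / 86400, 1 / 86400)
    return len(timestamps) / days > threshold

# A human-paced account: four tweets spread across debate day
human = [datetime(2016, 9, 26, h) for h in (8, 12, 18, 22)]
# An automated account: one tweet per minute for two hours
bot = [datetime(2016, 9, 26, 9) + timedelta(minutes=m) for m in range(120)]

print(looks_automated(human))  # → False
print(looks_automated(bot))    # → True
```

Real classifiers combine many more signals (account age, retweet ratios, content similarity), but rate alone already separates these two toy accounts.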
The paper, "Bots and Automation over Twitter during the First U.S. Presidential Debate," by Bence Kollanyi, Philip N Howard, and Samuel C Woolley, was released through Political Bots, a project that aims to assess the effect of automated advocacy on public life.
The researchers say it's not clear whether bots have enough effect on public opinion to warrant oversight, but they contend such software has become a potent way to create the appearance of grassroots support.
A bot is software designed to automate a task. Such programs or scripts have long been used to interact with websites – sniping items on eBay, sending spam, or participating in distributed denial-of-service attacks.
But with the mainstream adoption of social media, the proliferation of social media APIs, and the growing number of frameworks for understanding audio, text, and images, it has become trivial to create an army of chatbots to convey a particular message and to interact with people in a way that may pass for interpersonal conversation.
Microsoft and Facebook have been promoting bot frameworks for marketing and entertainment. But bots turn out to be ideal for parroting political messages – they're always on-message and they're undeterred by inconvenient facts.
"Political actors and governments worldwide have begun using bots to manipulate public opinion, choke off debate, and muddy political issues," the paper explains.
In a phone interview with The Register, Samuel C Woolley, director of research for Political Bots, a fellow at the UW Tech Policy Lab, and a PhD student in the department of communication at the University of Washington, said automation in public discourse can be helpful for disseminating information and providing social scaffolding.
"But in this case, I think the difference is there's a lack of transparency," said Woolley. The bots, he said, "are mimicking citizens or voters. There's a subtle manipulation of public opinion here."
Woolley contends that many people don't realize that they may be interacting with bots. "When people find out ... they're really shocked this is going on," he said. "It seems to be a turnoff for people."
The risk for Twitter and other social media companies is that bots come to dominate online discourse and drive people away. Online harassment, driven by people as well as bots, has already affected Twitter participation among journalists and activists in Mexico and Turkey, said Woolley.
Woolley argues that the media needs to understand how bots can skew online measures such as polls, and to treat Twitter data with skepticism. "Twitter metrics are massively manipulated by bots," he said.
Observing that some bots can be funny or entertaining, Woolley said he doesn't want to see bots banned completely. "The burden is now on the platforms [to police bots], because the legislation isn't there," he said, noting that internet companies may have legal reasons for not wanting to police content.
But in a medium where participation generally translates into ad revenue, bots may continue to get a long leash. As Woolley put it, "Bots drive up metrics." ®