Twitter's algos favour tweets from conservatives over liberals because they generate more outrage online – study

Plus: Microsoft acquires an AI content moderation startup to prevent hate speech on the Xbox and more

In brief Twitter’s algorithms are more likely to amplify right-wing politicians than left-wing ones because their tweets generate more outrage, according to a trio of researchers from New York University’s Center for Social Media and Politics.

Last week, the social media platform’s ML Ethics, Transparency and Accountability (META) unit published research that showed users were more likely to see posts from right-wing elected officials across six countries - including the UK and the US - than their left-wing counterparts. Twitter said it didn’t know why its algorithms behaved this way.

Political scientists from NYU, however, have been conducting their own research into Twitter’s algorithms, and they believe the cause is that tweets from conservative politicians are more controversial and attract more attention. They analyzed retweet counts for tweets posted by Republican and Democratic members of Congress since January 2021 and found the same pattern Twitter’s engineers did.

“Why would Twitter’s algorithms promote conservative politicians? Our research suggests an unlikely but plausible reason: It’s because they get dunked on so much,” they wrote in an op-ed in the Washington Post. Twitter users are more likely to react to and retweet their posts, which means those posts are more likely to end up on people’s timelines.
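The underlying comparison is straightforward to reproduce in outline. Below is a minimal sketch of that kind of party-level engagement tally, assuming a hypothetical CSV of congressional tweets with party and retweet_count columns; the NYU team's actual dataset and pipeline are not described in this article.

```python
import pandas as pd

# Hypothetical input: one row per tweet from a member of Congress since
# January 2021. Column names here are assumptions, not the researchers' schema.
tweets = pd.read_csv("congress_tweets_2021.csv")  # columns: party, retweet_count

# Compare typical engagement per party: the claim is that tweets from
# Republican officials attract more retweets, and hence more amplification.
engagement = tweets.groupby("party")["retweet_count"].agg(["mean", "median", "count"])
print(engagement)
```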

Microsoft snaps up AI content moderation startup

Microsoft announced it has acquired Two Hat, a company focused on building automated tools to moderate content online, to prevent hate speech from spreading in communities on Xbox, Minecraft, and MSN.

The two companies have already been working together for a few years; the takeover amount was not disclosed. They'll work to incorporate and roll out Two Hat's tools across Microsoft's applications via the cloud, and Two Hat will continue to work with its existing customers under Microsoft.

"We understand the complex challenges organizations face today when striving to effectively moderate online communities," Dave McCarthy, corporate VP of Xbox Product Services, said in a statement. "In our ever-changing digital world, there is an urgent need for moderation solutions that can manage online content in an effective and scalable way."

"With this acquisition, we will help global online communities to be safer and inclusive for everyone to participate, positively contribute and thrive."

Is GitHub Copilot taking off?

Up to 30 per cent of new code uploaded to GitHub by developers was, for some languages, written with the help of its AI pair-programming tool Copilot, apparently.

It’s hard to gauge how popular Copilot is with users because the Axios report doesn't provide much detail. It’s unclear which coding languages were used most with Codex (the model underpinning Copilot), and the time period over which the code was submitted isn't obvious. Was it over the last month? Three months?

“We hear a lot from our users that their coding practices have changed using Copilot," Oege de Moor, VP of GitHub Next, said. "Overall, they're able to become much more productive in their coding."

Copilot works by suggesting lines of code as you type, like autocomplete trying to finish your sentences. It was built using OpenAI’s Codex model, a GPT-3-like transformer-based system, trained on billions of lines of code scraped from GitHub instead of text from the internet. It seems to be effective when developers are writing simple template blocks of code but struggles when scripts become more specialized.
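To illustrate the kind of simple template block it handles well, here is a hypothetical completion: the comment and signature are what a developer might type, and the body is the sort of boilerplate a Codex-style autocomplete would suggest. This is an illustrative sketch, not an actual Copilot transcript.

```python
def read_json_config(path: str) -> dict:
    """Load a JSON config file and return it as a dictionary."""
    # The body below is the kind of routine suggestion a Codex-style
    # model produces for a well-worn pattern like this (illustrative only).
    import json
    with open(path) as f:
        return json.load(f)
```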

Intel’s Gaudi chips now available via AWS

Intel’s AI training chips (known as Gaudi) that were built by Habana Labs, the Israeli startup biz Chipzilla acquired in 2019, are now generally available on AWS as a new type of cloud instance.

These DL1 instances run on eight Gaudi accelerators providing 256GB of high-bandwidth memory and 768GB of onboard memory, working in tandem with custom 2nd-generation Intel Xeon Scalable (Cascade Lake) processors. They also include 400Gbps of networking throughput and up to 4TB of local NVMe storage.
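Spinning one up looks like any other EC2 launch. Here's a minimal boto3 sketch: dl1.24xlarge is the instance type AWS lists for Gaudi, but the AMI ID and key pair below are placeholders you'd swap for a real Habana deep learning AMI and your own key.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # DL1 lives in US East/West

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder: use a real Habana DL AMI
    InstanceType="dl1.24xlarge",      # eight Gaudi accelerators per instance
    KeyName="my-key-pair",            # placeholder key pair name
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```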

Running these DL1 instances to train AI models and whatnot will set you back about $13 per hour if you’re in the US East or US West regions.

“The use of machine learning has skyrocketed. One of the challenges with training machine learning models, however, is that it is computationally intensive and can get expensive as customers refine and retrain their models,” said David Brown, vice president of Amazon EC2 at AWS.

“The addition of DL1 instances featuring Gaudi accelerators provides the most cost-effective alternative to GPU-based instances in the cloud to date. Their optimal combination of price and performance makes it possible for customers to reduce the cost to train, train more models, and innovate faster.” ®
