Twitter's algos favour tweets from conservatives over liberals because they generate more outrage online – study

Plus: Microsoft acquires an AI content moderation startup to prevent hate speech on Xbox, and more


In brief Twitter’s algorithms are more likely to amplify right-wing politicians than left-wing ones because their tweets generate more outrage, according to a trio of researchers from New York University’s Center for Social Media and Politics.

Last week, the social media platform’s ML Ethics, Transparency and Accountability (META) unit published research showing users were more likely to see posts from right-wing elected officials than from their left-wing counterparts across six countries, including the UK and the US. Twitter said it didn’t know why its algorithms behaved this way.

Political scientists from NYU, however, have been conducting their own research into Twitter’s algorithms, and they believe the reason is that tweets from conservative politicians are more controversial and attract more attention. They analyzed retweets of tweets posted by Republican and Democratic members of Congress since January 2021 and found the same pattern Twitter’s engineers did.

“Why would Twitter’s algorithms promote conservative politicians? Our research suggests an unlikely but plausible reason: It’s because they get dunked on so much,” they wrote in an op-ed in the Washington Post. Twitter users are more likely to react to and retweet these politicians’ posts, which means those posts are more likely to end up on people’s timelines.
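
For illustration, here is a minimal sketch of the party-level engagement comparison the researchers describe. The dataframe, column names, and figures are hypothetical, not the NYU team's actual data or code.

```python
import pandas as pd

# Hypothetical sample: one row per tweet by a member of Congress since January 2021.
tweets = pd.DataFrame({
    "party":    ["R", "D", "R", "D", "R", "D"],
    "retweets": [5200, 310, 9100, 4400, 870, 150],
})

# Compare typical engagement per party; systematically higher counts on one
# side are the kind of signal an engagement-driven ranking algorithm amplifies.
print(tweets.groupby("party")["retweets"].agg(["mean", "median"]))
```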

Microsoft snaps up AI content moderation startup

Microsoft announced it has acquired Two Hat, a company focused on building automated tools to moderate content online, to prevent hate speech from spreading across communities on Xbox, Minecraft, and MSN.

Both companies have already been working together for a few years; the takeover amount was not disclosed. The pair will incorporate and roll out Two Hat's moderation tools across Microsoft's cloud-based applications. Two Hat will continue to serve its existing customers under Microsoft.

"We understand the complex challenges organizations face today when striving to effectively moderate online communities," Dave McCarthy, corporate VP of Xbox Product Services, said in a statement. "In our ever-changing digital world, there is an urgent need for moderation solutions that can manage online content in an effective and scalable way."

"With this acquisition, we will help global online communities to be safer and inclusive for everyone to participate, positively contribute and thrive."

Is GitHub Copilot taking off?

Up to 30 per cent of new code uploaded to GitHub in some languages was written with the help of its AI pair-programming tool Copilot, according to a report from Axios.

It’s hard to gauge how popular Copilot is with users, though, because the Axios report doesn't provide much detail. It’s unclear which coding languages are used the most with Codex (the model underpinning Copilot), and the time period in which the code was submitted isn't given. Was it over the last month? Three months?

“We hear a lot from our users that their coding practices have changed using Copilot," Oege de Moor, VP of GitHub Next, said. "Overall, they're able to become much more productive in their coding."

Copilot works by suggesting lines of code as you type, much as autocomplete tries to finish your sentences. It was built using OpenAI’s Codex model, a GPT-3-like transformer-based system trained on billions of lines of code scraped from GitHub instead of text from the internet. It seems to be effective when developers are writing simple, template-like blocks of code, but it struggles when scripts become more specialized.
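
As an illustration of that strength, the snippet below shows the sort of routine, boilerplate function such a tool tends to complete from a short docstring. It is a hand-written example, not actual Copilot output.

```python
import csv

def load_rows(path):
    """Read a CSV file and return its rows as a list of dicts."""
    # Template code like this is where autocomplete-style suggestion shines;
    # more specialized, domain-specific logic is where it tends to struggle.
    with open(path, newline="") as f:
        return list(csv.DictReader(f))
```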

Intel’s Gaudi chips now available via AWS

Gaudi, Intel’s family of AI training chips built by Habana Labs, the Israeli startup biz Chipzilla acquired in 2019, is now generally available on AWS as a new type of cloud instance.

These DL1 instances run on eight Gaudi accelerators providing 256GB of high-bandwidth memory and 768GB of onboard memory, and work in tandem with custom second-generation Intel Xeon Scalable (Cascade Lake) processors. They also include 400Gbps of networking throughput and up to 4TB of local NVMe storage.

Renting these DL1 instances to train AI models and whatnot will set you back about $13 an hour if you’re in the US East or US West regions.
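
For a sense of what spinning one up looks like in practice, here is a minimal boto3 sketch, assuming valid AWS credentials; the AMI ID is a placeholder rather than a real image:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # a region where DL1 is offered

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder: substitute a Habana-enabled AMI
    InstanceType="dl1.24xlarge",      # eight Gaudi accelerators, 768GB of RAM
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```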

“The use of machine learning has skyrocketed. One of the challenges with training machine learning models, however, is that it is computationally intensive and can get expensive as customers refine and retrain their models,” said David Brown, vice president of Amazon EC2 at AWS.

“The addition of DL1 instances featuring Gaudi accelerators provides the most cost-effective alternative to GPU-based instances in the cloud to date. Their optimal combination of price and performance makes it possible for customers to reduce the cost to train, train more models, and innovate faster.” ®

Other stories you might like

  • New audio server PipeWire coming to next version of Ubuntu
    What does that mean? Better latency and a replacement for PulseAudio

    The next release of Ubuntu, version 22.10, codenamed Kinetic Kudu, will switch audio servers to the relatively new PipeWire.

    Don't panic. As J M Barrie said: "All of this has happened before, and it will all happen again." Fedora switched to PipeWire in version 34, over a year ago now. Users who aren't pro-level creators or editors of sound and music on Ubuntu may not notice the planned change.

    Currently, most editions of Ubuntu use the PulseAudio server, which it adopted in version 8.04 Hardy Heron, the company's second LTS release. (The Ubuntu Studio edition uses JACK instead.) Fedora 8 also switched to PulseAudio. Before PulseAudio became the standard, many distros used ESD, the Enlightened Sound Daemon, which came out of the Enlightenment project, best known for its desktop.
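
    For the curious, one quick way to see which audio server a session is using is to query it via pactl, assuming that utility (from PulseAudio's tooling, which also talks to PipeWire's compatibility server) is installed; a minimal Python wrapper:

    ```python
    import subprocess

    # Ask the PulseAudio-compatible server to identify itself.
    info = subprocess.run(["pactl", "info"], capture_output=True, text=True).stdout
    server = next((line for line in info.splitlines() if line.startswith("Server Name")),
                  "Server Name: unknown")
    # On a PipeWire system this prints something like:
    #   Server Name: PulseAudio (on PipeWire 0.3.x)
    print(server)
    ```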

  • VMware claims 'bare-metal' performance on virtualized GPUs
    Is... is that why Broadcom wants to buy it?

    The future of high-performance computing will be virtualized, VMware's Uday Kurkure has told The Register.

    Kurkure, the lead engineer for VMware's performance engineering team, has spent the past five years working on ways to virtualize machine-learning workloads running on accelerators. Earlier this month his team reported "near or better than bare-metal performance" for Bidirectional Encoder Representations from Transformers (BERT) and Mask R-CNN — two popular machine-learning workloads — running on virtualized GPUs (vGPU) connected using Nvidia's NVLink interconnect.

    NVLink enables compute and memory resources to be shared across up to four GPUs over a high-bandwidth mesh fabric operating at 6.25GB/s per lane compared to PCIe 4.0's 2.5GB/s. The interconnect enabled Kurkure's team to pool 160GB of GPU memory from the Dell PowerEdge system's four 40GB Nvidia A100 SXM GPUs.
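
    Plugging the article's figures into a quick back-of-the-envelope script shows the per-lane advantage and the pooled memory:

    ```python
    # Figures as quoted in the article; actual link rates vary by generation.
    nvlink_per_lane_gbs = 6.25  # GB/s per NVLink lane
    pcie4_per_lane_gbs = 2.5    # GB/s per PCIe 4.0 lane
    print(f"NVLink per-lane advantage: {nvlink_per_lane_gbs / pcie4_per_lane_gbs:.1f}x")

    gpus, mem_per_gpu_gb = 4, 40  # four 40GB Nvidia A100 SXM GPUs
    print(f"Pooled vGPU memory: {gpus * mem_per_gpu_gb}GB")  # 160GB, as reported
    ```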

  • Nvidia promises annual updates across CPU, GPU, and DPU lines
    Arm one year, x86 the next, and always faster than a certain chip shop that still can't ship even one standalone GPU

    Computex Nvidia's push deeper into enterprise computing will see its practice of introducing a new GPU architecture every two years brought to its CPUs and data processing units (DPUs, aka SmartNICs).

    Speaking in the company's pre-recorded keynote released to coincide with the Computex exhibition in Taiwan this week, senior vice president for hardware engineering Brian Kelleher spoke of the company's "reputation for unmatched execution on silicon." That's language that needs to be considered in the context of Intel, an Nvidia rival, again delaying its planned entry into the discrete GPU market.

    "We will extend our execution excellence and give each of our chip architectures a two-year rhythm," Kelleher added.

  • Amazon puts 'creepy' AI cameras in UK delivery vans
    Big Bezos is watching you

    Amazon is reportedly installing AI-powered cameras in delivery vans to keep tabs on its drivers in the UK.

    The technology was first deployed in the US, where malfunctions reportedly denied drivers their bonuses in error. Last year, the internet giant produced a corporate video detailing how the cameras monitor drivers' behavior on the road for safety reasons. The same system is now apparently being rolled out to vehicles in the UK.

    Multiple camera lenses are placed beneath the rearview mirror. One is directed at the person behind the wheel, one faces the road, and two are located on either side to provide a wider view. The cameras are monitored by software built by Netradyne, a computer-vision startup focused on driver safety. This code uses machine-learning algorithms to figure out what's going on in and around the vehicle.

  • AWS puts latest homebrew ‘Graviton 3’ Arm CPU in production
    Just one instance type for now, but cheaper than third-gen Xeons or EPYCs

    Amazon Web Services has made its latest homebrew CPU, the Graviton3, available to rent in its Elastic Compute Cloud (EC2) infrastructure-as-a-service offering.

    The cloud colossus launched Graviton3 at its late-2021 re:Invent conference, revealing that the 55-billion-transistor device includes 64 cores, runs at a 2.6GHz clock speed, can address DDR5 RAM with up to 300GB/sec of memory bandwidth, and employs 256-bit Scalable Vector Extensions.

    The chips were offered as a tech preview to select customers. And on Monday, AWS made them available to all comers in a single instance type named C7g.
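
    To find out where those C7g instances can actually be rented, one can ask the EC2 API which regions offer the type; a minimal boto3 sketch, using c7g.xlarge as an assumed example size:

    ```python
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # List offerings for the Graviton3-based C7g family in this region.
    offerings = ec2.describe_instance_type_offerings(
        LocationType="region",
        Filters=[{"Name": "instance-type", "Values": ["c7g.xlarge"]}],
    )
    for offer in offerings["InstanceTypeOfferings"]:
        print(offer["InstanceType"], "available in", offer["Location"])
    ```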

