Do Facebook's algorithms drive political polarization? Meta says no, but researchers say it's complicated

Plus: Google DeepMind's latest visual-language model robot, and more

AI in brief Four research papers this week concluded that users' political beliefs and behavior don't seem to be all that impacted by information amplified by Facebook's algorithms.

But does that mean the social media platform doesn't contribute to political divisiveness?

Meta believes so. "Although questions about social media's impact on key political attitudes, beliefs, and behaviors are not fully settled, the experimental findings add to a growing body of research showing there is little evidence that key features of Meta's platforms alone cause harmful 'affective' polarization or have meaningful effects on these outcomes," its president of Global Affairs, former UK politico Nick Clegg, claimed.

But the researchers don't quite agree. "The conclusions of these papers don't support all of those statements," said Talia Stroud, a communications professor at the University of Texas at Austin. Stroud, who is helping lead a team of academics probing anonymized Facebook data taken in 2020, was speaking to the Wall Street Journal, and added that Clegg's conclusions were "not the statement we would make."

The papers, published in Nature and Science alongside two other studies, show that people's political opinions aren't generally swayed by political news being posted and shared on Facebook.

A total of 16 papers are to be released on the subject. "The research published in these papers won't settle every debate about social media and democracy, but we hope and expect it will advance society's understanding of these issues," Clegg added.

Making robots more robust with vision-language models

Google DeepMind unveiled Robotic Transformer 2 (RT-2), a system that merges computer vision, natural language processing, and robotics to make robots more robust.

Robots running RT-2 can complete a narrow set of predefined tasks, such as picking objects up and moving them around, but they can carry those tasks out in a more generalized way. The vision-language model (VLM) lets the gripper handle new objects and interpret commands it hasn't explicitly come across in its training data.

Robotic data trains the robot's movements so it can manipulate objects, while the VLM lets it recognize objects it hasn't seen before and understand instructions like "pick up the bag about to fall off the table" or "move the banana to the sum of two plus one."

"RT-2 shows improved generalization capabilities and semantic and visual understanding beyond the robotic data it was exposed to. This includes interpreting new commands and responding to user commands by performing rudimentary reasoning, such as reasoning about object categories or high-level descriptions," DeepMind explains in a blog post

RT-2 works a little like a large language model, but instead of predicting the next word in a sentence, it predicts the next action the robot should take.
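To give a flavor of that idea, here's a minimal toy sketch of the decoding loop in a vision-language-action model: the model consumes image and instruction tokens and emits discrete action tokens one at a time, much as a language model emits words. This is not DeepMind's code, and every name in it (ToyVLA, ACTION_BINS, plan_action) is invented purely for illustration.

```python
# Toy sketch of the RT-2 idea: actions are discretized into tokens and
# predicted autoregressively, like next-word prediction in an LLM.
# All names here are hypothetical -- this is not DeepMind's implementation.

from dataclasses import dataclass
import random

ACTION_BINS = 256  # e.g. gripper x/y/z deltas and rotation, each discretized into bins


@dataclass
class ToyVLA:
    """Stand-in for a vision-language-action model."""
    vocab_size: int = ACTION_BINS

    def next_action_token(self, image_tokens, instruction_tokens, action_so_far):
        # A real model would run a transformer over image + text + action tokens;
        # here we just return a deterministic pseudo-random bin to show the interface.
        context = list(image_tokens) + list(instruction_tokens) + list(action_so_far)
        random.seed(len(context))
        return random.randrange(self.vocab_size)


def plan_action(model, image_tokens, instruction_tokens, action_len=7):
    """Decode a fixed-length action (e.g. 6-DoF pose delta plus gripper) token by token."""
    action = []
    for _ in range(action_len):
        action.append(model.next_action_token(image_tokens, instruction_tokens, action))
    return action


if __name__ == "__main__":
    model = ToyVLA()
    image = [12, 87, 3]        # pretend visual tokens from the robot's camera
    instruction = [5, 42, 9]   # pretend text tokens for "pick up the bag"
    print(plan_action(model, image, instruction))
```

The point of the sketch is simply the interface: swap the word vocabulary for a vocabulary of action bins, and the same autoregressive machinery produces robot commands instead of sentences.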

Meta on that 'open-source' language model

Regular Register readers will know we've highlighted concerns about the community license for Meta's Llama 2 large language model. It is billed as open source but there are restrictions on its use. For example, if you have 700 million or more monthly active users of applications powered by Llama 2, you'll need a special license from Meta.

People did wonder if this was an attempt to extract money from big cloud players and others who are hoping to create huge AI-powered platforms. Well, wonder no more. That's exactly why the clause is there.

"If you're someone like Microsoft, Amazon or Google, and you're going to basically be reselling the services, that's something that we think we should get some portion of the revenue for," Meta's CEO Mark Zuckerberg said during the company's latest earnings call this week.

In other words, the model is technically free for developers to use but not for major cloud providers, who make money when users tap into their computing resources to access it.

Although the software can be used for research and commercial purposes, Meta's license has been criticized by developers for not being truly open source. Releasing a more open model helps the company attract developers to its software over that of competitors like OpenAI and Google, and now it gets to make money off it, too.

"I don't think that that's going to be a large amount of revenue in the near-term, but over the long term, hopefully that can be something," Zuckerberg said. ®
