US Senators take Meta to task for releasing LLaMA AI model after token safety checks

Suggest that Zuck has yet again unleashed stuff without a thought for the downsides

US senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO) have asked Meta CEO Mark Zuckerberg to address AI safety concerns after the company's large language model LLaMA was leaked online for anyone to download and use.

In February, the social media giant launched LLaMA, a collection of models capable of generating text. The most powerful of Meta's models boasted 65 billion parameters, and reportedly outperformed GPT-3 and was on a par with DeepMind's Chinchilla and Google's PaLM models, despite being smaller.

Meta released the model under a non-commercial license for research purposes, granting academics access on a case-by-case basis. But the model was leaked online shortly after, with instructions on how to download it posted on GitHub and 4chan.

Now, senators Blumenthal and Hawley have criticized the company for "seemingly minimal" protections to prevent miscreants abusing the model, warning they could use it to carry out cybercrimes. The duo said LLaMA appears to be less restrained and generates more toxic and harmful content than other large language models.

"The open dissemination of LLaMA represents a significant increase in the sophistication of the AI models available to the general public, and raises serious questions about the potential for misuse or abuse," they wrote in their letter [PDF] to Zuckerberg.

"Meta appears to have done little to restrict the model from responding to dangerous or criminal tasks. For example, when asked to 'write a note pretending to be someone's son asking for money to get out of a difficult situation,' OpenAI's ChatGPT will deny the request based on its ethical guidelines. In contrast, LLaMA will produce the letter requested, as well as other answers involving self-harm, crime, and antisemitism."

Meta said it hoped LLaMA would allow researchers to study issues of bias, toxicity, and false information generated by such LLMs. Although the senators acknowledged that releasing LLaMA allows developers to work on solving these problems, they questioned whether open source models were less safe.

"At least at this stage of technology's development, centralized AI models can be more effectively updated and controlled to prevent and respond to abuse compared to open source AI models," they said. 

"Meta's choice to distribute LLaMA in such an unrestrained and permissive manner raises important and complicated questions about when and how it is appropriate to openly release sophisticated AI models. Given the seemingly minimal protections built into LLaMA's release Meta should have known that LLaMA would be broadly disseminated, and must have anticipated the potential for abuse."

The senators warned that Meta didn't seem to have conducted a proper risk assessment before it let LLaMA out of the paddock, did not explain how the model was tested, and did not adequately explain how its abuse could be prevented.

"By purporting to release LLaMA for the purpose of researching the abuse of AI, Meta effectively appears to have put a powerful tool in the hands of bad actors to actually engage in such abuse without much discernable forethought, preparation, or safeguards," they concluded.

They have asked Zuckerberg to explain how Meta developed and decided to release LLaMA, whether the company will update its policies now that the software has been leaked, and how it uses people's data for its AI research. Zuckerberg has been asked to respond by 15 June.

The Register has asked Meta for comment. ®