Cause for a LLaMA? Meta reckons its smaller text-emitting AI is better than rivals
Plus: Maker of wacky Lensa AI app sued over photo collection, Supreme Court debates chatbots
In brief Meta has released its Large Language Model Meta AI, torturously dubbed LLaMA, which it claims performs just as well as, if not better than, similar systems containing billions more parameters.
Large language models are all the rage right now. Microsoft and Google, to name two, are competing to build the best models to boost their search engines. These systems are general purpose, though, and have been applied to tasks beyond web search, such as studying proteins or discovering drugs.
These models are computationally intensive, making them challenging and costly for developers to train and run, and difficult for researchers to study.
LLaMA, however, is smaller, topping out at 65 billion parameters, which could make it easier to use. It's just as powerful as, if not more powerful than, other leading language models, Meta's chief AI scientist Yann LeCun claimed, including Google's 540-billion-parameter PaLM.
And speaking of other models, Meta argued those systems are largely kept out of reach, so people can't properly peer into and scrutinize them, which holds back the understanding, improvement, and evaluation of machine-learning software.
"This restricted access has limited researchers' ability to understand how and why these large language models work, hindering progress on efforts to improve their robustness and mitigate known issues, such as bias, toxicity, and the potential for generating misinformation," Meta – which knows all about toxicity and misinformation – said on Friday.
Meta instead plans to make its tech a little more available for people to study: it says it will release its language model under a non-commercial license to developers for research purposes on a case-by-case basis, and has published a technical paper describing the technology.
"We believe that the entire AI community — academic researchers, civil society, policymakers, and industry — must work together to develop clear guidelines around responsible AI in general and responsible large language models in particular. We look forward to seeing what the community can learn — and eventually build — using LLaMA," Meta concluded.
Lensa AI app creator sued in BIPA class-action lawsuit
The maker of the viral AI avatar app Lensa broke Illinois' Biometric Information Privacy Act by collecting and storing people's photos without their consent, a class-action lawsuit against the biz alleges.
Lensa AI skyrocketed to the top of app stores late last year as people took to creating AI-generated pictures based on their own selfies. Millions have downloaded the app to use Magic Avatar, a feature that uses AI algorithms to produce images portraying people's faces in a cartoonish and colorful way.
While this may seem like a bit of harmless fun, a group of residents of the US state of Illinois claim the company that built the app, Prisma Labs, collected and stored images from its users without their explicit permission, and used them to train its AI systems. To create a Magic Avatar, people had to give the company not only eight different photographs but also access to their whole photo library.
"The plaintiffs in this lawsuit, and millions of others like them, unwittingly provided their facial biometrics to Prisma Labs when they downloaded the Lensa app," said Tom Hanson, an attorney at Loevy & Loevy representing the plaintiffs. "A person's facial geometry is like their fingerprint: it is an immutable, unique identifier that deserves the highest degree of protection the law can afford."
The Register has asked Prisma Labs for comment.
Running Bing and Bard will cost Microsoft and Google billions
AI-powered search chatbots are eye-wateringly expensive to run, and will likely cost Google and Microsoft billions of dollars in compute. MIPS chip godfather John Hennessy, who today is chairman of Google's parent biz Alphabet, told Reuters that answering a web search using one of these chatbots could cost ten times more than handling a standard query typed into a search box.
Analysts at Morgan Stanley estimated that if Google's upcoming Bard chatbot were to process half of the 3.3 trillion search queries the company typically receives in one year, it'd cost the company $6 billion in operating expenses to generate 50-word responses to each question.
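For a sense of scale, here's a quick back-of-the-envelope sketch of the per-query cost those figures imply. It simply rearranges the analysts' numbers quoted above, so treat it as an illustration rather than a real cost model:

```python
# Back-of-the-envelope arithmetic for the Morgan Stanley estimate above.
# Every figure here is an analyst assumption quoted in the story,
# not a measured cost.

annual_queries = 3.3e12   # Google's typical yearly search volume
chatbot_share = 0.5       # fraction handled by Bard in the scenario
extra_cost_usd = 6e9      # projected added operating expense

chatbot_queries = annual_queries * chatbot_share
cost_per_query = extra_cost_usd / chatbot_queries

print(f"Queries handled by the chatbot: {chatbot_queries:.2e} per year")
print(f"Implied extra cost per query:   ${cost_per_query:.4f}")
# => about $0.0036, roughly a third of a US cent per 50-word response,
#    under these assumptions.
```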
Microsoft is no doubt also feeling the pinch with its new Bing model. The company's chief financial officer, Amy Hood, however, said the cost is worth it since Bing attracts users and advertising revenue. "That's incremental gross margin dollars for us, even at the cost to serve that we're discussing," she said.
Voice-based bank login cracked using AI-cloned audio
Security experts are sounding the alarm over voice-based authentication for private accounts, now that AI can convincingly clone the sound of someone's voice, allowing miscreants to log into those profiles.
In an experiment, Vice reporter Joseph Cox got into his own online bank account by playing audio clips generated by AI software that mimicked his voice, as demonstrated in the video below...
New: we proved it could be done. I used an AI replica of my voice to break into my bank account. The AI tricked the bank into thinking it was talking to me. Could access my balances, transactions, etc. Shatters the idea that voice biometrics are foolproof https://t.co/YO6m8DIpqR pic.twitter.com/hsjHaKqu2E
— Joseph Cox (@josephfcox) February 23, 2023
He gained access to his own account without actually using his real voice, relying on off-the-shelf tech. It's now easy and cheap to clone people's voices with AI, especially if there are plenty of audio samples with which to fine-tune the software. Celebrities, politicians, and other public figures, whose recorded speech is abundant, could be particularly easy to target this way.
Rachel Tobac, CEO of social-engineering-focused firm SocialProof Security, told Vice's Motherboard: "I recommend all organizations leveraging voice 'authentication' switch to a secure method of identity verification, like multi-factor authentication, ASAP." She warned that voice cloning attacks can be "completed without ever needing to interact with the person in real life."
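For the curious, here's a minimal sketch of one common second factor, a time-based one-time password (TOTP) check, built with the open source pyotp library. It's offered purely as an illustration of the kind of verification Tobac has in mind, not anything she or Vice published:

```python
# Minimal TOTP sketch using the pyotp library (pip install pyotp).
# A real deployment would pair this with another factor and store
# the per-user secret securely server-side.
import pyotp

# Generated once when the user enrolls, and shared with their
# authenticator app (usually via a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The user's app derives the same six-digit code from the shared
# secret and the current 30-second time window.
code_from_user = totp.now()  # stand-in for the code the user submits

# Server-side verification; valid_window=1 tolerates slight clock drift.
if totp.verify(code_from_user, valid_window=1):
    print("Second factor accepted")
else:
    print("Second factor rejected")
```

Unlike a voice print, the shared secret never appears in public recordings, which is why a code like this can't be cloned from YouTube clips the way Cox's voice was.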
AI chatbots versus Section 230
The high-profile US Supreme Court case of Gonzalez vs Google has raised a new question: would AI chatbots be protected under America's Section 230 law?
Section 230, in the vast majority of cases, protects online platforms from being legally responsible for content posted by their users. If a user asks an AI chatbot a question and it replies saying something legally dubious, could its creators be sued?
Justice Neil Gorsuch brought this up during a hearing for the ongoing Gonzalez case.
"Artificial intelligence generates poetry," he said. "It generates polemics today that would be content that goes beyond picking, choosing, analyzing, or digesting content. And that is not protected. Let's assume that's right. Then the question becomes, what do we do about recommendations?"
The case continues. ®