This article is more than 1 year old

If you're cautious about using ML and bots at work, that's not a bad idea

Alex Stamos: 'We don't really know what's gonna go wrong with AI yet'

DataGrail Summit Generative AI is uncharted territory, and those wishing to explore it need to be aware of the dangers, privacy shop DataGrail warned at its summit this week in San Francisco.

Pretty much every business using technology is facing mounting pressure to exploit GenAI – and many fear that if they fall behind the trend, they could lose to more innovative competitors who have built some tool or product that will give them a competitive advantage. But along with its benefits generative AI introduces all sorts of issues that businesses don't yet understand or know how to solve.

Large language models can divulge sensitive, private information swept up in their training data, for example. In order to perform tasks like answering queries, summarizing documents, or writing code, they have to ingest huge amounts of content. 

Law firms, banks, or hospitals will be training models on confidential data containing personal identifiable information, financial details, and health records. How can they protect customer information and their own proprietary intellectual property from accidentally leaking?

A developer at Samsung actually fed ChatGPT proprietary source code hoping it could help to find bugs. Executives at Apple, JPMorgan Chase, and Amazon have since banned workers from using ChatGPT and tools like it internally over concerns that the software's creator OpenAI could train on their data and expose their trade secrets.

"It's a little bit like the early days of software security," Alex Stamos, co-founder of the Krebs Stamos Group, a cybersecurity consultancy, and former chief security officer at Facebook, told The Register at the summit.

"You know, in the early 2000s, you'd have companies that had a line of business apps or even product teams with no centralized security control.

"And we're kind of back there with AI. If you ask the board of directors 'what do you think your risks are?' they have no idea. Or if they ask their executives to tell them what's going on, they can almost never get an accurate answer."

Developers of generative AI models do offer a hope of slightly better security and privacy for enterprises – OpenAI promised to encrypt and not train on text in conversations between its chatbot and ChatGPT Enterprise customers, for example. But this doesn't solve other types of problems with the technology, such as its tendency to generate false information – a phenomenon known as "hallucination." 

Joshua Browder, CEO of DoNotPay – a startup that has developed a so-called "robot lawyer" to automatically help consumers do things like contest parking tickets by using AI to draft emails or official complaints – said factual errors will make some services ineffective.

"Who's responsible for the hallucinations? I think that's the biggest thing," he told The Register. "Imagine if you're trying to appeal a charge on your internet bill and [the AI] said there were five service outages when there weren't. If you're using a tool and it lies, who's legally responsible for that? Is it the model provider? Or is it a company using it? Is it the consumer?"

Worse still, chatbots could end up spewing inappropriate responses that could be toxic or biased, which could damage the reputations of businesses using them for customer interaction. Safety measures like content filters can block swear words or harmful requests, but they don't always work.

"We don't really know what's gonna go wrong with AI yet," Stamos said. "It's like the '90s, when the basic vulnerabilities that exist in software weren't even discovered yet. We would have these situations where every year there'd be a BlackHat talk or one at DEF CON, that would totally revolutionize the industry, because all of a sudden, one person's research into one kind of flaw was found to be applicable to thousands of different products.

"And that's where we are with AI that you can come up with a new way to manipulate the system to do something harmful or break its promises and you can't prepare for that, because nobody knew that that kind of problem existed before.

"The other problem is these systems are so incredibly complicated – very few people understand how they actually work. These things are so non-deterministic, and so complex, that trying to predict what bad things can happen is impossible."

Finally, there are other risks in using generative AI – like copyright concerns. Many tools are trained on data scraped from the internet, and content creators are afraid they could get sued for generating content based on protected works – like books, songs, or art. Earlier this month, Microsoft declared it would defend paying customers if they face any copyright lawsuits for using its Copilot tools.

Content generated by AI cannot be legally protected under current copyright laws in the US. Can a developer patent an invention if it contains code that was generated by a large language model? Coca-Cola recently released a soft drink flavor that was reportedly created by AI. The recipe is a secret – but if another company were able to replicate it, it would it be OK to sell?

These conundrums are difficult to solve. There are too many unknowns right now, and they touch on different departments within organizations that might not be used to working together – like legal, compliance, marketing, sales, and IT. The important thing is to be aware that the hazards are out there – and be ready when they emerge. ®

More about

TIP US OFF

Send us news


Other stories you might like