If scammers use your AI code to rip off victims, the FTC may want a word

A good watchdog does blame the tools, or something like that

America's Federal Trade Commission has warned it may crack down not only on companies that use generative AI tools to scam folks, but also on those making the software in the first place, even if those applications were not created with fraud in mind.

Last month, the watchdog tut-tutted at developers and hucksters overhyping the capabilities of their "AI" products. Now the US government agency is wagging its finger at those using generative machine-learning tools to hoodwink victims into parting with their cash and suchlike, as well as at the people who made the code to begin with.

Commercial software and cloud services, as well as open source tools, can be used to churn out fake images, text, videos, and voices on an industrial scale, which is all perfect for cheating marks. Picture adverts for stuff featuring convincing but faked endorsements by celebrities; that kind of thing is on the FTC's radar.

"Evidence already exists that fraudsters can use these tools to generate realistic but fake content quickly and cheaply, disseminating it to large groups or targeting certain communities or specific individuals," Michael Atleson, an attorney for the FTC's division of advertising practices, wrote in a memo this week.

"The FTC Act's prohibition on deceptive or unfair conduct can apply if you make, sell, or use a tool that is effectively designed to deceive – even if that's not its intended or sole purpose."

And to be clear, there are no new rules or regulations at play here: it's just the FTC doing its usual thing of reminding people that today's tech fads are still covered by consumer protection laws, in the US at least.

Atleson highlighted the following scenarios that the FTC will find problematic:

Making generative AI: The legal eagle questioned whether we need ML models capable of producing content so realistic that it would fool people. "If you develop or offer a synthetic media or generative AI product, consider at the design stage and thereafter the reasonably foreseeable – and often obvious – ways it could be misused for fraud or cause other harm," he noted. "Then ask yourself whether such risks are high enough that you shouldn't offer the product at all."


Atleson also urged developers to take all reasonable steps before launching a generative AI model to cut the risk of the software being used to con victims, and warned against relying on detection engines to catch abuse of the technology, since smart miscreants can sidestep those detectors.

"The burden shouldn't be on consumers, anyway, to figure out if a generative AI tool is being used to scam them," he added.

Finally, he reminded everyone that scamming people using AI models is still scamming:

If you’re an advertiser, you might be tempted to employ some of these tools to sell, well, just about anything. Celebrity deepfakes are already common, for example, and have been popping up in ads. We’ve previously warned companies that misleading consumers via doppelgängers, such as fake dating profiles, phony followers, deepfakes, or chatbots, could result – and in fact have resulted – in FTC enforcement actions.

To us, it all boils down to: breaking the law using some new-fangled model is still breaking the law. And if you just make tools that aid this kind of crime, don't think you're somehow immune from prosecution. ®

Apropos of nothing... Firefox maker Mozilla this week announced Mozilla.ai, a startup with $30 million in funding that aims to build "a trustworthy, independent, and open-source AI ecosystem."
