US watchdog gives itself power to demand documents in AI probes
And offers $25,000 prize to stop voice deepfakes from catching on
The US Federal Trade Commission has given itself the power to use compulsory processes – a mechanism that allows it to demand access to documents – for investigations of products and services that use or claim to be powered by AI.
Officials voted 3–0 to approve a resolution allowing Commission staff to issue civil investigative demands (CIDs) – an instrument similar to a subpoena – during probes into corporate AI activity. While the FTC will retain the right to direct which companies are investigated, staffers will be able to request documents and interviews much more quickly.
The power, which will remain in place for ten years, is intended to speed up investigations, in recognition that AI is likely to figure prominently in future trade regulation cases.
"Although AI, including generative AI, offers many beneficial uses, it can also be used to engage in fraud, deception, infringements on privacy, and other unfair practices, which may violate the FTC Act and other laws," the Commission argued. "At the same time, AI can raise competition issues in a variety of ways, including if one or just a few companies control the essential inputs or technologies that underpin AI."
The antitrust and consumer protection agency is increasingly interested in AI. Last week it launched the "FTC Voice Cloning Challenge" to solicit the best ideas for stopping voice cloning scams, which are increasingly common.
Miscreants can abuse AI algorithms to mimic a target's voice to access sensitive information – such as their bank accounts – or to deceive their families, friends, or employers in extortion scams. The FTC is looking for multidisciplinary approaches, whether that means building products that can detect AI-generated audio or devising policies and procedures to tackle the issue.
"This isn't a techno-solutionist approach or a call for self-regulation. We hope to generate multidisciplinary tools to prevent harms, and we will continue to enforce the law," the agency wrote upon launching the Challenge.
The competition's winner will score $25,000, the runner-up will receive $4,000, and up to three "honourable mentions" will pocket $2,000 each. "If viable ideas do not emerge," cautioned the FTC, "this will send a critical and early warning to policymakers that they should consider stricter limits on the use of this technology, given the challenge in preventing harmful development of applications in the marketplace."
The FTC has repeatedly issued stern warnings that it will crack down on developers building or claiming to use AI if their products are used to defraud, deceive, or manipulate consumers. It is, for example, looking into OpenAI's ChatGPT software to see if it might contravene any consumer protection laws related to data privacy or reputational harm. ®