The FCC wants to criminalize AI robocall spam
The only thing worse than a telemarketer is a robo-telemarketer
The FCC wants to make AI-powered robocalls illegal and has warned of a rising wave of scams from voice-cloning technology.
On Thursday, Jessica Rosenworcel, boss of America's communications watchdog, said machine-learning software could be used to trick people into, for example, donating money to fraudulent causes.
It's one thing for humans to con other humans; it's another for computers to imitate celebrities and others while spam-calling people at industrial scale to swindle them with convincing lures.
Last month, New Hampshire residents received a fake call mimicking President Joe Biden's voice telling them not to vote in the state's presidential primary election, in an attempt to disrupt the results.
"AI-generated voice cloning and images are already sowing confusion by tricking consumers into thinking scams and frauds are legitimate. No matter what celebrity or politician you favor, or what your relationship is with your kin when they call for help, it is possible we could all be a target of these faked calls," Rosenworcel said this week.
"That's why the FCC is taking steps to recognize this emerging technology as illegal under existing law, giving our partners at State Attorneys General offices across the country new tools they can use to crack down on these scams and protect consumers."
The agency believes the use of AI voice-cloning technology in robocall scams should be deemed illegal under the 1991 Telephone Consumer Protection Act (TCPA). Current law requires telemarketers to obtain explicit consent from consumers before making automated calls using "an artificial or prerecorded" voice. Rosenworcel believes the same rules should apply to AI-generated robocalls, too.
- FCC probes rise of AI robocall armies
- Robocall scammers sentenced in US after netting $1.2M via India-based call centers
- FTC sues VoIP provider over 'billions of illegal robocalls'
- FCC calls for mega $300 million fine for massive US robocall campaign
Lawmakers also want to tackle the issue. House Rep Frank Pallone Jr (D-NJ) this week introduced a bill, called the Do Not Disturb Act, that would require telemarketers to disclose whether AI was used to automatically generate the content of a phone call or text message.
Penalties would apply if telemarketers impersonate someone. Pallone said that his draft legislation closes loopholes and expands robocall rules to cover text messages and the use of AI.
Government officials across the US support the FCC's ideas. Last month, Pennsylvania Attorney General Michelle Henry wrote a letter [PDF] to the FCC arguing that AI-generated voices should be classified as artificial voices, meaning the TCPA should already protect against robocall scams made using voice-cloning technology. The letter was signed by 25 attorneys general from other states.
"Technology is advancing and expanding, seemingly, by the minute, and we must ensure these new developments are not used to prey upon, deceive, or manipulate consumers," Henry said.
"This new technology cannot be used as a loophole to barrage consumers with illegal calls. I commend the partners in this bipartisan coalition for seeing the potential harm AI can present to consumers already overwhelmed by robocalls and text communications." ®