EFF wants FTC to treat lying chatbots as 'unfair and deceptive' in eyes of the law
And hit AI operators 'with all the fines', says Cory Doctorow
The Electronic Frontier Foundation (EFF) has proposed that the FTC apply rules prohibiting unfair trade practices to punish those operating deceptive chatbots.
Author and activist Cory Doctorow, special advisor to the EFF, said the US trade watchdog shouldn't be focusing on copyright law as a way to address the privacy, fairness, and labor issues accompanying generative AI applications. Instead, the regulator should rely on its established authority under Section 5 of the Federal Trade Commission Act, which prohibits "unfair or deceptive acts or practices in or affecting commerce."
"The FTC should issue guidance declaring that any company that deploys a chatbot that lies to a customer has engaged in an 'unfair and deceptive practice' that violates Section 5 of the Federal Trade Commission Act, with all the fines and other penalties that entails," Doctorow wrote.
Failing to penalize the makers of lying chatbots, he argues, leaves operators with less incentive to invest in improving the technology.
That recommendation, however, looks fraught in light of the US Supreme Court's decision last week, which overturned the doctrine known as Chevron deference, under which judges deferred to regulatory agencies' expert interpretation of ambiguities in the law.
Now judges can decide what government regulators can do when faced with ambiguous or ill-defined law.
The FTC in its own enforcement principles [PDF] acknowledges that its statutory authority isn't precisely defined under Section 5.
"Congress chose not to define the specific acts and practices that constitute unfair methods of competition in violation of Section 5, recognizing that application of the statute would need to evolve with changing markets and business practices," the agency explains.
"Instead, it left the development of Section 5 to the Federal Trade Commission as an expert administrative body, which would apply the statute on a flexible case-by-case basis, subject to judicial review."
The EFF declined to comment on the potential impact of the Supreme Court decision. The FTC also declined to comment. The Register, however, understands that the FTC expects the Supremes' decision will not have much impact on the commission's main work, much of which involves evidentiary proceedings related to mergers and acquisitions. The agency has not relied on a Chevron-based argument in its litigation and the courts have not returned Chevron-based opinions on agency matters.
- Supreme Court orders rethink on Texas, Florida laws banning web moderation
- Brace for new complications in big tech takedowns after Supreme Court upended regulatory rules
- Antitrust cops cry foul over Meta's pay-or-consent ultimatum to Europeans
- FCC wants telcos to carrier unlock cellphones 60 days after activation
Still, now that judges have been tasked with interpreting regulatory powers, US government agencies taking actions not specifically enumerated in applicable laws can expect more frequent challenges from businesses and interest groups.
Tech companies would prefer not to be regulated, but they face a growing number of AI-focused rules around the world. Last month, the National Conference of State Legislatures said that AI bills have been introduced in the 2024 legislative session in at least 40 states, Puerto Rico, the Virgin Islands, and Washington, DC, and that AI legislation has already been adopted in six states, Puerto Rico, and the Virgin Islands.
The flood of legislative proposals to regulate AI has become significant enough that Santa Clara University law professor Eric Goldman argues the AI industry is doomed. And among industry participants, there is also concern about executive and legislative overreach.
But as Doctorow points out, something needs to be done because generative AI chatbots are already causing problems.
For example, last year an Air Canada passenger researching flights to attend his grandmother's funeral was told by the airline's chatbot that he could obtain a reduced bereavement fare after purchasing a ticket. But the chatbot misrepresented the airline's policy, leaving the passenger to pursue the erroneously promised discount through British Columbia's Civil Resolution Tribunal, a small-claims body.
The tribunal found "Air Canada did not take reasonable care to ensure its chatbot was accurate," and awarded the passenger CA$812.02.
There have also been reports of chatbots swearing at customers, giving incorrect answers to legal queries, telling businesses to take action that's illegal, and generally not working very well.
Regardless of the regulatory landscape, it may be reality that keeps AI in check. According to a report [PDF] issued last week by Goldman Sachs, AI spending to date "has little to show for it so far beyond reports of efficiency gains among developers." ®