FTC sues five AI outfits – and one case in particular raises questions

From allegations of lying about capabilities to fake reviews. Plus: Biden AI robocaller finally fined $6M

The FTC has made good on its promise to crack down on suspected deceptive AI claims, announcing legal action against five outfits accused of lying about their software's capabilities or using it to break the law. 

Two of the cases have already been settled. The other three are choosing to take the matter to court, the US regulator said. All five were targeted by the consumer watchdog in its so-called Operation AI Comply.

The three entities that want to take the matter to trial - Ascend Ecom, Ecommerce Empire Builders, and FBA Machine - were all accused of offering some variation of a scheme that promised customers high earnings in exchange for setting up and managing an online storefront using their AI. In each case, the FTC said, the claims were false and unlawful, and victims lost millions after paying for inventory, training, and the use of ready-made e-commerce shops in the hopes of earnings that never materialized.

Ascend Ecom, Ecommerce Empire Builders, and FBA Machine are each under court orders halting their operations while proceedings continue.

"Using AI tools to trick, mislead, or defraud people is illegal," FTC boss Lina Khan said this week. 

"The FTC's enforcement actions make clear that there is no AI exemption from the laws on the books," she added. "By cracking down on unfair or deceptive practices in these markets, the FTC is ensuring that honest businesses and innovators can get a fair shot and consumers are being protected."

Robot lawyer not actual real lawyer

Of the two organizations that settled their cases with the FTC, you may recognize one of them: DoNotPay, which early last year canceled plans to use an AI to defend a man in court after the outfit's CEO was threatened with jail for practicing law in California without a license. 

A few months after that stunt, DoNotPay was sued by a customer who claimed the robot lawyer wasn't "a robot, a lawyer, nor a law firm," and that its services were generally garbage: turnaround times were far too long, documents were delivered incomplete, and one person inadvertently ended up paying a fine they were trying to use the AI to fight.

That's generally what the FTC concluded too, with the agency alleging DoNotPay couldn't deliver on its promises to generate documents or sue without a lawyer because it "did not conduct testing to determine whether its AI chatbot's output was equal to the level of a human lawyer," and because the company itself did not hire or retain any attorneys.

DoNotPay settled with the FTC for $193,000, with a pledge to send out letters to all customers who used it between 2021 and 2023 to warn them about its limitations, and a promise to stop advertising its services as a substitute for legal help without evidence. 

The biz told The Register it was glad to have resolved the matter with the FTC, noting that it did not admit any wrongdoing in the settlement.

"The complaint relates to the usage of a few hundred customers some years ago (out of millions of people), with services that have long been discontinued," a DoNotPay spokesperson told us.

Commission split on AI testimonial writer

While four cases in the FTC announcement were decided without dissent from the regulator's commissioners, the leadership was split on the matter of Rytr, a startup that sells an AI writing tool. The decision to authorize a complaint against the company was made in a 3-2 vote by the board of commissioners.

Rytr, which offers its machine-learning software for a variety of use cases, at one point allowed users to generate detailed testimonial reviews with its tools, and that's what the FTC takes issue with. 

"Rytr's service generated detailed reviews that contained specific, often material details that had no relation to the user's input, and these reviews almost certainly would be false for the users who copied them and published them online," the FTC said. 

It's no surprise the commission would take action against AI-generated product reviews: The FTC proposed a rule that would ban them last year, and that rule goes into effect next month. 

For now, the startup was accused by the regulator of "violating the FTC Act by providing subscribers with the means to generate false and deceptive written content for consumer reviews."

Former FTC chief technologist Neil Chilson disagreed with the watchdog's decision in the Rytr case, telling The Register he was concerned it would harm innovation in the AI space.

"The FTC offers no evidence that Rytr users actually did harm consumers," Chilson argued. "And it blames Rytr for hypothetical bad acts of users."

Chilson cited the two dissenting FTC commissioners, who he said agree with his take that the decision sets a precedent for punishing developers of new technology for misuse, even though the developer wasn't the cause of the harm. 

"The FTC undermines the credibility of its legitimate anti-fraud work by engaging in such unnecessary, wasteful, and illegal overreach," Chilson opined.

The agency argued in its complaint [PDF] against Rytr that the biz did have a pretty cut-and-dried "testimonial and review" use case programmed right into it, which allowed a user to choose the tone, add certain keywords, and export the copy - conduct that in the watchdog's view would break the rules banning AI-generated product reviews.

In addition, the FTC made note of a number of actual uses of Rytr to that end. 

"Records show that at least some of its subscribers have utilized the Rytr service to produce hundreds and in some cases thousands of reviews," the commission said. 

"One subscriber generated hundreds of reviews for, among numerous other services, specific garage door repair companies," the complaint continued, while another user "generated over 83,000 reviews for various specific packing and moving services." 

Chilson responded to that aspect of the FTC's complaint by saying the federal agency offered no actual evidence of those reviews, and that the approach the Feds took in this case sets a significant precedent.

"The complaint blames Rytr for hypothetical bad acts of Rytr users. This is an attempt to break new legal ground that goes far beyond 'fake reviews' and could implicate any business that creates tools that users misuse," Chilson told us. 

Rytr wasn't fined under the terms of its settlement with the FTC, nor did it admit to any wrongdoing, but it did agree to cease offering similar services. As of this writing, the option to generate testimonials and reviews in Rytr is gone. Chilson said the startup had already removed the review-writing feature before the settlement, making the deal's terms trivially easy for Rytr to accept.

"The FTC wanted it to be very easy for Rytr to accept this settlement; they didn't want a court to review this fringe theory," Chilson said. 

We've reached out to the FTC to learn more about its reasoning behind the Rytr complaint. Rytr had no comment. ®