Publisher breaks news by using bots to write inaccurate stories

Worse still, human editors appear not to have caught the mistakes

Consumer tech outlet CNET is reviewing all articles it published that were written with the help of AI, after some were found to contain incorrect information.

The masthead quietly began using a text-generation tool to write stories for its money section in November 2022. The articles credit "CNET Money Staff", but readers were not told that the byline referred to an artificial author, whose work was edited by humans, unless they hovered over the byline's text.

Editor in Chief Connie Guglielmo said the site now attributes the stories to "CNET Money". A total of 78 articles so far contain text generated by AI, covering personal finance topics such as credit scores and home equity loans.

Guglielmo said CNET experimented with AI software "to see if the tech can help our busy staff of reporters and editors with their job to cover topics from a 360-degree perspective."

"Will this AI engine efficiently assist them in using publicly available facts to create the most helpful content so our audience can make better decisions? Will this enable them to create even more deeply researched stories, analyses, features, testing and advice work we're known for?" she asked.

The answer appears to have been "No."

Stories attributed to CNET Money contain errors. An article on compound interest, for example, gets the math wrong. "If you deposit $10,000 into a savings account that earns 3 [per cent] interest compounding annually, you'll earn $10,300 at the end of the first year," claims one story.

That's not quite right: at 3 per cent you'll earn $300 in interest, not $10,300. The larger figure is the account's total balance after one year, not the earnings. Mistakes in other articles show the AI doesn't quite understand how mortgages and loans are paid off.
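The arithmetic is easy to verify in a few lines of Python (a quick sketch of the standard compound interest formula, not anything CNET used):

```python
def compound_balance(principal: float, rate: float, years: int) -> float:
    """Balance after interest at `rate` compounds annually for `years` years."""
    return principal * (1 + rate) ** years

principal = 10_000
balance = compound_balance(principal, 0.03, 1)
interest_earned = balance - principal

print(f"Balance after one year: ${balance:,.2f}")   # $10,300.00
print(f"Interest earned:        ${interest_earned:,.2f}")  # $300.00
```

The $10,300 figure is the final balance; the interest *earned* is the $300 difference between that and the deposit, which is the distinction the AI-written story missed.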

CNET will now review all of its AI-written copy to rewrite any false information generated by software. "We are actively reviewing all our AI-assisted pieces to make sure no further inaccuracies made it through the editing process, as humans make mistakes, too," a spokesperson told Futurism. "We will continue to issue any necessary corrections according to CNET's correction policy." The Register has asked CNET for comment.

Large language models powering AI text generators, like the latest ChatGPT system, are fundamentally flawed; they may produce readable and grammatically correct sentences, but can't judge if their data is accurate or assess the veracity of their output. CNET's experiment with this technology hasn't revealed anything new: Current AI systems are flawed, but humans will use them anyway. Trust them at your own peril. ®
