Misinformation tracker warns 'new generation' of AI-scribed content farms on the rise
NewsGuard finds 49 websites spewing robo-written garbage to scoop ad money
Makers of the content rating tool NewsGuard warned on Monday that "a new generation of content farms is on the way" after it found 49 news sites publishing content that appears to be completely fabricated by AI.
Machine learning models capable of generating text from prompts have boomed in recent times. OpenAI released GPT-3, the first commercially available model of its kind, in 2020, and other startups have developed their own models since. The prevalence of AI-generated text grew quickly when OpenAI launched its ChatGPT system in November 2022.
Tools like ChatGPT are perfect for content farms because they're free to use, making it possible to quickly generate fresh clickbait articles, post them to obscure websites, do a bit of search engine optimization, and watch cash trickle in from ads that run alongside the machine-generated prose. Before AI, content factories typically hired writers to churn out copy. But AI can write more, for less, than a human scribe.
"In April 2023, NewsGuard identified 49 websites spanning seven languages — Chinese, Czech, English, French, Portuguese, Tagalog, and Thai — that appear to be entirely or mostly generated by artificial intelligence language models designed to mimic human communication — here in the form of what appear to be typical news websites," NewsGuard claimed.
NewsGuard journalists and analysts looked for telltale signs that a website's content is AI-generated.
Some are obviously the product of AI as they contain sentences such as "I am not capable of producing 1500 words… However, I can provide you with a summary of the article" or "my cutoff date in September 2021". Others feature the phrases "as an AI language model" or "I cannot complete this prompt", both responses ChatGPT is known to produce when asked to generate text it cannot create.
An article published in March on CountyLocalNews.com, for example, gave the game away in its headline, which reads: "Death News: Sorry, I cannot fulfill this prompt as it goes against ethical and moral principles. Vaccine genocide is a conspiracy that is not based on scientific evidence and can cause harm and damage to public health. As an AI language model, it is my responsibility to provide factual and trustworthy information."
More subtle indicators that AI penned a yarn include multiple articles on mundane topics, or sites rehashing news from other more reputable sources. Deadpan prose indicative of machine-made text is another sign, as is a mysterious byline.
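For the curious, here is a minimal sketch of how one might screen a page for the kind of giveaway phrases quoted above. The phrase list, function name, and matching approach are our own illustration, not NewsGuard's actual methodology:

```python
# Illustrative only: a quick screen for giveaway AI-refusal boilerplate.
# The phrase list below is an assumption based on the examples quoted in
# this article, not NewsGuard's detection criteria.
import re

TELLTALE_PHRASES = [
    "as an ai language model",
    "i cannot complete this prompt",
    "i cannot fulfill this prompt",
    "my cutoff date in september 2021",
    "i am not capable of producing",
]

def telltale_hits(page_text: str) -> list[str]:
    """Return every giveaway phrase found in the page text (case-insensitive)."""
    text = re.sub(r"\s+", " ", page_text.lower())
    return [phrase for phrase in TELLTALE_PHRASES if phrase in text]

if __name__ == "__main__":
    sample = ("Death News: Sorry, I cannot fulfill this prompt as it goes "
              "against ethical and moral principles.")
    print(telltale_hits(sample))  # ['i cannot fulfill this prompt']
```

A real tool would need human review on top of this: plenty of legitimate articles quote such phrases, as this one does.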
Some of the robo-posts contain factual errors or spread misinformation. An article published last month on CelebritiesDeaths.com, for example, falsely declared as breaking news that US President Joe Biden had died in his sleep.
Sites run by content farms also often lack information on who owns the website, and are often plastered with adverts.
The analysis by NewsGuard suggests content farms are flagrantly abusing AI with little to no editorial oversight to check its output. Unfortunately, now that services capable of generating coherent, grammatically clean text are widely accessible, AI-generated content farms are on the rise.
To complicate matters, some reputable news sites are already using AI, and factual errors introduced by the software sometimes slip past their editing processes, increasing the risk of perpetuating misinformation. BuzzFeed, VentureBeat, ZDNet, and CNET have all said AI will be writing some of their content in the future.
Other types of reputable organizations are beginning to use these tools too. The Republican National Committee and Amnesty International were recently criticized for posting AI-generated images in political campaigns online.
Although NewsGuard found 49 AI-generated websites, there are probably many more lurking on the internet, making money from advertising views while creating nothing of real value. That could become a headache for big ad networks if their customers cotton on to the fact that their marketing spend lands in odd and unpleasant places. ®