Some scientists can't stop using AI to write research papers

If you read about 'meticulous commendable intricacy', there's a chance a boffin had help

Linguistic and statistical analyses of scientific articles suggest that generative AI may have been used to write an increasing amount of scientific literature.

Two academic papers assert that analyzing word choice in the corpus of science publications reveals an increasing usage of AI for writing research papers. One study, published in March by Andrew Gray of University College London in the UK, suggests at least one percent – 60,000 or more – of all papers published in 2023 were written at least partially by AI.

A second paper published in April by a Stanford University team in the US claims this figure might range between 6.3 and 17.5 percent, depending on the topic.

Both papers looked for certain words that large language models (LLMs) use habitually, such as "intricate," "pivotal," and "meticulously." By tracking the use of those words across scientific literature, and comparing this to words that aren't particularly favored by AI, the two studies say they can detect an increasing reliance on machine learning within the scientific publishing community.
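The approach the two studies describe - tracking how often LLM-favored "marker" words appear relative to stable control words, year over year - can be sketched roughly as follows. This is a minimal illustration, not the authors' actual code; the word lists are taken from the article, while the per-million-token normalization and the helper names are assumptions:

```python
from collections import Counter
import re

# Words the studies flagged as LLM-favored, versus control words
# whose prevalence is expected to stay roughly flat over time.
MARKERS = {"intricate", "pivotal", "meticulously"}
CONTROLS = {"red", "conclusion", "after"}

def word_rates(abstracts, vocab):
    """Occurrences of each vocab word per million tokens in a corpus."""
    counts = Counter()
    total = 0
    for text in abstracts:
        tokens = re.findall(r"[a-z]+", text.lower())
        total += len(tokens)
        counts.update(t for t in tokens if t in vocab)
    return {w: 1e6 * counts[w] / total for w in vocab} if total else {}

def percent_change(before, after):
    """Percent change in per-million rate between two yearly corpora."""
    return {w: 100 * (after[w] - before[w]) / before[w]
            for w in before if before[w] > 0}
```

In use, one would compute `word_rates` over, say, the 2022 and 2023 corpora and compare: a large jump for the markers alongside a flat reading for the controls is the signal both papers report.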

In Gray's paper, the use of control words like "red," "conclusion," and "after" changed by only a few percent from 2019 to 2023. The same was true of certain other adjectives and adverbs until 2023 (termed the post-LLM year by Gray).

In that year, use of the words "meticulous," "commendable," and "intricate" rose by 59, 83, and 117 percent respectively, while their prevalence in scientific literature hardly changed between 2019 and 2022. The word with the single biggest increase in prevalence post-2022 was "meticulously," up 137 percent.

The Stanford paper found similar phenomena, demonstrating a sudden increase for the words "realm," "showcasing," "intricate," and "pivotal." The former two were used about 80 percent more often than in 2021 and 2022, while the latter two were used around 120 and almost 160 percent more frequently respectively.

The researchers also considered word usage statistics in various scientific disciplines. Computer science and electrical engineering were ahead of the pack when it came to using AI-preferred language, while mathematics, physics, and papers published by the journal Nature only saw increases of between 5 and 7.5 percent.

The Stanford bods also noted that authors posting more preprints, working in more crowded fields, and writing shorter papers seem to use AI more frequently. Their paper suggests that a general lack of time and a need to write as much as possible encourages the use of LLMs, which can help increase output.

Potentially the next big controversy in the scientific community

Using AI to help in the research process isn't anything new, and lots of boffins are open about utilizing AI to tweak experiments to achieve better results. Using AI to actually write abstracts and other chunks of papers is very different, however: the general expectation is that scientific articles are written by actual humans, not robots, and at least a couple of publishers consider using LLMs to write papers to be scientific misconduct.

Using AI models can also be risky because they often produce inaccurate text - the very thing scientific literature is not supposed to do. AI models can even fabricate quotations and citations, an occurrence that infamously got two New York attorneys in trouble for citing cases ChatGPT had dreamed up.

"Authors who are using LLM-generated text must be pressured to disclose this or to think twice about whether doing so is appropriate in the first place, as a matter of basic research integrity," University College London’s Gray opined.

The Stanford researchers also raised similar concerns, writing that use of generative AI in scientific literature could create "risks to the security and independence of scientific practice." ®
