“As of my last knowledge update”, “regenerate response”, “as an AI language model” — these are just a few of the telltale signs of researchers’ use of artificial intelligence (AI) that science-integrity watchers have found sprinkled through papers in the scholarly literature.
Generative AI tools such as ChatGPT have quickly transformed academic publishing. Scientists are increasingly using them to prepare and review manuscripts, and publishers have scrambled to create guidelines for their ethical use. Although policies vary, many publishers require authors to disclose the use of AI in the preparation of scientific papers.
But science sleuths have identified hundreds of cases in which AI tools seem to have been used without disclosure. In some cases, the papers have been silently corrected — the hallmark AI phrases removed without acknowledgement. This type of quiet change is a potential threat to scientific integrity, say some researchers.
Such changes have appeared in a “small minority of journals”, says Alex Glynn, a research literacy and communications instructor at the University of Louisville in Kentucky. But given that there are probably also many cases in which authors have used AI without leaving obvious signs, “I am surprised by how much there is”, he adds.
Since 2023, integrity specialists have flagged papers with obvious signs of undisclosed AI use, such as those containing the phrase “regenerate response”, which some chatbots based on large language models display when a user requests a new answer to a query. Such phrases can end up in articles when an author copies and pastes a chatbot’s responses.
One of the first cases that Glynn recalls seeing was in a now-retracted paper published in 2024 in Radiology Case Reports1 that contained the chatbot phrase “I am an AI language model”. “It was as blatant as it could possibly be,” Glynn says.
“Somehow this passed not only the authors’ eyes, but the editors, reviewers, typesetters and everyone else who was involved in the production process.” Glynn has since found hundreds more papers with hallmarks of AI use — including some containing subtler signs, such as the words, “Certainly, here are”, another phrase typical of AI chatbots.
He created an online tracker, Academ-AI, to log these cases; it now lists more than 700 papers. In an analysis of the first 500 papers flagged, released as a preprint in November2, Glynn found that 13% of these articles appeared in journals belonging to large publishers, such as Elsevier, Springer Nature and MDPI.
Artur Strzelecki, a researcher at the University of Economics in Katowice, Poland, has also gathered examples of undisclosed AI use in papers, focusing on reputable journals. In a study published in December, he identified 64 papers that were published in journals categorized by the Scopus academic database as being in the top quartile for their field3.
“These are places where we’d expect good work from editors and decent reviews,” Strzelecki says. Nature’s news team contacted several publishers whose papers had been flagged by Glynn and Strzelecki, including Springer Nature, Taylor & Francis and IEEE.