06/04/2024 / By Lance D Johnson
John Wiley and Sons, a major academic publisher, is retracting more than 11,300 “peer-reviewed” science papers it had previously published. These papers were once regarded as cutting-edge science and were cited numerous times by other academic researchers. Now these scientific papers – which often relied on taxpayer dollars for research funding – are being revealed as fraudulent.
Additionally, the 217-year-old publisher announced the closure of 19 journals due to large-scale research fraud. The fake papers often contained nonsensical phrases generated by AI to evade plagiarism detection. Examples include “breast cancer” rendered as “bosom peril” and “fluid dynamics” written as “gooey stream.” In one paper, “artificial intelligence” was called “counterfeit consciousness.”
These systemic issues of fraud have significantly undermined the legitimacy of scientific research and the integrity of scientific journals. The academic publishing industry, valued at nearly $30 billion, now faces a credibility crisis.
Scientists worldwide face immense pressure to publish, as their careers are often defined by the prestige of their peer-reviewed publications. Researchers clamoring for funds are cutting corners by padding papers with irrelevant references and leaning on generative AI. A scientific paper is supposed to include citations that acknowledge the original research informing the current work, but some papers feature a list of irrelevant references simply to look legitimate. In many cases, researchers rely on AI to generate these citations, and many of the resulting references either don’t exist or have nothing to do with the paper at hand. In one cluster of retracted studies, nearly identical contact emails were registered to a university in China, although few, if any, of the authors were based there.
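One simple screen for fabricated references is to check whether each cited DOI actually resolves. The sketch below is purely illustrative: it assumes the references have already been extracted as DOI strings, and it queries Crossref’s public REST API, which answers 404 for DOIs that were never registered. It is not how any particular publisher vets submissions.

```python
import requests  # third-party HTTP library (pip install requests)

def doi_exists(doi: str) -> bool:
    """Return True if the DOI resolves to a registered record in Crossref."""
    # Crossref's public REST API returns 404 for DOIs it has never seen.
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

# Hypothetical reference list; the second entry is deliberately bogus.
references = ["10.1038/nature12373", "10.9999/fabricated.2023.001"]
for doi in references:
    verdict = "found" if doi_exists(doi) else "NOT FOUND (possible fabrication)"
    print(f"{doi}: {verdict}")
```

A check like this only catches references that do not exist at all; citations that resolve but are irrelevant to the paper still require a human reader to spot.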
Generative AI is also used to disguise plagiarism in new scientific papers. Many of these fraudulent papers contain technical-sounding passages that were generated by AI and inserted midway through the paper, where peer reviewers are less likely to scrutinize them. These tortured AI phrases replace real terms from the original research so that screening tools fail to detect the plagiarism.
Guillaume Cabanac, a computer science researcher at Université Toulouse III-Paul Sabatier in France, developed a tool to identify such issues, called the “Problematic Paper Screener.” It scans a vast body of published literature – about 130 million papers – for various red flags, including “tortured phrases.” Cabanac and his colleagues discovered that researchers who attempt to evade plagiarism detectors often replace key scientific terms with synonyms produced by automatic text generators.
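The core idea behind this kind of screening can be illustrated in a few lines of Python. The phrase dictionary below is a tiny hypothetical sample built from the examples quoted above; Cabanac’s actual screener draws on a far larger, crowd-sourced list and operates at a vastly bigger scale.

```python
import re

# Tiny, hypothetical sample of known substitutions; the real screener
# draws on a much larger, crowd-sourced dictionary of tortured phrases.
TORTURED_PHRASES = {
    "bosom peril": "breast cancer",
    "gooey stream": "fluid dynamics",
    "counterfeit consciousness": "artificial intelligence",
}

def flag_tortured_phrases(text: str) -> list[tuple[str, str]]:
    """Return (tortured phrase, likely original term) pairs found in text."""
    hits = []
    for phrase, original in TORTURED_PHRASES.items():
        if re.search(r"\b" + re.escape(phrase) + r"\b", text, re.IGNORECASE):
            hits.append((phrase, original))
    return hits

sample = "We apply counterfeit consciousness to early detection of bosom peril."
for phrase, original in flag_tortured_phrases(sample):
    print(f"'{phrase}' likely disguises '{original}'")
```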
“Generative AI has just handed them a winning lottery ticket,” said Kim Eggleton of IOP Publishing. “They can produce these papers cheaply and at scale, while detection methods have not yet caught up. I can only see this challenge growing.”
In fact, a researcher at University College London recently estimated that approximately one percent of all scientific articles published last year, some 60,000 papers, were written at least in part by a computer. In some fields, that figure may be as high as one in five.
For example, a recent paper published in Surfaces and Interfaces, an Elsevier journal, contained the line: “certainly, here is a possible introduction for your topic.” Researchers are using AI chatbots and large language models (LLMs) without even reading what the machine has drafted for publication. A quick proofread would have flagged that phrase as computer-written. If researchers, peer reviewers and publishers are missing these basic AI tells, what else in these research papers is fabricated, plagiarized or made up by a computer?
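Catching this kind of leftover chatbot boilerplate requires nothing more than a plain text search. The sketch below is a hypothetical illustration: the list of telltale fragments is illustrative only, not any publisher’s actual screening list.

```python
# Illustrative fragments of chatbot boilerplate; not exhaustive and
# not an official screening list used by any publisher.
LLM_TELLS = [
    "certainly, here is a possible introduction",
    "as an ai language model",
    "regenerate response",
]

def find_llm_tells(manuscript: str) -> list[str]:
    """Return any telltale chatbot fragments left in the manuscript text."""
    lowered = manuscript.lower()
    return [tell for tell in LLM_TELLS if tell in lowered]

draft = "Certainly, here is a possible introduction for your topic: ..."
print(find_llm_tells(draft))  # ['certainly, here is a possible introduction']
```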
Scientific integrity consultant Elisabeth Bik said LLMs are designed to generate plausible text, not to produce factually accurate ideas. “The problem is that these tools are not good enough yet to trust,” Bik said, pointing to the phenomenon known as “hallucination,” in which the models simply “make stuff up.”
Blind trust in AI is damaging the integrity of science papers. It takes strong reasoning and discernment skills to navigate the untrustworthy, hallucination-prone and bloviating nature of AI large language models.