Article Content

Fraud in the conduct and publication of scientific research isn't new. But rapid advances in artificial intelligence systems, especially generative AI language programs such as ChatGPT (OpenAI, San Francisco, CA; GPT stands for Generative Pretrained Transformer), make it much easier to commit widespread fraud in ways that are ever more difficult for employers, editors, reviewers, and publishers to detect and reject. Consequently, it becomes increasingly challenging to ensure that papers published in scientific and professional journals are indeed genuine. The stakes are very high for healthcare journals because clinical decisions will be made based on the "evidence" disseminated to the field.

 

Consider the rather shocking example of Spanish chemical scientist Professor Rafael Luque, one of the world's most highly cited research authors.1 He was sanctioned and suspended without pay for 13 years by the University of Cordoba when it was discovered that other universities, seeking to improve their world rankings, had paid to have him add their institutions' names to his list of affiliations in exchange for little to no actual work. Even more appalling, at age 44 he has published (well, his name is listed on) over 950 papers; between January and April of this year he published 58 articles, one every 37 hours! How could this be? Apparently, he has also added his name to articles written by others, including co-authorship on a paper written by a student using data stolen by that student from another lab. At least one of Luque's papers was found to have been AI generated; it had previously been offered for sale online. Luque admits that he uses ChatGPT to "polish" his manuscript texts. While this case is admittedly egregious, it highlights several kinds of publication fraud (all of which may be worsened using AI systems), and shows that it is quite possible to fool employers, editors, reviewers, and publishers for years without being caught.

 

Plagiarism is an age-old fraud, and in fact AI systems have been developed for positive purposes to detect and flag text in submitted manuscripts that appears to have been previously published. Through the Editorial Manager submission system, we use one such program, iThenticate (Turnitin, Oakland, CA). Enterprising fraudsters have found ways around these plagiarism detectors, however, by rephrasing text or inserting some new phrasing into the existing text. And now, for those who don't want to do their own cheating, there are AI programs that will write the rephrased version for you! In response, AI plagiarism detection programs are being modified to detect odd and automated phrasing. It's a "cat and mouse" game between authors committing fraud and publication professionals trying to catch and counter them.
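The "cat and mouse" dynamic can be illustrated with a toy sketch of the core idea behind text-matching detection: comparing overlapping word n-grams between a submission and a known source. This is only an illustrative simplification, not how iThenticate or any commercial tool actually works, and the function names (`ngrams`, `overlap_score`) are invented for this example:

```python
def ngrams(text, n=5):
    """Return the set of lowercase word n-grams in a document."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission, source, n=5):
    """Fraction of the submission's n-grams that also appear in the source.

    A high score suggests reused text. Note the weakness this editorial
    describes: rephrasing even a few words breaks many n-grams and
    lowers the score, which is how fraudsters evade verbatim matching.
    """
    sub = ngrams(submission, n)
    if not sub:
        return 0.0
    return len(sub & ngrams(source, n)) / len(sub)
```

Identical text scores 1.0, while inserting or swapping a single word knocks out several five-word sequences at once, which is why detection tools are being pushed toward semantic rather than purely verbatim matching.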

 

A primary concern now is how to detect text that was generated by an AI system such as ChatGPT but has not been previously published. The use of AI generated language in published research is increasing, with the proportion of published papers having a "high probability" (>90%) of AI generated language rising from 21.7% to 36.7% between 2020 and 2023.2 All of these papers were written before the launch of ChatGPT. Humans are not especially good at detecting AI generated language given the current (imperfect) quality of AI text,2-4 and this quality is expected to improve rapidly as more people interact with these systems, which then "learn" from those interactions.3-5 As AI generated text becomes more sophisticated, the ability of human readers to distinguish human- vs AI-generated text will decline. We will need AI detection tools to identify AI generated text. Some AI systems originally developed to detect plagiarism are being further modified to detect AI generated text (eg, Turnitin/iThenticate), but their "diagnostic accuracy" is not yet well established. For this reason, some journals have banned AI generated text altogether, while most others discourage it and require, at a minimum, explicit transparency about its use.

 

Several respected organizations dedicated to ethics and integrity in publishing have recently developed guidelines regarding the use of AI in scholarly manuscripts and publishing6-8; these guidelines are being updated frequently to address the opportunities and threats of rapidly changing AI technologies. Key points include:

 

* AI systems cannot be listed as an author or co-author, nor can they be cited as an author or information source.

 

* Transparency is paramount; authors must disclose (in the Methods or Acknowledgement section) the use of AI systems when used for (1) generation or paraphrasing of text and production of images/graphs, and (2) the creation, collection and analysis of data.

 

* This disclosure should include how the AI was used and, specifically, which AI tool and which version were used. This is very similar to the current convention for reporting statistical analyses.

 

* No disclosure is required for literature searching, word-processing, spelling and grammar correction, language translation, reference management, etc.

 

* Works created by AI systems cannot be copyrighted.

 

* Editors and reviewers should never upload any manuscript submission into an AI system; doing so breaches the confidentiality of the unpublished work.

 

* Reviewers must disclose any use of AI tools in the writing of a peer review.

 

* Editors must disclose if any AI systems are used in decision-making; current guidelines are clear that any and all consequential decisions should involve human oversight and accountability.

 

 

Of course, in the scholarly publication world, AI tools can be used for good as well. Authors submitting their manuscript to the JGPT can, for free, check to see if their paper is properly formatted using Paperpal (Cactus Communications Services, Singapore). For a small optional fee, Paperpal will also copyedit the paper, without changing any meaningful content. Many writers use AI programs such as Grammarly to improve their written communication. None of this is unethical or fraudulent.

 

Readers can expect that the upcoming revision to the Instructions for Authors will contain guidance for authors regarding the ethical use of AI in the conduct and publication of their work.

 

Readers may also wish to see:

 

Anderson N, Belavy DL, Perle SM, et al. AI did not write this manuscript, or did it? Can we trick the AI text detector into generated texts? The potential future of ChatGPT and AI in Sports & Exercise Medicine manuscript generation. BMJ Open Sport Exerc Med. 2023;9(1):e001568. doi:10.1136/bmjsem-2023-001568

 

Elali FR, Rachid LN. AI-generated research paper fabrication and plagiarism in the scientific community. Patterns. 2023;4(3):100706. doi:10.1016/j.patter.2023.100706

 

Hosseini M, Rasmussen LM, Resnik DB. Using AI to write scholarly publications. Account Res. 2023;1-9. doi:10.1080/08989621.2023.2168535

 

Levene A. COPE. Editorial: Artificial intelligence and authorship. February 23, 2023. Accessed May 25, 2023. https://publicationethics.org/news/artificial-intelligence-and-authorship

 

Majovsky M, Cerny M, Kasal M, Komarc M, Netuka D. Artificial intelligence can generate fraudulent but authentic-looking scientific medical articles: Pandora's Box has been opened. J Med Internet Res. 2023;25:e46924. doi:10.2196/46924

 

Editorial. Tools such as ChatGPT threaten transparent science: here are our ground rules for their use. Nature. 2023;613:612. doi:10.1038/d41586-023-00191-1

 

Sun DW. Urgent need for ethical policies to prevent the proliferation of AI-generated texts in scientific papers. Food Bioprocess Technol. 2023;16:941-943. doi:10.1007/s11947-023-03046-9

 

Watson R, Stiglic G. COPE. Guest editorial: The challenge of AI chatbots for journal editors. February 23, 2023. Accessed May 31, 2023. https://publicationethics.org/news/challenge-ai-chatbots-journal-editors

 

 

EDITOR'S NOTE:

"Honey, I shrunk the journal!"

Readers of the JGPT will of course notice that this issue (46(3)) and the following issue (46(4)) in 2023 have fewer print pages, and thus fewer articles, than usual. This change is temporary, but intentional. The reduced number of print pages in this latter half of the year makes up for an unusually large first issue (46(1)), which had nearly 90 print pages instead of the usual 50+, and thus included more articles than is typical. In the end, the annual number of print pages and articles will not be any different than in prior years. The Editor thanks the readership for their patience while the print page counts "even out" over the third and fourth issues of 2023. The regular, "right-sized" journal will be back in your hands in 2024.

 

REFERENCES

 

1. Ansede M. One of the world's most cited scientists, Rafael Luque, suspended without pay for 13 years. El País. April 4, 2023. Accessed May 23, 2023. https://english.elpais.com/science-tech/2023-04-02/one-of-the-worlds-most-cited-

 

2. Miller LE, Bhattacharyya D, Miller VM, et al. Recent trend in artificial intelligence-assisted biomedical publishing: a quantitative bibliometric analysis. Cureus. 2023;15(5):e39224. doi:10.7759/cureus.39224

 

3. Jakesch M, Hancock JT, Naaman M. Human heuristics for AI-generated language are flawed. Proc Natl Acad Sci. 2023;120(11):e2208839120. doi:10.1073/pnas.2208839120

 

4. Sadasivan VS, Kumar A, Balasubramanian S, Wang W, Feizi S. Can AI-generated text be reliably detected? arXiv. 2023. doi:10.48550/arXiv.2303.11156

 

5. Chakraborty S, Bedi AS, Zhu S, An B, Manocha D, Huang F. On the possibilities of AI-generated text detection. arXiv. 2023. doi:10.48550/arXiv.2304.04736

 

6. Committee on Publication Ethics (COPE). Position statement: authorship and AI tools. February 13, 2023. Accessed May 25, 2023. https://publicationethics.org/cope-position-statements/ai-authorFeb2023

 

7. International Committee of Medical Journal Editors (ICMJE). Recommendations for the conduct, reporting, editing, and publication of scholarly work in medical journals. Updated May 2023. Accessed May 25, 2023. https://www.icmje.org/icmje-recommendations.pdf

 

8. Flanagin A, Bibbins-Domingo K, Berkwits M, Christiansen SL. Nonhuman "authors" and implications for the integrity of scientific publication and medical knowledge. JAMA. 2023;329(8):637-639. doi:10.1001/jama.2023.1344