Authors

  1. Pickler, Rita H.

Article Content

I know what I write in this editorial will be out-of-date by the time it is published. However, it is important to comment on the latest potential threat to scientific integrity: artificial intelligence (AI), now in the form of software that "writes" for you. As we know, there have long been unseen writers, also known as ghostwriters, for scientific papers (Gotzsche et al., 2009). Typically, ghostwriters were sophisticated technical writers whose job was to get research findings to print when the "busy" scientist could not. Often, these writers, rarely acknowledged, worked for the institutions or agencies that had funded the research. Because they were mostly hidden contributors to scientific communication, neither the public nor other scientists knew who these people were, what their scientific training was, or, importantly, what their ethical training was (Das & Das, 2014). We simply did not know they were there, writing papers.

 

Today, we have new scientific "writers" in the form of AI. Most notable among these at the time of this writing is ChatGPT, a language model trained by OpenAI (https://openai.com/) to create text based on input received from developers and users (Chatterjee & Dethlefs, 2023). Late in 2022, OpenAI released ChatGPT to the public as a research preview; it was an immediate hit.

 

There are many positive aspects to ChatGPT and other AI-based writing tools. These include the capability to process and analyze large amounts of data relatively quickly, although any product of such analysis will require researcher verification. AI tools can also adequately create outlines for writing, provide basic definitions for a wide range of processes and topics, and write statistical code for some programs with varying degrees of accuracy. Moreover, AI tools may make it easier to produce papers more congruent with English language standards, which would be a clear advantage for authors who do not typically write in English. Even now, AI tools can simplify complex research language, such as the language used on consent forms.

 

More worrisome, of course, are the dangerous aspects of AI tools. As ChatGPT itself has "written," AI can contribute to scientific misinformation. For example, AI-generated "writing" is drawn from a large volume of text, images, and videos available online but with unclear standards of scientific verification. Thus, the risk is high that "scientific papers" written by AI systems will contain false or misleading information; improperly validated input (or output) could lead to misinformation. So although AI algorithms can be used to make decisions based on large amounts of data, if the data used to train these algorithms are biased or contain misinformation, the decisions made by the AI system can also be biased or contain misinformation.

 

I recently heard a well-informed speaker refer to AI "authored" writing as persuasive but "stupid." In fact, much of what ChatGPT produces sounds quite convincing (yes, I have tested this statement). Only when you read carefully and seek corroboration from reliable sources, which ChatGPT does not provide, do you understand that what you have read is factually incorrect or, in the parlance of AI, a hallucination. Undoubtedly, OpenAI and other developers will work on improving output accuracy. However, the output can never be as accurate as that produced by an ethical scientist (Thorp, 2023).

 

That brings me to what I think is the most important part of this editorial: AI is not responsible for the spread of scientific misinformation. AI is a tool, and unethical persons, including unethical scientists, can use it to spread misinformation. Thus, we, the scientific community, need to proceed carefully and with rigorous attention to the use of AI-based tools; critical evaluation of information transmitted in scientific writing is required, with increased attention paid to detecting AI-generated writing.

 

At this time (mid-February 2023), a number of scientific publishers have stated that they will not allow ChatGPT or similar software to be listed as an author (e.g., see Flanagin et al., 2023). That will be the case as well for Nursing Research. A machine is not an author according to the recommendations of the International Committee of Medical Journal Editors (ICMJE; https://www.icmje.org/), to which the journal adheres. As a reminder, the ICMJE recommends the following criteria for authorship:

 

1. substantial contributions to the conception or design of the work or the acquisition, analysis, or interpretation of data;

 

2. drafting the work or revising it critically for important intellectual content;

 

3. final approval of the version to be published; and

 

4. agreement to be accountable for all aspects of the work, including confirming the accuracy or integrity of any part of the work.

 

An AI "author" cannot meet any of these criteria. However, that does not mean that human authors might not use AI software to do their thinking and writing. Of course, we can detect that deception although not perfectly. I am hopeful, in fact, that AI writing detection software will become more sophisticated over time, and as that happens, we will use that software to fullest advantage. At Nursing Research, we take seriously our responsibility to report scientific research findings that advance understanding of health. We can only do this if we are certain the findings come from the highest quality research studies conducted by scientists following the highest standards of integrity.

 

At Nursing Research, we currently have a call for papers on innovative research designed to counteract health misinformation. As noted in an earlier editorial on this topic (Pickler, 2022), misinformation can occur within the context of neglecting what we already know. The new call derives from an interest in encouraging papers about research to better understand how people are exposed to and affected by misinformation and how misinformation may vary across populations. We are also interested in papers reporting on effective strategies to prevent and address health misinformation, such as improved health communication, greater efficacy in making informed decisions about health and healthcare, and increased health literacy. That call, first issued in November 2022, noted that misinformation thrives in the absence of easily accessible, credible information; in the presence of limited or contradictory information; and amid emotionally conveyed and amplified reports. We are concerned about misinformation for a number of reasons, of course, but most significantly because misinformation separates us from the truth, creates situations of distrust, and, worst of all, can negatively affect the health of the public. At Nursing Research, we hope our authors are committed to truth in scientific reporting and will thus improve scientific inquiry through their thoughtful writing about ways to counteract or avoid health misinformation.

 

Innovations such as those provided by AI software are not going away. Rather, the pervasiveness of these tools will only increase. Our responsibility as scientists includes ensuring that we use these tools to best advantage and for the greater good. We need to guide our trainees to use tools that have the potential to increase scientific productivity and progress, and we should encourage their participation in the further development of AI tools to ensure rigorous barriers to misinformation and other misuse. We just need to proceed with the understanding that there is nothing artificial about scientific integrity. In the end, human intelligence needs to ensure the highest ethical conduct and reporting of science.

 

ORCID ID

 

Rita H. Pickler https://orcid.org/0000-0001-9299-5583

 

REFERENCES

 

Chatterjee J., Dethlefs N. (2023). This new conversational AI model can be your friend, philosopher, and guide...and even your worst enemy. Patterns, 4, 100676.

Das N., Das S. (2014). Hiring a professional medical writer: Is it equivalent to ghostwriting? Biochemia Medica, 24, 19-24.

Flanagin A., Bibbins-Domingo K., Berkwits M., Christiansen S. L. (2023). Nonhuman "authors" and implications for the integrity of scientific publication and medical knowledge. JAMA. Advance online publication. https://doi.org/10.1001/jama.2023.1344

Gotzsche P. C., Kassirer J. P., Woolley K. L., Wager E., Jacobs A., Gertel A., Hamilton C. (2009). What should be done to tackle ghostwriting in the medical literature? PLoS Medicine, 6, e23.

Pickler R. H. (2022). Knowledge neglect. Nursing Research, 71, 419-420.

Thorp H. H. (2023). ChatGPT is fun, but not an author. Science, 379, 313.