Authors

  1. Lim, Fidelindo


The launch of OpenAI's ChatGPT (a generative pretrained transformer) in November 2022 sent shivers down the spines of educators. Chatbots are just the newest addition to a long line of innovations transforming education. The general anxiety among faculty in higher education stems from the perception that machine-generated writing will effectively kill writing as we know it or render it obsolete (Frith, 2023). How chatbots might affect nursing education is not yet fully understood, but there is reason to believe that courses with writing assignments (e.g., nursing care plans) could provoke moral panic if the faculty's worst fears come true - that is, if students game the system by passing off auto-generated writing as their own. What is on trial is not the technology, but academic and personal integrity.


My curiosity prompted me to test the digital wisdom of ChatGPT by asking it to write about the future of nursing education. Instantaneously, it produced the following paragraph:

 

The future of nursing education is likely to be shaped by a number of trends and developments. Some of the key changes that may impact nursing education in the coming years include the increasing use of technology, the growing demand for advanced practice nurses, and the changing health care landscape. One of the most significant trends that is likely to shape the future of nursing education is the increasing use of technology.

 

At first glance, we can't help but concede that the paragraph is not bad. It calls to mind something one might read in the ubiquitous discussion forums across all levels of education. A closer reading, however, reveals writing that lacks depth and analysis. Although plausible, the text reads as prosaic, sophomoric, and devoid of a distinct personal voice. It speaks, but it does not sing. The challenge for faculty is how to tell whether a written text is plagiarized (in part or in full) using artificial intelligence (AI). Here, I offer my reflections on the impact of generative AI on nursing education.

 

AMNESIA, FANTASIA, AND INERTIA

In 1999, the education scholar Lee Shulman wrote Taking Learning Seriously - now considered a classic treatise on the professoriate - and introduced what he called "the pathologies of learning": amnesia, fantasia, and inertia (Shulman, 1999). In brief, these barriers to meaningful learning are malfunctions of memory, understanding, and application, respectively. The use of generative AI potentially exacerbates all three pathologies.

 

For Shulman, amnesia is the most ubiquitous of the learning pathologies. Students frequently forget what they have learned in class and from reading, and recall diminishes rapidly with time. The passive nature of using ChatGPT does little to enhance recall because one remembers best what one has created. Furthermore, chatbots do not transform information into knowledge; only thoughtful engagement with the subject of study can lead to meaningful learning. From a pedagogical perspective, faculty can design active learning strategies beyond writing assignments, and the curriculum should have a built-in iterative process connecting key concepts. During lectures, faculty can optimize the Socratic method, using prompt questions that call for the sharing of personal experiences, unaided by machine learning.

 

Fantasia refers to illusory understanding or lingering misconceptions. According to Shulman, students may feel confident that they understand something they do not. Using ChatGPT gives the illusion of having done "research," and using AI software to avoid actual writing erodes accountability (O'Connor, 2023). On paper, students look like they know the topic but falter when asked to synthesize what they really understand. In the health care professions, this can have potentially life-threatening consequences. Because knowledge and critical thinking competencies are cumulative, students who faked their way through prerequisite content will struggle to learn new material (Weimer, 2018).

 

Inertia is the inability to apply knowledge that has been learned. For example, students can articulate the risk factors for pressure injuries but may not know how to turn a patient or remember to change the patient's position. My fear is that students who rely on generative AI to answer reflective questions - questions that seek to make sense of the human condition and to cultivate empathy - will be missing out. Personal narrative, not machine thinking, best informs the formation of empathy (Benner et al., 2010). My hope is that ChatGPT will be like a coach that can help students and faculty translate jargon into actionable insights.

 

One of nursing education's priorities is to create a cadre of new nurses who can make sound clinical judgments - a higher order competency that cannot be mastered through workarounds such as chatbots. The range and depth of our ability to form, refine, and test hypotheses in applying science-based solutions to complex problems are best developed by doing and by careful study, using intuitive and humanistic, not robotic, approaches. Higher order thinking means asking the kinds of questions for which chatbots have no ready-made answer.

 

THE WRITING IS ON THE WALL

Large language models such as ChatGPT raise issues even before students are admitted to nursing school. ChatGPT can generate college admissions essays, personal statements, and scholarship applications. It might even be used to take admissions exams. The onus lies on the admissions office to detect errors and fabrications. Holistic admissions policies for nursing program applicants should look beyond the college essay.

 

For faculty, policing AI-assisted plagiarism adds another layer of burden to an already heavy workload. Teachers of online courses that rely heavily on graded forum entries may need to reconsider how they evaluate student learning. Schools will have to update their academic integrity policies and amend their definitions of plagiarism to include generative AI. Now, more than ever, accurate citation and attribution are essential.

 

It is incumbent on academic leaders to include students in the ongoing discussion of thorny issues associated with AI. There is concern, for example, that students with English as an additional language might be more likely to be suspected of using ChatGPT. The optimistic view is that if the school cultivates a climate where students are seen as co-creators of intellectual commitment and are invested in learning, they will be less likely to reach for a workaround and cheat themselves of the pride gained from honest work.

 

Writing is one of the highest forms of cognitive exercise, a jewel in the crown of the scholarship of teaching. The overemphasis on publication output could prompt some faculty to use ChatGPT, enlisting nonhuman "authors" to produce papers (Flanagin et al., 2023). Because of the potential for misuse of chatbots, publishers may impose more stringent rules and thus discourage potential authors. Peer reviewers might become disillusioned about providing thoughtful critique when they are unsure of a paper's authenticity and true authorship. On the other hand, AI tools can also be used to improve a manuscript's language and to screen for plagiarism.

 

There are promising uses of chatbots in academia. They can be used for student tutoring (O'Connor, 2023), as prompts to stimulate dialogue, and as memory aides. However, in their current incarnation, they are not considered a trusted source of accurate information (Flanagin et al., 2023). The time will come when ChatGPT and its ilk become part of academic writing, just as Google has become second nature for fact finding and fact checking. Until then, it is essential to continue an intelligent conversation with all stakeholders on how AI tools affect teaching and learning. Faculty development programs on digital literacy and on navigating the growing impact of AI in academia are essential. A multidisciplinary task force should be convened to provide guidance on the meaningful use of AI.

 

The National League for Nursing's (NLN) core competencies for nurse educators inspire faculty to engage in scholarship. When Boyer redefined scholarship, he admonished the teacher-scholar to reflect on how knowledge can be applied to consequential problems. Currently, AI-generated text is stoking fears of the unknown, with far-reaching ramifications for academia. There is an expectation that educators will pay attention and engage intellectually in forecasting solutions to identified AI-related teaching-learning issues.

 

Faculty efforts aligned with the scholarship of teaching will, hopefully, generate new knowledge for understanding the burgeoning AI-induced conundrum in academia. Using the NLN Commission for Nursing Education Accreditation standards for nursing education programs as a guide, the curriculum needs to balance academic outcomes (e.g., writing papers) and practical outcomes (e.g., competency to safely manage a patient in real time). A culture of excellence using pedagogical technologies grounded in integrity and accountability will enable faculty and students to perform at their best.

 

Given that teaching is so deeply rooted in human bonds, I am biased to say that humans, not machines, are still responsible for educating future nurses. It is our duty to ensure that AI remains another tool at our disposal - not the other way around. The concluding sentence ChatGPT provided to my "What is the future of nursing education?" prompt was, "These changes will present both challenges and opportunities for nursing students and educators, and will require a flexible and adaptable approach to learning and teaching." I couldn't agree more. ChatGPT can type common sense; it is up to us humans to make meaning and to make it actionable.

 

REFERENCES

 

Benner P., Sutphen M., Leonard V., Day L. (2010). Educating nurses: A call for radical transformation. Jossey-Bass.

 

Flanagin A., Bibbins-Domingo K., Berkwits M., Christiansen S. L. (2023). Nonhuman "authors" and implications for the integrity of scientific publication and medical knowledge. JAMA, 329(8), 637-639. https://doi.org/10.1001/jama.2023.1344

 

Frith K. H. (2023). ChatGPT: Disruptive educational technology. Nursing Education Perspectives, 44(3), 198-199. https://doi.org/10.1097/01.NEP.0000000000001129

 

O'Connor S. (2023). Corrigendum to "Open artificial intelligence platforms in nursing education: Tools for academic progress or abuse?" [Nurse Educ. Pract. 66 (2023) 103537]. Nurse Education in Practice, 67, 103572. https://doi.org/10.1016/j.nepr.2023.103572

 

Shulman L. S. (1999). Taking learning seriously. Change: The Magazine of Higher Learning, 31, 10-17.

 

Weimer M. (2018). A memo to students on cheating. Faculty Focus. https://www.facultyfocus.com/articles/effective-classroom-management/memo-studen