Keywords

Experiential Learning, Nursing Education, Nursing Students, Patient Simulations, Simulationist, Teaching Methods

 

Authors

  1. de Rosa, Cristina
  2. Frost, Erica
  3. Ziegler, Erin
  4. Spies, Martha

Abstract

Co-facilitation (combining the presence and expertise of clinical faculty and simulationists during all stages of simulation) presents an opportunity to improve student perceptions of effectiveness. Using a retrospective before-and-after comparison, data on students' perceptions were collected from baccalaureate nursing students in clinical courses after each simulation experience. Mean differences in Simulation Effectiveness Tool-Modified scores pre- and post-implementation were compared, as were scores between levels of students. Statistically significant improvements in student-rated simulation effectiveness were found with co-facilitation. The authors recommend future studies expanding this methodology and considering co-facilitation where feasible.

 

Article Content

Simulation in nursing education is no longer considered novel. However, despite much research into best practices, nursing programs continue to use inconsistent approaches to delivering simulation (Fey & Jenkins, 2015). Simulation experiences are evaluated by several metrics, including student perceptions of effectiveness. Inconsistent faculty training, infrequent faculty exposure to simulation, and high turnover among adjunct faculty can contribute to low student perceptions of simulation effectiveness. Furthermore, educators attempting to balance resources and ensure quality simulation experiences must consider costs, staffing, physical space, and training requirements.

 

Kolb's experiential learning theory (ELT) emphasizes that learners need to conceptualize their new knowledge and experiment with new behaviors, making this theory especially suited to simulation (Lee et al., 2020). Evidence supports the use of simulation facilitated by trained, dedicated educators (Hayden et al., 2014; INACSL Standards Committee, 2016). These educators guide students to reflect on their performance and make connections to future clinical applications while following best practices grounded in ELT to ensure that experiential learning occurs. Students perceive simulation to be effective when it is facilitated by trained faculty (Forneris et al., 2015), but many programs, facing the complex challenge of managing resources, lack this capacity. Regulatory guidelines may require approved faculty to conduct simulation experiences and evaluate student performance.

 

Collaboration between simulationists and clinicians during simulation can enhance learning experiences and provide real-world insight. Facilitation is a "continuous process in which a designated person conducts and guides the simulation experience from beginning to end" (Moulton et al., 2017, p. 137). For this study, co-facilitation was defined as a faculty member and a simulationist facilitating the simulation experience together, from prebriefing through debriefing, combining clinical and simulation expertise to promote discussion, provide consistency, and enhance learning. In the traditional model of simulation facilitation, course faculty prebrief and debrief the experience, and the simulationist and faculty jointly run the simulation scenario.

 

A gap in the literature exists surrounding simulation delivery using a co-facilitation model, and little information is available about alternative approaches to delivering simulation and their effect on student perceptions. The purpose of this study was to compare student perceptions of simulation effectiveness under a co-facilitation model versus a traditional model and to determine whether perceptions differ by student level in the curriculum.

 

METHOD

The study used a retrospective before-and-after comparison design. In the pre-intervention facilitation model, full-time and adjunct clinical course faculty prebriefed and debriefed in their areas of expertise with their respective clinical students, and the simulationist and faculty ran the simulation scenarios jointly. The simulation laboratory was staffed with full-time, trained, experienced simulationists who provided simulation across the curriculum by preparing scenarios, operating manikins, providing voice and some cues to students, and training faculty on simulation best practices. Simulationists at this institution were not considered faculty and did not evaluate student performance or teach content.

 

Implementation of the new model consisted of faculty and simulationists co-facilitating the entire simulation experience, from prebriefing through debriefing. Simulation scenarios were standardized, and no other aspects of the simulation experience were changed for this study. This study was approved by the university's institutional review board as exempt from oversight.

 

The Simulation Effectiveness Tool-Modified (SET-M; Leighton et al., 2015) assesses student perceptions of learning and confidence regarding a simulation experience using 19 items rated on a 3-point Likert scale; total scores range from 19 to 57 points. When tested with 1,165 nursing students, the SET-M demonstrated construct validity and acceptable internal consistency for all four subscales (Prebriefing [alpha] = .833, Learning [alpha] = .852, Confidence [alpha] = .913, and Debriefing [alpha] = .908) as well as for the overall instrument ([alpha] = .936; Leighton et al., 2015). Psychometric testing of the SET-M did not assess test-retest reliability (Leighton et al., 2015); in this study, each evaluation was treated as a unique combination of individual student and course simulation experience. Cronbach's alphas for the study sample were .770 pre-implementation and .768 post-implementation.
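
For readers who wish to replicate the instrument's scoring and internal-consistency calculation, the sketch below shows how a total SET-M score and Cronbach's alpha can be computed from a response matrix. It is written in Python, which is not the software used in this study, and the response data are hypothetical.

import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical responses: 6 students x 19 SET-M items, each rated 1-3
rng = np.random.default_rng(seed=1)
responses = rng.integers(1, 4, size=(6, 19))

totals = responses.sum(axis=1)  # each total necessarily falls in 19-57
print("Total SET-M scores:", totals)
print("Cronbach's alpha:", round(cronbach_alpha(responses), 3))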

 

The simulation laboratory was located on a single campus of a multisite, national, three-year baccalaureate nursing program in a large Midwest metropolitan setting with year-round eight-week sessions. Students completed the SET-M after active participation in a required high-fidelity simulation and were categorized into levels based on progression through clinical courses. Beginning students taking fundamentals courses had zero or one previous simulation experience, intermediate students (medical-surgical) had two to five previous simulation experiences, and advanced students (pediatrics, obstetrics, mental health, critical care) had six to 13 previous simulation experiences. A convenience sample of all students in clinical courses was used to collect SET-M scores under the traditional model from September 2016 to December 2016 (n = 236) and after implementation of the interventional model from January 2017 to March 2017 (n = 298). No data on gender, age, or previous experience were collected; data were submitted anonymously and pooled by level in the program.

 

Data were analyzed with the Statistical Package for the Social Sciences, Version 24. Means and standard deviations were calculated for each item and for the total SET-M score. Differences in mean scores were tested with independent t-tests, and one-way analysis of variance was used to test differences in mean scores among the three levels of students. Post hoc testing of significant differences used the Bonferroni procedure, with alpha corrected for multiple comparisons. All inferential tests were two-tailed, and the level of significance was .05.
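
As an illustration only (the analysis itself was run in SPSS), the following Python sketch mirrors the pipeline described above: an independent-samples t-test for pre/post mean differences, a one-way analysis of variance across the three student levels, and Bonferroni-corrected pairwise comparisons. All score arrays are hypothetical.

import numpy as np
from scipy import stats

# Hypothetical total SET-M scores (the actual analysis used SPSS)
pre = np.array([50, 53, 47, 55, 52, 49, 54, 51])
post = np.array([54, 56, 51, 57, 55, 53, 56, 52])

# Independent, two-tailed t-test of pre- vs. post-implementation means
t_stat, p_value = stats.ttest_ind(pre, post)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# One-way ANOVA across the three student levels (hypothetical scores)
beginning = np.array([52, 54, 50, 55])
intermediate = np.array([55, 56, 53, 57])
advanced = np.array([51, 53, 50, 54])
f_stat, p_anova = stats.f_oneway(beginning, intermediate, advanced)
print(f"F = {f_stat:.2f}, p = {p_anova:.3f}")

# Bonferroni post hoc: divide alpha by the number of pairwise comparisons
alpha_corrected = 0.05 / 3
pairs = {"beg-int": (beginning, intermediate),
         "beg-adv": (beginning, advanced),
         "int-adv": (intermediate, advanced)}
for name, (a, b) in pairs.items():
    t, p = stats.ttest_ind(a, b)
    verdict = "significant" if p < alpha_corrected else "not significant"
    print(f"{name}: {verdict} (p = {p:.3f})")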

 

RESULTS

For the entire sample, all mean scores improved post-implementation, indicating a more positive evaluation of the experience during the post-implementation phase. The mean total SET-M score demonstrated a statistically significant increase from 51.9 (SD = 7.8) to 53.7 (SD = 6.1), t = 2.78, p = .006, 95% CI [0.52, 3.02].
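
To show how these statistics fit together, the sketch below recomputes the t value and 95% confidence interval for the mean difference directly from the published summary statistics (means, standard deviations, and group sizes). Because the published summaries are rounded, the recomputed values will not exactly match those reported, which were calculated from the unrounded data.

import numpy as np
from scipy import stats

# Published summary statistics (means and SDs as rounded in the text)
m1, s1, n1 = 51.9, 7.8, 236   # pre-implementation
m2, s2, n2 = 53.7, 6.1, 298   # post-implementation

# Recompute the independent t-test from the summaries alone
t_stat, p_value = stats.ttest_ind_from_stats(m2, s2, n2, m1, s1, n1)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# 95% CI for the mean difference using the pooled standard error
diff = m2 - m1
df = n1 + n2 - 2
sp = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df)
se = sp * np.sqrt(1 / n1 + 1 / n2)
t_crit = stats.t.ppf(0.975, df)
print(f"95% CI: [{diff - t_crit * se:.2f}, {diff + t_crit * se:.2f}]")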

 

Three examples of items with significant changes were as follows: "I am more confident of my nursing assessment skills" (pre: M = 2.62, SD = 0.589; post: M = 2.78, SD = 0.442; t = 3.414, p < .001); "I developed a better understanding of the pathophysiology" (pre: M = 2.64, SD = 0.571; post: M = 2.79, SD = 0.456; t = 3.443, p < .001); and "I developed a better understanding of medications" (pre: M = 2.60, SD = 0.584; post: M = 2.78, SD = 0.442; t = 3.878, p < .001).

 

Data were then analyzed for within-group changes by student level. Pre-implementation data included responses from 50 beginning, 67 intermediate, and 119 advanced students; post-implementation data included responses from 42 beginning, 99 intermediate, and 157 advanced students. All groups demonstrated increases in total SET-M scores (see Figure 1). Beginning and intermediate students demonstrated nonsignificant increases, whereas advanced students increased by 2.605 points (t = 2.6, p < .05). Beginning students demonstrated significant increases for items related to better understanding of pathophysiology (mean increase = 0.196, p < .05) and medications (mean increase = 0.262, p < .05); intermediate students had no statistically significant differences. Thirteen items increased significantly for advanced students: increased confidence attributed to prebriefing (mean increase = 0.099, p < .05) and all 12 statements pertaining to the scenario itself.

  
Figure 1. Change in mean total Simulation Effectiveness Tool-Modified (SET-M) scores from pre- to post-implementation for each level of student.

Post-implementation mean scores were compared across levels using one-way analysis of variance. Between beginning and intermediate students, only one difference was statistically significant: intermediate students reported a greater understanding of medications (p < .005). Compared to advanced students, beginning students felt better prepared to respond to changes in client condition (p < .05) and reported higher scores on all five debriefing statements (p values ranged from .003 to .017). Compared to advanced students, intermediate students demonstrated 12 statistically significantly higher item mean scores (p values ranged from .004 to .047), as well as a higher overall SET-M score, post-implementation; every one of these items related to the scenario itself, and none were prebriefing or debriefing statements. No statistically significant differences were found among groups for the prebriefing questions.

 

DISCUSSION

Overall, students perceived simulation to be more effective with co-facilitation. This finding supports the practice of pairing faculty nursing experts with simulation specialists and is consistent both with the literature demonstrating the importance of structured debriefing by trained facilitators in achieving learning objectives and with Kolb's ELT (Lee et al., 2020), which posits that the facilitator's role is to foster an environment for learning and create experiences that encourage understanding. Students' self-reflective ratings of simulation effectiveness align with the reflection component of ELT. Interestingly, every item assessing student perception of the effectiveness of the simulation scenario itself improved significantly with co-facilitation, whereas none of the overall improvements in the prebriefing or debriefing items were statistically significant. Students may view the prebriefing and debriefing portions of the SET-M as an evaluation of their clinical instructor, with whom they already have a learning relationship prior to attending simulation, and thus may not have perceived a change in learning.

 

Beginning students demonstrated statistically significant increases in perceptions of effectiveness only for items related to understanding of pathophysiology and medications. Beginning students, with little to no prior nursing simulation experience against which to compare facilitation models, may have perceived less impact on their experience. Intermediate students scored items higher than the advanced group but demonstrated no statistically significant changes within their group, potentially because high SET-M scores even at pre-implementation left little room to indicate improvement. Although advanced students scored lower on the SET-M than beginning and intermediate students, likely because of prior experience with simulation, they derived greater benefit from co-facilitation. Based on these findings, co-facilitation techniques should be considered for use during simulation prebriefing and debriefing and, when resources are limited, can be prioritized for the student groups most likely to benefit.

 

This study's retrospective design using anonymous evaluation data meant that changes in individual students' scores as they progressed through the program could not be tracked. The analysis of student groups also did not account for students who repeated courses and their associated simulations, although this number is typically small. In addition, the SET-M restricts students' answers to an integer on a scale of 1 to 3; a greater range of response options would better capture nuanced responses and more fully illustrate changes.

 

CONCLUSION

Co-facilitation presents an innovative approach to improving student perceptions of simulation effectiveness. Future research prospectively evaluating the effects of co-facilitation at other sites and for different populations of undergraduate nursing students is warranted. Further study may also determine whether continual co-facilitation is necessary throughout a degree program to maintain student perceptions of effectiveness over time, or if intervention at the beginner and/or intermediate level produces lasting results.

 

REFERENCES

 

Fey M. K., Jenkins L. S. (2015). Debriefing practices in nursing education programs: Results from a national study. Nursing Education Perspectives, 36(6), 361-366.

 

Forneris S. G., Neal D. O., Tiffany J., Kuehn M. B., Meyer H. M., Blazovich L. M., Holland A. E., Smerillo M. (2015). Enhancing clinical reasoning through simulation debriefing: A multisite study. Nursing Education Perspectives, 36(5), 304-310.

 

Hayden J. K., Smiley R. A., Alexander M., Kardong-Edgren S., Jeffries P. R. (2014). The NCSBN national simulation study: A longitudinal, randomized, controlled study replacing clinical hours with simulation in prelicensure nursing education. Journal of Nursing Regulation, 5(2), S3-S40.

 

INACSL Standards Committee. (2016). INACSL standards of best practice: Simulation℠ facilitation. Clinical Simulation in Nursing, 12, S16-S20.

 

Lee J., Lee H., Kim S., Choi M., Ko I. S., Bae J., Kim S. H. (2020). Debriefing methods and learning outcomes in simulation nursing education: A systematic review and meta-analysis. Nurse Education Today, 87, 104345.

 

Leighton K., Ravert P., Mudra V., Macintosh C. (2015). Updating the Simulation Effectiveness Tool: Item modifications and reevaluation of psychometric properties. Nursing Education Perspectives, 36(5), 317-323.

 

Moulton M. C., Lucas L., Monaghan G., Swoboda S. M. (2017). A CLEAR approach for the novice simulation facilitator. Teaching and Learning in Nursing, 12(2), 136-141.