Keywords

Clinical Competence, Clinical Judgment, Graduate Nurses, Simulation

 

Authors

  1. Cantrell, Mary Ann
  2. Mariani, Bette
  3. Lengetti, Evelyn

Abstract

AIM: This two-group feasibility study tested the efficacy of a four-scenario simulation program to improve clinical judgment and clinical competence among graduate nurses.

 

BACKGROUND: Clinical judgment and clinical competence are underdeveloped among new-to-practice nurses.

 

METHOD: Clinical judgment was compared between the intervention group (n = 17) and a control group (n = 26) in the practice setting at two time points.

 

RESULTS: The simulation program had a large effect on the intervention group's clinical judgment (η² = .143) and clinical competence (η² = .153). No statistically significant differences were found for either outcome between the baseline and final scenarios for the intervention group. Clinical judgment in the practice setting did not differ significantly between the intervention and control groups at either measurement time.

 

CONCLUSION: Replication of the study as a randomized controlled trial with a large sample size is warranted.

 

Article Content

Despite the National Council of State Boards of Nursing (NCSBN, 2012) and the American Association of Colleges of Nursing (2008) identifying clinical judgment as essential to providing safe and effective care, clinical judgment is underdeveloped among new-to-practice nurses. In her landmark study, del Bueno (2005) reported that fewer than 36 percent of graduate nurses possessed the competency to practice safely. Clinical preceptors consistently rank clinical judgment as a top-priority clinical skill to be developed in new-to-practice nurses (Nielsen et al., 2016). Lasater et al. (2015) reported that only 13 percent of preceptors believed graduate nurses can correctly identify patient priorities, a part of the process of clinical judgment. This low level of clinical judgment has direct implications for safe patient outcomes: nearly 50 percent of new-to-practice RNs are involved in patient safety events (Kim et al., 2016), and critical patient incidents have resulted from nurses' poor clinical judgment (Levett-Jones et al., 2010).

 

Clinical simulation learning experiences (SLEs) provide an evidence-based approach to enhance clinical judgment and clinical competence and engender safe practice behaviors among prelicensure nursing students (Hayden et al., 2014; Kim et al., 2016; Lee & Oh, 2015; Mancini et al., 2019). However, a gap remains in simulation science regarding how behaviors and skills learned in a simulated environment during prelicensure programs translate to actual clinical practice among new-to-practice nurses. Evidence to address this gap is emerging, yet the findings of published studies conflict. In a recent systematic review assessing the efficacy of SLEs on patient safety outcomes, Lewis et al. (2019) concluded from 12 studies involving 844 acute care nurses that SLEs improved patient safety outcomes as measured by self-report, direct observation, or clinical indicators. In contrast, Jung et al. (2017) conducted a quasi-experimental two-group multisite pilot study involving 24 graduate nurses in South Korea. Although statistically significant differences in communication skills were reported between the intervention and control groups (p = .005), no statistically significant differences were found between the groups in changes in nursing competency or critical thinking dispositions at the three-month follow-up measurement.

 

Chen et al. (2017) conducted an interactive SLE teaching program involving 16 novice nurse practitioners to assess its impact on clinical competence, confidence, and stress. Study participants in the intervention group scored significantly higher on the outcome measures of clinical competency (p = .001) and confidence (p < .05) than the 15 participants in the control group. J. H. Kim et al. (2018) examined the effects of simulation-based versus peer-learning handover training on clinical judgment and clinical competence for 55 new graduate nurses. Nurses who received the SLE intervention (n = 28) showed statistically significant differences in clinical judgment (p = .033) and clinical competence (p = .020) regarding handovers one month after the study in an actual practice setting.

 

STUDY PURPOSE/THEORETICAL FRAMEWORK

The specific aims of this study were to 1) assess the extent to which a program of safety-focused clinical SLEs can improve clinical competency and clinical judgment among graduate nurses, 2) assess the ability of graduate nurses to transfer their demonstrated clinical judgment skills from a simulated environment to a practice setting, and 3) compare clinical judgment in an actual practice setting between graduate nurses who participated in a program of safety-focused SLEs and a control group of graduate nurses who did not participate in these learning experiences.

 

The conceptual definition of clinical competency for this study was the definition used in the NCSBN National Simulation Study. Clinical competency was defined as the ability to "observe and gather information, recognize deviations from expected patterns, prioritize data, make sense of data, maintain a professional response demeanor, provide clear communication, execute effective interventions, and self-reflect for performance improvement within a culture of safety" (Hayden et al., 2011, p. S4). The definition of clinical judgment used in this study was also used in the NCSBN National Simulation Study by Hayden et al. (2011), who drew on the description from the International Nursing Association for Clinical Simulation and Learning: "The art of making a series of decisions in situations, based on various types of knowledge, in a way that allows the individual to recognize salient aspects of or changes in a clinical situation, interpret their meaning, respond appropriately, and reflect on the effectiveness of the intervention."

 

Tanner's integrative model of clinical judgment guided this study (C. A. Tanner, personal communication, November 17, 2015). Tanner (2006) stated that the integrative model of clinical judgment can assist learners in improving clinical reasoning and developing clinical judgment by allowing them to recognize, through reflection, areas of failure in both practice and simulated settings. Reflection-in-action and reflection-on-action constitute a significant component of the model. The overall process of the model includes four aspects: 1) a perceptual grasp of the situation at hand, termed noticing; 2) developing a sufficient understanding of the situation to respond, termed interpreting; 3) deciding on a course of appropriate action, termed responding; and 4) attending to a patient's responses to the nursing action while in the process of acting, termed reflecting. The process concludes by reviewing the outcomes of the action, focusing on the appropriateness of all preceding aspects (e.g., what was noticed, how it was interpreted, and how the nurse responded; Tanner, 2006, p. 208). Each scenario in the clinical simulation program was intended to foster study participants' clinical judgment abilities through these four aspects of Tanner's framework.

 

METHOD

This feasibility study used a partially randomized experimental interrupted time-series design with an intervention group and a control group to test the effectiveness of a safety-focused clinical simulation program. The study was conducted within a large health care center at five affiliated hospitals. A longitudinal design was selected based on the findings of Blodgett et al. (2016), who called for further research that uses longitudinal rather than cross-sectional designs and for SLE studies that employ random assignment of participants. Likewise, Gough et al. (2012) stated that multisite, longitudinal studies are needed to provide evidence of the transferability of skills developed in SLEs to practice settings.

 

Initially, the study was fully randomized. The first two cohorts of nurse residents enrolled were randomly assigned using a random number table; however, because of low recruitment and high attrition in the intervention group, the design had to be changed to a partially randomized study. This change allowed study participants to choose their preferred group allocation and moderately improved recruitment and retention rates. Participants in the intervention group received the SLE program; the control group did not receive any teaching-learning experiences and simply had their clinical judgment measured at the designated measurement times.

 

Sample and Setting

Participants were graduate nurses in a nurse residency program within a large, multisite, not-for-profit suburban health center system located in the mid-Atlantic region of the country. Three nurse residency cohorts were recruited for the study between September 2017 and March 2019. Recruitment took place during the systemwide initial meeting of the nurse residency program. BSN-prepared graduate nurses who had not been previously employed as an RN and were enrolled in a nurse residency program were eligible to participate in the study. Nurses who had been previously hired into an RN position at the study health care system or other institution and/or graduated from an associate degree in nursing program were excluded. Time to participate in the SLE was in addition to participants' scheduled work time and outside nurse residency activities. The final analytic sample size was 43, with 17 participants in the intervention group and 26 in the control group.

 

The setting was a simulation laboratory in a school of nursing located near all five patient care sites within the health care system. There was no formal contractual agreement between the health care system and the school of nursing. Participants' assessments of skill transfer of their clinical judgment from the simulation laboratory to the practice setting took place during scheduled work hours in their assigned clinical care practice areas. Clinical judgment was measured for both the intervention and control groups at two points: before and six months after the SLEs for each cohort. Institutional review board approval was obtained for the study.

 

Simulation Intervention Program

The intervention consisted of four steps: 1) prebriefing session, 2) implementation of a scenario, 3) debriefing session, and 4) self-rating by participants of their videotaped demonstrated performance. These steps were consistently followed for each scenario in the study. The intervention involved four patient care scenarios conducted approximately one month apart: a two-patient medication administration baseline scenario (Scenario 1), a young adult admitted with an exacerbation of Crohn's disease (Scenario 2), a middle-aged adult experiencing an acute gastrointestinal bleeding episode (Scenario 3), and an older adult with a disability admitted to the acute care setting with bilateral pneumonia (Scenario 4). These content areas were chosen to reflect clinical experiences and/or practice behaviors across most adult health acute care practice areas within the health care system where the participants were employed. The baseline scenario (Scenario 1) was conducted to assess the study outcomes among participants in the intervention group prior to any debriefing session, which otherwise would have positively influenced their clinical judgment and clinical competence skills. All scenarios were pilot tested and validated as recommended by Shelestak and Voshall (2014). Each SLE was structured to reflect the 2016 INACSL Standards of Best Practice: Simulation (INACSL Standards Committee, 2016), addressed current Joint Commission National Patient Safety Goals, and incorporated Quality and Safety Education for Nurses teaching-learning activities/strategies through a focus on clinical competency and clinical judgment.

 

The development of the simulation program incorporated four of the six elements needed in the systematic development of a nursing intervention as described by Whittemore and Grey (2001): 1) a well-defined problem; 2) a strong theoretical basis; 3) a research approach to establish the content, strength, and timing of an innovative and programmatic intervention; and 4) refinement of the intervention. To foster intervention fidelity and the adherence and competence of the simulation interventionist, a protocol was adapted using the framework of Ogrodniczuk and Piper (1999). This framework is incorporated into the Interpretive and Supportive Technique Scale, designed to measure the interpretive and supportive features of individuals providing interventions. One simulation interventionist, a Certified Healthcare Simulation Educator®, conducted all SLEs.

 

The study followed the INACSL Standards of Best Practice: Simulation Debriefing (INACSL Standards Committee, 2016) and used the structured Debriefing for Meaningful Learning© (DML) method. This method addresses the relationships between prior experiences; education; reflection; and the development of knowledge, skills, and attitudes necessary to be a nurse, defining each component as fluid, interactive, and important (Dreifuerst, 2015). Dreifuerst (2015) believes these components support development of metacognition and cultivate use of the nursing process, which in turn leads to a stronger conceptual understanding and application of nursing through clinical reasoning. The DML model includes the patient's story and clinical context; nursing process, knowledge, skills, and attitudes; opportunities for thinking-in-action, thinking-on-action, and thinking-beyond-action; and use of a facilitated debriefing process. Thinking-in-action occurs in the moment, as events are unfolding, and is the ability of professionals to think about what they are doing while they are doing it; thinking-on-action occurs retrospectively to identify what did and did not work well, what can be done differently, and how to use these experiences for future planning. Thinking-beyond-action uses assimilation and accommodation of clinical reasoning, clinical decision-making, and the circumstances in the present scenario and extends the thinking and reasoning to similar but different situations that could be encountered in the future (Dreifuerst, 2015). The principal investigator, the co-principal investigators, and the study's consultant and simulation interventionist were trained in use of the DML model and used it in a previous investigation (Mariani et al., 2013).

 

Nurses participated in the SLEs alone, and their performance was video-recorded. Following each debriefing session, participants in the intervention group watched the video recording of their performance and assessed their performance using the identified instruments to measure study outcomes.

 

Instruments

A researcher-developed demographics tool collected participant demographics and past clinical simulation experiences. Clinical competency was measured with the modified version of the Creighton Competency Evaluation Instrument (CCEI®; Hayden et al., 2014); clinical judgment was measured with the Lasater Clinical Judgment Rubric (LCJR; Lasater, 2007). Two trained raters used these instruments to rate study participants' clinical judgment and competency via the video recordings; interrater reliability for the LCJR was 1.0. For the LCJR, Cronbach's alpha was .97; for the CCEI, Cronbach's alpha was .78. These findings indicated that each instrument had a high degree of internal consistency reliability in the measurement of their respective outcomes.
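Internal consistency estimates like those above can be verified with a few lines of code. The following is a minimal Python sketch of Cronbach's alpha computed from an item-score matrix; the ratings shown are hypothetical illustrations, not data from this study.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_participants, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of summed totals
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical LCJR-style ratings: 6 participants x 11 behaviors, each scored 1-4
ratings = [
    [3, 2, 3, 4, 3, 2, 3, 3, 4, 2, 3],
    [2, 2, 3, 3, 2, 2, 2, 3, 3, 2, 2],
    [4, 3, 4, 4, 4, 3, 4, 4, 4, 3, 4],
    [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3],
    [2, 1, 2, 2, 2, 1, 2, 2, 2, 1, 2],
    [4, 4, 4, 4, 3, 4, 4, 4, 4, 4, 4],
]
print(f"alpha = {cronbach_alpha(ratings):.2f}")
```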

 

The modified version of the CCEI was tailored for each scenario and assessed behaviors in four domains: assessment, communication, clinical judgment, and patient safety (Hayden et al., 2014). No items were added or deleted; only descriptions of the expected demonstrated behaviors within each domain were entered for each scenario. These descriptions focused the raters' attention on the behaviors participants were expected to demonstrate in each scenario. Behaviors were scored "0" if they were not demonstrated or "1" if demonstrated competency was noted. The scenarios varied in total possible scores, as noted in Table 1. Because each scenario's CCEI had a different range of possible scores, scores were calculated as a percentage of the total number of items scored so that they could be compared across the four measurement times; percentage scores could range from 0 to 100. The reliability properties of the CCEI are well documented (Hayden et al., 2014; Hercinger et al., 2013).

  
Table 1. Clinical Competency CCEI Scores by Scenario for the Intervention Group
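As an illustration of the percentage normalization described above, the following minimal sketch converts raw 0/1 item totals with different denominators into comparable percentage scores; the item counts are hypothetical, since the actual per-scenario ranges appear in Table 1.

```python
# Hypothetical raw CCEI results per scenario: (items demonstrated, items scored)
raw = {
    "Scenario 1": (14, 16),
    "Scenario 2": (18, 23),
    "Scenario 3": (16, 20),
    "Scenario 4": (19, 22),
}

# Express each total as a percentage so scores are comparable across scenarios
percent = {s: round(100 * hit / total, 1) for s, (hit, total) in raw.items()}
print(percent)  # {'Scenario 1': 87.5, 'Scenario 2': 78.3, ...}
```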

Tanner's four dimensions of clinical judgment (noticing, interpreting, responding, reflecting) form the four subscales of the LCJR (Lasater, 2007). The LCJR is an observational measure that uses a rubric to rate an individual's demonstrated clinical judgment across 11 behaviors: three for noticing, two for interpreting, four for responding, and two for reflecting. Each behavior is scored from 1 to 4 (1 = beginning, 2 = developing, 3 = accomplished, 4 = exemplary), so total scores on the LCJR range from 11 to 44. The interrater reliability properties of the LCJR are reported in several investigations (Jensen, 2013; Manetti, 2018).

 

RESULTS

Demographics

The mean age of study participants was 23.88 years (range, 22 to 32 years). The sample was mostly female (n = 35, 81.4 percent) and relatively homogeneous with regard to race: white/Caucasian (n = 34, 79.1 percent) and Asian/Pacific Islander (n = 4, 9.3 percent). Only one participant was black; two identified as mixed race (n = 2, 4.7 percent); and two did not report their race. The majority of study participants reported previous exposure to SLEs as undergraduate students, with 76.8 percent reporting two to four exposures.

 

Descriptive Statistics for CCEI and LCJR

The descriptive statistics for the modified CCEI and the LCJR for the intervention group are reported in Tables 1 and 2, respectively. Mean scores for the CCEI were comparable across all four scenarios, except for Scenario 1; this two-patient medication administration scenario was relatively simple with regard to clinical competence and involved skills that are routinely assessed and reassessed in most prelicensure programs. Likewise, mean total LCJR scores were comparable across all four measurement times, with the exception of Scenario 2. Overall LCJR mean scores and subscale scores indicate developing skills among the study participants. Mean subscale scores for responding (deciding on a course of appropriate action) were highest across all four measurement points.

  
Table 2. Clinical Judgment LCJR Scores by Scenario for the Intervention Group

The relationship between total scores on the LCJR and the modified CCEI was assessed at each measurement time (after each scenario) for the intervention group. Correlations between the two instruments were high and statistically significant at every measurement time except Scenario 1, where the correlation was low but still significant: Time 1, r = .265, p = .007; Time 2, r = .870, p < .001; Time 3, r = .735, p = .001; and Time 4, r = .622, p = .008.
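These instrument-to-instrument associations are ordinary Pearson correlations on paired totals. A minimal sketch with hypothetical paired scores (not study data):

```python
from scipy.stats import pearsonr

# Hypothetical paired totals at one measurement time:
# each participant's LCJR total (11-44) and CCEI percentage score (0-100)
lcjr_totals  = [24, 28, 31, 26, 33, 29, 27, 30]
ccei_percent = [62.5, 75.0, 81.3, 68.8, 87.5, 75.0, 70.0, 80.0]

r, p = pearsonr(lcjr_totals, ccei_percent)
print(f"r = {r:.3f}, p = {p:.3f}")
```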

 

Effect Size

The overall effect size of the SLE program on study participants' clinical judgment and clinical competency was assessed. A partial eta-squared (η²) of .143 was found for clinical judgment; for clinical competency, η² was .153. These effect sizes are large, indicating that the SLE program had a strong effect on improving both study outcomes.
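For reference, partial eta-squared is derived from the ANOVA sums of squares, and the commonly cited benchmarks (.01 small, .06 medium, .14 large) place both reported values at or near the large threshold:

```latex
\eta_p^2 = \frac{SS_{\text{effect}}}{SS_{\text{effect}} + SS_{\text{error}}}
```

An η² of .143 indicates that roughly 14 percent of the variance in clinical judgment scores, net of other modeled effects, is associated with the SLE program.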

 

AIM 1

A repeated-measures within-subject analysis of variance (RM-ANOVA) was conducted on the mean CCEI scores to assess whether the SLE program improved clinical competency over time for intervention group participants. The RM-ANOVA of CCEI scores across the four measurement times was statistically significant, F(3, 48) = 2.889, p = .045. A post hoc analysis with a Bonferroni adjustment identified a statistically significant difference only between Scenario 1 and Scenario 2. The pairwise comparison between Scenario 1 and Scenario 4 was not statistically significant. Thus, it cannot be concluded that the SLE program increased intervention group participants' demonstrated clinical competency over time.
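The structure of this analysis can be reproduced with standard statistical software. A minimal Python sketch using statsmodels' AnovaRM on hypothetical long-format data (17 participants x 4 scenarios, matching the reported degrees of freedom) follows; the scores are randomly generated placeholders, not study data.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(42)
n_subjects, scenarios = 17, ["S1", "S2", "S3", "S4"]

# Hypothetical CCEI percentage scores in long format: one row per subject per scenario
df = pd.DataFrame({
    "subject":  np.repeat(np.arange(n_subjects), len(scenarios)),
    "scenario": np.tile(scenarios, n_subjects),
    "ccei":     rng.normal(loc=75, scale=10, size=n_subjects * len(scenarios)),
})

# Repeated-measures (within-subject) ANOVA; error df = (17 - 1)(4 - 1) = 48
result = AnovaRM(df, depvar="ccei", subject="subject", within=["scenario"]).fit()
print(result)  # F table for the scenario factor, F(3, 48)
```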

 

A separate RM-ANOVA was conducted to determine whether the SLE program had an effect on intervention group participants' clinical judgment. The RM-ANOVA for total LCJR scores was not statistically significant, F(3, 48) = 1.66, p = .188. However, the Reflecting subscale had a significant F test across the four SLEs (p = .001). The mean for Reflecting in Scenario 2 differed significantly from that in Scenario 3 (p = .039, dz = 0.81) and from that in Scenario 4 (p = .003, dz = 1.22). These findings indicate that participants' reflecting abilities did increase over time.

 

AIM 2

To assess whether demonstrated clinical judgment among intervention group participants was comparable in a simulated environment and a practice setting, a dependent t-test was conducted between the mean LCJR score at Scenario 4 and the mean LCJR score collected in the clinical setting at least six months after the final SLE. Scenario 4 scores were used for this comparison on the assumption that clinical judgment skills would be most developed at the completion of the SLE program and would therefore provide the most accurate measurement of the skill's translation from the simulated to the practice setting. No statistically significant difference was found between these time points, suggesting a comparable degree of clinical judgment among intervention group participants in both settings.
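A minimal sketch of this paired comparison, using scipy's dependent-samples t-test on hypothetical LCJR totals; Cohen's dz (the paired-design effect size reported earlier for the Reflecting contrasts) is computed alongside.

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical LCJR totals: final simulation (Scenario 4) vs. six-month practice rating
simulation = np.array([30, 28, 33, 27, 31, 29, 32, 26])
practice   = np.array([29, 30, 31, 28, 30, 30, 31, 27])

t, p = ttest_rel(simulation, practice)   # dependent (paired) t-test
diff = simulation - practice
dz = diff.mean() / diff.std(ddof=1)      # Cohen's dz for paired designs
print(f"t = {t:.2f}, p = {p:.3f}, dz = {dz:.2f}")
```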

 

AIM 3

An independent t-test was performed between mean scores for the intervention group and the control group to determine if there were differences in clinical judgment measured six months after the completion of the entire SLE program. There were no statistically significant differences between the groups for clinical judgment.
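The corresponding between-group analysis is an independent-samples t-test on the six-month scores. A minimal sketch with hypothetical group data follows; the article does not state whether equal variances were assumed, so Welch's correction is shown as one reasonable choice.

```python
from scipy.stats import ttest_ind

# Hypothetical six-month practice-setting LCJR totals for each group
intervention = [29, 31, 28, 33, 30, 27, 32]
control      = [28, 30, 29, 31, 27, 30, 28, 29]

t, p = ttest_ind(intervention, control, equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.3f}")
```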

 

Study participants' self-ratings were compared with study team members' ratings on the LCJR and the CCEI. The findings of 2 x 4 mixed ANOVAs (study team rater vs. study participants) on the LCJR were statistically significant for every subscale and for the total LCJR. Correlations between study participants' and study team raters' scores were low and statistically nonsignificant for both clinical judgment and clinical competency. The study team member rated participants' clinical judgment significantly lower than participants rated themselves for each SLE. A detailed analysis of these findings is published elsewhere (Cantrell et al., 2020).

 

DISCUSSION

With regard to Aim 1, the analysis did not find a statistically significant change from Scenario 1 (baseline) to Scenario 4 in clinical judgment or clinical competency among intervention study participants. The statistically significant change between Scenario 1 and Scenario 2 was likely due to Scenario 1 being less complex (a two-patient medication scenario); in Scenario 2, by contrast, study participants responded to multiple needs of the simulated patient, which required more clinical judgment and competency skills. The lack of a statistically significant change from Scenario 1 to Scenario 4 is similar to the findings reported by Jung et al. (2017), who had a comparable sample size. However, it conflicts with the findings of Chen et al. (2017), whose sample size was similar to this study's, and with those reported by J. H. Kim et al. (2018), whose sample size was 55.

 

A likely explanation for this study's findings is that the study was underpowered. This explanation is supported by the large estimated effect sizes of the SLE program on graduate nurses' clinical judgment and clinical competence. The effect sizes realized in this study are similar to those reported in the meta-analyses by Lee and Oh (2015) and Kim et al. (2016). Effect size estimates for these outcomes among new-to-practice graduate nurses are scant in the published literature; consequently, the findings of this study add important and underreported data to the simulation literature.

 

Aim 2 of the study examined whether transfer of clinical judgment skills from a simulated setting to a practice setting occurred among intervention group participants. Although the findings do support comparable degrees of clinical judgment between the two settings, this conclusion must be drawn with great caution. Of concern was the inability to formally assess the interrater reliability of the clinical preceptors who completed the LCJR scores for participants in both groups.

 

The lack of statistical significance between the intervention group and the control group for clinical judgment in the practice setting, which addressed Aim 3, may also be a result of the small sample size. Alternatively, actual practice experiences that required the application of clinical judgment and clinical competency skills may have influenced this finding.

 

The findings comparing study participants' self-ratings with those of study team members are consistent with previous studies (Adamson et al., 2013; Fenske et al., 2013; Jensen, 2013). The evidence is consistent that expert raters evaluate learners' demonstrated skill level lower than learners rate themselves. Although self-assessment is an important component of professional development and lifelong learning, this discrepancy is an important issue across all levels of nursing education. Post-SLE debriefing that includes a comparison of ratings may provide real-time feedback on participant performance and serve as remediation for errors in practice.

 

STUDY STRENGTHS AND LIMITATIONS

Strengths of the study include the strong psychometric properties of the LCJR and the CCEI, the interrater reliability estimate, and the collection of actual demonstrated abilities for the study's outcomes rather than participant self-ratings alone. The large effect size estimates of the SLE program on the study outcomes provide useful data for future research. The measurement of outcomes in a clinical setting to assess the transfer of skills from a simulated to an actual setting reflects a research priority of INACSL (Franklin & Luctkar-Flude, 2020) and the National League for Nursing (2020).

 

Major limitations of the study resulted from modifications to the study design based on competing priorities within the partnering health care system. A detailed account of these limitations is published elsewhere (Cantrell et al., 2020). Of significance was the need to change the design from a true experimental study to a partially randomized study, which introduced the internal validity threat of selection bias. Despite an incentive of $30 per session for those who consented to participate in the intervention group, recruitment was low and attrition was high. A total of eight study participants who enrolled and were assigned to the intervention group did not return emails or phone calls to schedule their simulations or resigned from the health system; these participants were dropped from the study. Five participants in the intervention group expressed concern about completing the study in an offsite location; to keep them enrolled, they were offered the choice to be placed in the control group. Because enrollment remained low, nurse residents at the third recruitment time were offered the choice of whether to be in the intervention or control group.

 

Despite the large effect size estimates of the SLE program for both study outcomes, the sample size for the intervention group caused the study to be underpowered to produce statistically significant findings. Both the low recruitment rate and high rate of attrition in the intervention group were study limitations. Finally, there were no interrater reliability estimates for use of the LCJR among the preceptors who conducted the assessments. Having study team raters do these ratings would have provided more reliable data. In addition, clinical competence was not measured in actual practice, which would have provided important data about the transfer of learned skills to the practice setting.

 

CONCLUSION

A program of SLE appears to have efficacy in developing clinical judgment and clinical competence among graduate nurses. Clinical judgment and clinical competency skills appear to have a strong positive relationship and likely develop in a systematic fashion. The LCJR and the CCEI both have strong internal consistency reliability in their use among graduate nurses. Given the large effect size estimates of the simulation program on this study's outcomes, the study should be replicated using a randomized controlled trial design with a large sample size to measure clinical judgment and clinical competence and assessment of skill transfer from the simulated to practice setting.

 

REFERENCES

 

Adamson K. A., Gubrud P., Sideras S., Lasater K. (2013). Assessing the reliability, validity, and use of the Lasater Clinical Judgment Rubric: Three approaches. Journal of Nursing Education, 52(2), 66-73.

American Association of Colleges of Nursing. (2008). The essentials of baccalaureate education for professional nursing practice. https://www.aacnnursing.org/portals/42/publications/baccessentials08.pdf

Blodgett T. J., Blodgett N. P., Bleza S. (2016). Simultaneous multiple patient simulation in undergraduate nursing education: A focused literature review. Clinical Simulation in Nursing, 12, 346-355.

Cantrell M. A., Mariani B., Lengetti E. (2020). The realities of collaboration: An academic and practice partnership in simulation education with nurse residents. Journal for Nurses in Professional Development, 35(6), 345-348.

Chen S. H., Chen S. C., Lee S. C., Chang Y. L., Yeh K. Y. (2017). Impact of interactive situated and simulated teaching program on novice nursing practitioners' clinical competence, confidence, and stress. Nurse Education Today, 55, 11-16.

del Bueno D. (2005). A crisis in critical thinking. Nursing Education Perspectives, 26(5), 278-282.

Dreifuerst K. T. (2015). Getting started with debriefing for meaningful learning. Clinical Simulation in Nursing, 11(5), 268-275. https://doi.org/10.1016/j.ecns.2015.01.005

Fenske C. L., Harris M. A., Aebersold M. L., Hartman L. S. (2013). Perception versus reality: A comparative study of the clinical judgment skills of nurses during a simulated activity. The Journal of Continuing Education in Nursing, 44(9), 399-405.

Franklin A., Luctkar-Flude M. (2020). 2020 to 2023 research priorities advance INACSL core values. Clinical Simulation in Nursing, 47, 82-83.

Gough S., Hellaby M., Jones N., MacKinnon R. (2012). A review of undergraduate interprofessional simulation-based education (IPSE). Collegian, 19(3), 153-170.

Hayden J. K., Jeffries P. R., Kardong-Edgren S., Spector N. (2011). The national simulation study: Evaluating simulated clinical experiences in nursing education [Unpublished research protocol, National Council of State Boards of Nursing].

Hayden J. K., Smiley R. A., Alexander M., Kardong-Edgren S., Jeffries P. R. (2014). The NCSBN national simulation study: A longitudinal, randomized, controlled study replacing clinical hours with simulation in prelicensure nursing education. Journal of Nursing Regulation, 5(2S), S3-S40.

Hercinger M., Manz J., Parsons M. (2013). Creighton simulation evaluation model. https://nursing.creighton.edu/academics/competency-evaluation-instrument

INACSL Standards Committee. (2016). INACSL standards of best practice: Simulation℠. Participant evaluation. Clinical Simulation in Nursing, 12, S26-S29. https://www.nursingsimulation.org/article/S1876-1399(16)30130-X/pdf

Jensen R. (2013). Clinical reasoning during simulation: Comparison of student and faculty ratings. Nurse Education in Practice, 13, 23-28.

Jung D., Lee S. H., Kim J. H. (2017). Development and evaluation of a clinical simulation for new graduate nurses: A multi-site pilot study. Nurse Education Today, 49, 84-89.

Kim J. H., Hur M. H., Kim H. Y. (2018). The efficacy of simulation-based and peer-learning handover training for new graduates. Nurse Education Today, 69, 14-19.

Kim M. Y., Kim M. Y., Kang S. W. (2016). A survey and multilevel analysis of nursing unit tenure diversity and medication errors. Journal of Nursing Management, 24, 634-645.

Lasater K. (2007). Clinical judgment development: Using simulation to create an assessment rubric. Journal of Nursing Education, 46(11), 496-503.

Lasater K., Nielsen A. E., Stock M., Ostrogorsky T. L. (2015). Evaluating the clinical judgment of newly hired staff nurses. The Journal of Continuing Education in Nursing, 46(12), 563-571.

Lee J., Oh P. J. (2015). Effects of the use of high-fidelity human simulation in nursing education: A meta-analysis. Journal of Nursing Education, 54(9), 501-507.

Levett-Jones T., Hoffman K., Dempsey J., Jeong S. Y. S., Noble D., Norton C. A., Roche J., Hickey N. (2010). The 'five rights' of clinical reasoning: An educational model to enhance nursing students' ability to identify and manage clinically 'at risk' patients. Nurse Education Today, 30(6), 515-520.

Lewis K. A., Ricks T. N., Rowin A., Chip N., Goldstein L., McElvogue C. (2019). Does simulation training for acute care nurses improve patient safety outcomes: A systematic review to inform evidence-based practice. Worldviews on Evidence-Based Nursing, 16(5), 389-396.

Mancini M. E., LeFlore J. L., Cipher D. J. (2019). Simulation and clinical competency in undergraduate nursing programs: A multisite prospective study. Journal of Nursing Education, 58(10), 561-568.

Manetti W. G. (2018). Evaluating the clinical judgment of prelicensure nursing students in the clinical setting. Nurse Educator, 43(5), 272-276.

Mariani B., Cantrell M. A., Meakim C., Prieto P., Dreifuerst K. T. (2013). Structured debriefing and students' clinical judgment abilities in simulation. Clinical Simulation in Nursing, e1-e9.

National Council of State Boards of Nursing. (2012, November 10). Section III 2012 NCSBN annual meeting: Report of the model act and rules committee. https://www.ncsbn.org/AttachmentA.pdf

National League for Nursing. (2020). NLN research priorities in nursing education (2020-2023). http://www.nln.org/docs/default-source/Research-Grants/nln-research-priorities-i

Nielsen A., Lasater K., Stock M. (2016). A framework to support preceptors' evaluation and development of new nurses' clinical judgment. Nurse Education in Practice, 19, 84-90.

Ogrodniczuk J. S., Piper W. E. (1999). Measuring therapist technique in psychodynamic psychotherapies: Development and use of a new scale. Journal of Psychotherapy Practice and Research, 8(2), 142-154.

Shelestak D., Voshall B. (2014). Examining validity, fidelity, and reliability of human patient simulation. Clinical Simulation in Nursing, 10(5), e257-e260.

Tanner C. A. (2006). Thinking like a nurse: A research-based model of clinical judgment in nursing. Journal of Nursing Education, 45(6), 204-211.

Whittemore R., Grey M. (2001). The systematic development of nursing interventions. Journal of Nursing Scholarship, 34(2), 115-120.