Authors

  1. Harper, Mary G. PhD, RN, NPD-BC
  2. Bodine, Jennifer DNP, FNP-C, NPD-BC, CEN
  3. Monachino, AnneMarie DNP, RN, CPN, CHSE-A, NPD-BC

Abstract

Representatives of three international associations reviewed literature published from 2009 to 2018 to ascertain the effectiveness of simulation use in transition to practice programs for newly licensed registered nurses (NLRNs). A review of nine quantitative studies demonstrated that simulation positively influences NLRN self-perception of skills, competence, readiness for practice, and confidence. However, evidence of objective measures of NLRN competence and the impact of simulation on patient and organizational outcomes was lacking.

 

Article Content

Newly licensed registered nurses (NLRNs) transitioning from academia to the practice setting may experience challenges because of gaps between knowledge acquired in the classroom and application of skills at the bedside, which in turn can result in low morale, high turnover, and patient safety issues (Hickerson et al., 2016). Furthermore, the Advisory Board (2019) posits that novice nurses struggle with transition to practice (TTP) because of increasing patient complexity and a shortage of experienced nurses, largely attributable to retirement, who can provide the expertise needed to develop acceptable levels of competence. The resulting "experience-complexity gap" (Advisory Board, 2019, p. 4) has the potential to compromise patient safety.

 

As a result of both the academic-practice and experience-complexity gaps, TTP programs extending 6-12 months after hire are recommended for all new graduate nurses (Benner et al., 2010; Institute of Medicine, 2010; National Council of State Boards of Nursing, 2008; Spector et al., 2015; The Joint Commission, 2002). Although academic nursing programs have demonstrated the effectiveness of simulation in psychomotor skill development, communication techniques, clinical reasoning, and decision-making (Thomas & Mraz, 2017), the extent and effectiveness of simulation use in TTP programs are unknown.

 

In December 2018, three organizations (the Association for Nursing Professional Development, the Society for Simulation in Healthcare Nursing Section, and the International Nursing Association for Clinical Simulation and Learning [INACSL]) collaborated to explore the use of simulation in TTP programs and to identify best practices and gaps in the literature. Subject matter experts from all three organizations reviewed the literature, appraised the evidence, and made recommendations based on that evidence. See Table 1 for a list of team members who contributed to these efforts. Given the expense of developing and maintaining a simulation program, the task force focused on quantitative research to examine objective measures that could justify simulation's use in TTP programs.

  
TABLE 1. Simulation in Transition to Practice Task Force Members

INQUIRY QUESTION

The guiding question for this review of the literature was: Is simulation use in TTP programs for NLRNs effective in creating positive outcomes for NLRNs, healthcare organizations, and patients?

 

DEFINITIONS

This literature review focuses on the use of simulation in TTP programs. TTP programs are formal programs that support the movement of new nurses within the first 12 months of practice from the role of student to the role of professional nurse and exceed the routine organizational orientation provided to all nursing staff (American Nurses Credentialing Center, 2016; Warren et al., 2018). Simulation includes any "technique that creates a situation or environment to allow persons to experience a representation of a real event for the purpose of practice, evaluation, testing, or to gain understanding of systems or human actions" (Lioce, 2020, p. 44). These simulation techniques comprise the use of high-, medium-, and low-fidelity manikins; task trainers; augmented reality; virtual reality; standardized patients; or a combination of techniques (Harper et al., 2018).

 

INCLUSION AND EXCLUSION CRITERIA

For inclusion in this review, articles must have addressed the use of simulation in a TTP program for new registered nurses (RNs) transitioning from an academic setting to a clinical setting. Only published quantitative research studies in English were considered. Studies focused on simulation in the academic setting, studies using simulation for education outside TTP programs, and unpublished dissertations were excluded. Studies of simulation in TTP programs that included experienced nurses were excluded unless the new RNs' results were reported separately.

 

METHODOLOGY/SAMPLE

Searches were conducted in CINAHL Plus and MEDLINE for articles published in the past 10 years (January 1, 2009, to December 31, 2018). EBSCO was searched using the terms new graduate nurses, novice nurses, nurs*, graduat*, *new nurse, novice*, inexperienc*, *recent, newcomer, entry-level, young*, x-reality, virtual reality, augmented reality, and simulation. A total of 351 articles were located. After abstracts were reviewed and duplicates and clearly ineligible records were removed, 138 full-text articles were assessed. Of these, 57 were not quantitative research, 61 did not meet the inclusion criteria, and 11 were dissertations; their removal left nine articles for inclusion in this review. These studies are described in Table 2.
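As a quick check on the screening arithmetic, the following minimal sketch (in Python) tallies the counts reported above; the counts are taken directly from the text:

    # Screening flow for the literature search (counts from the paragraph above).
    identified = 351          # records located in CINAHL Plus and MEDLINE via EBSCO
    full_text_reviewed = 138  # remaining after abstract review and duplicate removal
    excluded = {
        "not quantitative research": 57,
        "did not meet inclusion criteria": 61,
        "dissertations": 11,
    }
    included = full_text_reviewed - sum(excluded.values())
    assert included == 9  # the nine studies summarized in Table 2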

  
TABLE 2. Quantitative Research on Simulation Use in Transition to Practice Programs With New Graduate Nurses (2009-2018)

REVIEW OF LITERATURE

Many of the quantitative studies measuring the effectiveness of simulation use in NLRN residency programs relied on self-report, researcher-developed instruments, or both. For example, Beyea et al. (2010) developed self-report instruments for global confidence, competence, and readiness for practice using a 10-point visual analog scale; no validity or reliability was reported for the global scale. Self-efficacy was measured using the Nurse Resident's Readiness for Entry Into Practice, a modified version of the Self-Efficacy for Professional Nursing Competencies Instrument, with Cronbach's alpha reliability of .97-.98. The self-report instruments were augmented by a simulation educator, unit educator, and preceptor evaluation of residents' competence using the researcher-developed Structured Simulation Clinical Scenario Evaluation (SSCS), which evaluated "critical and expected behaviors" (p. e173); validity and reliability for this scale were not reported. In their 3-year study of 17 cohorts of nurse residents (n = 260), Beyea et al. found significant increases in all self-reported measures of confidence, competence, readiness for practice, and self-efficacy. Although results of the SSCS were not reported, the length of orientation decreased, and 2-year retention of NLRNs reached "historic" (p. e174) levels, resulting in cost avoidance of $3,542,000 over a 3-year period. Although these results illuminate the value of a nurse residency program using simulation, the influence of the "novel" residency program cannot be separated from the impact of simulation.
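For context on the reliability coefficient reported above, the following minimal sketch (in Python) shows how Cronbach's alpha is computed from an item-score matrix. The simulated data are illustrative assumptions only and are not drawn from Beyea et al.:

    import numpy as np

    def cronbach_alpha(item_scores):
        """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
        k = item_scores.shape[1]
        sum_item_variances = item_scores.var(axis=0, ddof=1).sum()
        total_variance = item_scores.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - sum_item_variances / total_variance)

    # Illustrative example: 30 respondents answering 10 highly intercorrelated items.
    rng = np.random.default_rng(0)
    trait = rng.normal(size=(30, 1))                       # shared underlying trait
    items = trait + rng.normal(scale=0.5, size=(30, 10))   # items = trait + noise
    print(round(cronbach_alpha(items), 2))                 # typically about .97 here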

 

Boling et al. (2016) also used author-developed instruments to investigate the impact of simulation as a teaching modality during a cardiothoracic intensive care unit internship. Using a convenience sample of 10 NLRNs, learning and confidence were assessed before and after simulation. Learning was assessed with a 10-item multiple-choice knowledge test (MCKT) developed by the research team. Experienced cardiothoracic intensive care unit nurses and physicians confirmed face validity; however, no psychometric testing was reported. Participants' confidence was measured using a modified self-efficacy tool based on a general scale previously developed by others; no validity or reliability was reported. In addition, a simulation effectiveness tool (SET), "a validated 13-item questionnaire" (p. 771) with knowledge and confidence subscales, evaluated the nurse interns' perceptions. No citation, reliability, or validity measures were provided for the SET. Although MCKT and modified self-efficacy scores improved significantly postsimulation, no correlation was found between the MCKT and the knowledge domain of the SET. The significant increase in MCKT scores and lack of correlation with the SET may reflect the researcher-developed tool and a bias toward teaching to the test.

 

Rhodes et al. (2016) measured knowledge, confidence, and satisfaction using investigator-developed scales. In a descriptive study with a convenience sample of 93 NLRNs, dependent variables were measured at baseline, immediately following a nurse-only simulation and an interprofessional simulation, and again at 6, 12, and 18 months. Although the researchers provided evidence of validity and reliability, the full psychometric testing typically associated with new research instruments was missing. Although the study was adequately powered at the outset, fewer than 10 participants remained at 18 months. No significant differences were noted in confidence after the interprofessional simulation or in knowledge after either simulation. Confidence increased significantly for the nurse-only simulation from baseline to 18 months, and NLRNs indicated significantly higher satisfaction with the interprofessional simulation than with the nurse-only simulation. The high attrition may account for the lack of significant findings at 6, 12, and 18 months. In addition, clinical experience, as opposed to simulation experience, may account for the increase in confidence at 18 months.

 

Chen et al. (2017) investigated the use of "interactive situated and simulated teaching (ISST)" (p. 11) with novice graduate nurses who had no prior clinical experience. In their quasi-experimental study, a convenience sample of 31 new nurses at one hospital was randomly assigned to either the intervention group (n = 16) or the control group (n = 15). Both groups attended the standard 5-day hospital orientation program. The intervention group subsequently participated in the ISST program, which consisted of six face-to-face sessions over the next 3 months. Before and at the end of the 3-month ISST program, both groups completed an investigator-developed survey that measured perceived competency, stress, and satisfaction with learning. Although face validity was assessed, results of reliability testing were not reported. No significant differences were noted between the intervention and control groups at baseline. At 3 months, however, the intervention group scored significantly higher in overall competency, particularly in the subdimensions of medical knowledge, skill, critical thinking, and physical assessment. No significant difference between groups was found in the communication subdimension. Moreover, the intervention group scored significantly lower on the stress scale and significantly higher on the confidence scale at 3 months and reported higher levels of satisfaction with learning than the control group. These results are unsurprising given that the intervention group received ongoing education following hospital orientation; the results could be attributed to simulation more confidently if the control group had also received additional face-to-face educational opportunities after general hospital orientation.

 

Several of the reviewed studies used existing survey instruments rather than creating new measurement tools. For example, Anderson et al. (2009) performed a mixed-methods pretest-posttest descriptive study to determine the effectiveness of a redesigned nurse residency program that included simulation. Effectiveness was measured using the hospital system's employee engagement survey, the Halfer-Graf Job/Work Environment Nursing Satisfaction Survey, and the Nurse Residency Teaching Strategy Effectiveness Survey, all administered before and after the nurse residency program. In a convenience sample of 90 NLRNs attending three separate TTP programs in a five-hospital system, little change was found in NLRN engagement, with the exception of one group of nurse residents who felt "more positively" (p. 168). In addition, results of the Halfer-Graf survey demonstrated that NLRNs "significantly perceived that they were able to perform their job, identify resources, understand performance expectations, accomplish work tasks, and manage the demands of the job effectively" (p. 168). Although no statistical analysis was provided, 1-year retention increased marginally, and 2-year retention did not change. The lack of statistical detail limits critical appraisal of this study.

 

Maneval et al. (2012) used the Health Sciences Reasoning Test and the Clinical Decision-Making in Nursing Scale to measure the impact of simulation on critical thinking and decision-making in new graduate nurses. The authors used the term "orientation" rather than TTP or residency, but the 10-week duration of the program allowed for its inclusion in this review. Using a quasi-experimental pretest-posttest design, a convenience sample of 26 graduate nurses with no prior work experience was randomly assigned to standard orientation (control group) or orientation incorporating high-fidelity simulation (intervention group). The pretest was administered within the first 2 weeks of orientation, and the posttest was administered upon completion of the 10-week orientation. No reliability was reported for this sample. Although scores on both instruments increased for both groups from pretest to posttest, no significant differences were noted between the two groups on either measure. However, one subscale of the Health Sciences Reasoning Test increased significantly for the total sample from pretest to posttest. No significant differences were noted between new graduates with an associate degree and those with a bachelor's degree. The lack of significance may be attributable to the small sample size, resulting in a Type II error; no power analysis was reported to determine the appropriate sample size for this study.
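Because the absence of an a priori power analysis recurs as a limitation throughout this review, the following minimal sketch (in Python, using the statsmodels library) illustrates the missing step. The effect size, alpha, and power values are illustrative assumptions, not parameters drawn from Maneval et al.:

    from statsmodels.stats.power import TTestIndPower

    # Required sample size per group for an independent-samples t test,
    # assuming a medium effect (Cohen's d = 0.5), alpha = .05, and power = .80.
    n_per_group = TTestIndPower().solve_power(
        effect_size=0.5, alpha=0.05, power=0.80, alternative='two-sided')
    print(round(n_per_group))  # about 64 per group, well above the roughly
                               # 13 per group available in a 26-nurse sample

Under these assumptions, a two-group study of 26 nurses would be badly underpowered, which is consistent with the Type II error concern raised above.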

 

Jung et al. (2017) used a quasi-experimental pretest-posttest design in a pilot study to assess the effect of simulation on new graduates' critical thinking, communication, and competency. A convenience sample of 48 new graduates from four hospitals, each with less than 6 months of experience, was divided into a control group that received "ordinary hospital education" (p. 86) and an intervention group that participated in four simulation scenarios with debriefing. Data were collected from both groups at baseline and 3 months after program completion. The Holistic Nursing Competence Scale, the Critical Thinking Disposition Survey, and the Interpersonal Communication Competence Scale all demonstrated reliability with this sample. Although no significant differences on any of the scales were noted between groups at baseline, the intervention group scored significantly higher on communication skills than the control group at 3 months. The lack of a significant difference in critical thinking mirrors the findings of Maneval et al. (2012). Like Maneval et al., Jung et al. did not report a power analysis, suggesting that the sample size may have been insufficient to detect significant differences.

 

Everett-Thomas et al. (2015) assessed the impact of simulation during a nurse residency program on NLRNs' applied behaviors in the simulation setting. Whereas the studies reported thus far used primarily self-reported measures, Everett-Thomas et al. used more objective measurements. Across 3 years of nurse residency cohorts, 98 NLRNs were randomly assigned to groups of no more than six residents. These groups participated in weekly high-fidelity simulation sessions in which critical patient situations occurred. Sessions were videotaped and scored by one reviewer using a modified simulator session crisis management skills checklist. All simulation teams demonstrated significant improvement on the checklist between the first and fifth weeks. Although this study demonstrated improved performance in the simulation setting, group performance was measured rather than individual performance. In addition, no follow-up was conducted to determine whether the improvements in the simulation laboratory carried over to the patient care setting.

 

Roche et al. (2013) used a quasi-experimental design in a pilot study to compare outcomes between new graduate nurses who completed case studies and those who completed high-fidelity simulations using the same scenarios. They used objective measurements, including clinical performance behaviors, performance reports from managers, and retention data. Of the studies included in this review, this is the only one that objectively measured clinical performance in the patient care setting. A convenience sample of 20 new graduate nurses was randomly assigned to the intervention group (n = 11), which participated in high-fidelity simulation, or the control group (n = 9), which completed case studies. Each group completed one simulation or case study, respectively, per week for 5 weeks. Following orientation, each group participated in a high-fidelity simulation session during which performance was measured using the New Graduate Nurse Learning Behavior Rating Scale. The scale was completed by impartial nursing professional development (NPD) practitioners who checked off observed behaviors; the same practitioners performed and recorded all observations. Although both groups scored similarly on the performance behavior scale, the simulation group scored higher on correcting errors, though the difference did not reach statistical significance. One year later, participant performance was observed in the clinical setting as part of a medication bar code implementation. Participants were unaware that they were being scored on key performance behaviors, such as handwashing and patient identification. No differences were noted between the two groups. In addition, no difference was noted between the groups in their 1-year manager evaluations, and retention for both groups was 100%. The key limitation of this study was the small sample size, which may have resulted in a Type II error.

 

DISCUSSION

This review of quantitative studies evaluating the outcomes of simulation in TTP programs for NLRNs demonstrates a lack of evidence suggesting significant benefits of using simulation as a teaching modality in TTP programs. This lack of evidence results from inadequate rigor: insufficient sample sizes, failure to use established valid and reliable measurement instruments, and heavy reliance on NLRN self-report.

 

Small sample sizes are evident in most of the studies reviewed. Only Rhodes et al. (2016) reported power analyses that reflected adequate sample sizes, and they reported mixed results in self-reported measures. Kim et al. (2018) noted no significant difference between the control (peer learning for handoff report) and intervention (simulation learning for handoff report) groups at baseline, which may be attributable to a Type II error. However, the intervention group scored significantly higher in competence with providing handoff reports at 1 month, suggesting a positive latent effect of simulation. Others, such as Beyea et al. (2010) and Everett-Thomas et al. (2015), reported sample sizes of 260 and 98, respectively, and noted significant findings, suggesting adequate sample sizes. Anderson et al. (2009), on the other hand, indicated a sample size of 90 and referred to the significance of results without providing statistical evidence to substantiate that assertion.

 

In addition to small sample sizes, most studies used author-created surveys and knowledge assessment tests rather than established, psychometrically tested instruments (Beyea et al., 2010; Boling et al., 2016; Chen et al., 2017; Kim et al., 2018; Rhodes et al., 2016). Many authors failed to report the validity and reliability of the measurement tools used (Anderson et al., 2009; Boling et al., 2016; Everett-Thomas et al., 2015; Roche et al., 2013). The absence of established instruments makes it difficult to compare studies and to determine the validity of their results. Furthermore, author-developed knowledge assessment tools may introduce bias related to the alignment of teaching content with the items on the test.

 

Only three studies used objective measurements of performance (Beyea et al., 2010; Everett-Thomas et al., 2015; Roche et al., 2013). Although Beyea et al. (2010) used the SSCS to evaluate NLRN performance, the results of this measure were not reported. Everett-Thomas et al. (2015) used an abbreviated version of the simulator session crisis management skills checklist and found significant, but nonlinear, improvement over a 5-week period; however, they assessed group performance rather than individual performance, limiting conclusions about individual improvement. Finally, Roche et al. (2013) used the New Graduate Nurse Learning Behavior Rating Scale. Two clinical educators evaluated all participants but found no significant difference between the group that completed case studies and the group that completed simulation activities on the same topics. This lack of significance may be due to the small sample size.

 

Although qualitative studies were not included in this review, several of their findings align with those of the aforementioned quantitative studies. Bailey and Mixer (2018), Kaddoura (2010), Lawrence et al. (2018), and Rossler et al. (2018) found that NLRNs who participated in simulation during their TTP program believed simulation increased their professional confidence. These findings are consistent with those of Beyea et al. (2010), Chen et al. (2017), and Rhodes et al. (2016). In addition, Kaddoura (2010), Mowry and Crump (2013), Murphy and Janisse (2017), and Rossler et al. (2018) reported that NLRNs perceived simulation to enhance their abilities, competence, and readiness to practice, themes that agree with the findings of Beyea et al. (2010) and Boling et al. (2016). Finally, qualitative studies showed that new graduate nurses value simulation as a tool for a successful TTP (Bailey & Mixer, 2018; Kaddoura, 2010; Murphy & Janisse, 2017).

 

Findings of this review of literature are consistent with those of Pogue and O'Keefe (2021), who conducted an integrative review of qualitative and quantitative studies from 2005 to 2017 to determine the effect of simulation on new graduate nurses' orientation. They identified a "paucity of research of qualitative and quantitative outcomes of simulation-enhanced orientation" (p. 150).

 

IMPLICATIONS FOR NPD PRACTICE

Synthesis of the available quantitative evidence, shown in Table 3, indicates that simulation use in TTP programs positively influences NLRN self-perception of skills, competence, readiness for practice, and confidence. Evidence of objective measures of NLRN competence and the impact of simulation on patient and organizational outcomes is lacking.

  
TABLE 3. Synthesis of Findings

In the cost-conscious healthcare environment, NPD practitioners are obliged to demonstrate resource stewardship. NPD practitioners should consider using less expensive low- or medium-fidelity simulation in lieu of expensive high-fidelity simulators. This stewardship also includes demonstrating measurable outcomes for all programs, especially those requiring capital expenditures such as high-fidelity simulation equipment. Beyea et al. (2010) reported substantial positive financial outcomes but did not differentiate whether the new TTP program, the simulation, or the combination of the two produced the savings. Regardless, their example of demonstrating organizational impact is commendable and worth emulating.

 

To demonstrate organizational impact, NPD practitioners must carefully collect and maintain data, using evidence-based instruments to measure the outcomes of their practice. The INACSL (n.d.) website offers information about existing skills and competency assessment tools that can be used to measure the effectiveness of simulation. In addition, NPD practitioners can collaborate with quality improvement and risk management departments to ascertain the impact of NPD initiatives on patient outcomes.

 

CONCLUSION

High-quality, large-scale studies are needed to evaluate the effectiveness of simulation use in TTP programs for NLRNs. These studies should include adequate sample sizes, as demonstrated by power analysis, and use valid, reliable, objectively scored instruments to evaluate NLRN performance in the simulation setting as well as in the patient care setting. The boards of directors of the Association for Nursing Professional Development, INACSL, and the Society for Simulation in Healthcare formally support this recommendation.

 

References

 

Advisory Board. (2019). The experience-complexity gap. Advisory Board Nursing Executive Center.

American Nurses Credentialing Center. (2016). Practice transition accreditation program: 2016 application manual. Author.

Anderson T., Linden L., Allen M., Gibbs E. (2009). New graduate RN work satisfaction after completing an interactive nurse residency. The Journal of Nursing Administration, 39(4), 165-169.

Bailey C. A., Mixer S. J. (2018). Clinical simulation experiences of newly licensed registered nurses. Clinical Simulation in Nursing, 15, 65-72. 10.1016/j.ecns.2017.11.006

Benner P., Sutphen M., Leonard V., Day L. (2010). Educating nurses: A call for radical transformation. Jossey-Bass.

Beyea S. C., Slattery M. J., von Reyn L. J. (2010). Outcomes of a simulation-based nurse residency program. Clinical Simulation in Nursing, 6(5), e169-e175. 10.1016/j.ecns.2010.01.005

Boling B., Hardin-Pierce M., Jensen L., Hassan Z. U. (2016). Evaluation of a high-fidelity simulation training program for new cardiothoracic intensive care unit nurses. Seminars in Thoracic and Cardiovascular Surgery, 28(4), 770-775. 10.1053/j.semtcvs.2016.11.001

Chen S. H., Chen S. C., Lee S. C., Chang Y. L., Yeh K. Y. (2017). Impact of interactive situated and simulated teaching program on novice nursing practitioners' clinical competence, confidence, and stress. Nurse Education Today, 55, 11-16. 10.1016/j.nedt.2017.04.025

Everett-Thomas R., Valdes B., Valdes G. R., Shekhter I., Fitzpatrick M., Rosen L. F., Arheart L. F., Birnbach D. J. (2015). Using simulation technology to identify gaps between education and practice among new graduate nurses. The Journal of Continuing Education in Nursing, 46(1), 34-40. 10.3928/00220124-20141122-01

Harper M. G., Gilbert G. E., Gilbert M., Markey L., Anderson K. (2018). Simulation use in acute care hospitals in the United States. Journal for Nurses in Professional Development, 34(5), 242-249.

Hickerson K. A., Taylor L. A., Terhaar M. F. (2016). The preparation-practice gap: An integrative literature review. The Journal of Continuing Education in Nursing, 47(1), 17-23. 10.3928/00220124-20151230-06

Institute of Medicine. (2010). The future of nursing: Leading change, advancing health. http://nationalacademies.org/hmd/reports/2010/the-future-of-nursing-leading-chan

International Nursing Association for Clinical Simulation and Learning. (n.d.). Resources: Repository of instruments used in simulation research. https://www.inacsl.org/resources/repository-of-instruments/

Jung D., Lee S. H., Kang S. J., Kim J. H. (2017). Development and evaluation of a clinical simulation for new graduate nurses: A multi-site pilot study. Nurse Education Today, 49, 84-89. 10.1016/j.nedt.2016.11.010

Kaddoura M. A. (2010). New graduate nurses' perceptions of the effects of clinical simulation on their critical thinking, learning, and confidence. The Journal of Continuing Education in Nursing, 41(11), 506-516. 10.3928/00220124-20100701-02

Kim J. H., Hur M. H., Kim H. Y. (2018). The efficacy of simulation-based and peer-learning handover training for new graduate nurses. Nurse Education Today, 69, 14-19. 10.1016/j.nedt.2018.06.023

Kim J., Neilipovitz D., Cardinal P., Chiu M. (2009). A comparison of global rating scale and checklist scores in the validation of an evaluation tool to assess performance in the resuscitation of critically ill patients during simulated emergencies. Simulation in Healthcare, 4(1), 6-16. 10.1097/SIH.0b013e3181880472

Lawrence K., Hilfinger D. K., Cason M. L. (2018). The influence of simulation experiences on new nurses' clinical judgement. Clinical Simulation in Nursing, 25, 22-27. 10.1016/j.ecns.2018.10.008

Lioce L. (Ed.). (2020). Healthcare simulation dictionary (2nd ed., AHRQ Publication No. 20-0019). Agency for Healthcare Research and Quality. https://www.ssih.org/dictionary

Maneval R., Fowler K. A., Kays J. A., Boyd T. M., Shuey J., Harne-Britner S., Mastrine C. (2012). The effect of high-fidelity patient simulation on the critical thinking and clinical decision-making skills of new graduate nurses. The Journal of Continuing Education in Nursing, 43(3), 125-134. 10.3928/00220124-20111101-02

Mowry M. J., Crump M. D. (2013). Immersion scenarios bridge the education-practice gap for new graduate registered nurses. The Journal of Continuing Education in Nursing, 44(7), 319-325. 10.3928/00220124-20130515-67

Murphy L. J., Janisse L. (2017). Optimizing transition to practice through orientation: A quality improvement initiative. Clinical Simulation in Nursing, 13(11), 583-590. 10.1016/j.ecns.2017.07.007

National Council of State Boards of Nursing. (2008). Regulatory model for transition to practice report. https://www.ncsbn.org/Final_08_reg_model.pdf

Pogue D. T., O'Keefe M. (2021). The effect of simulation-enhanced orientation on graduate nurses: An integrative review. The Journal of Continuing Education in Nursing, 52(3), 150-156. 10.3928/00220124-20210216-10

Rhodes C. A., Grimm D., Kerber K., Bradas C., Halliday B., McClendon S., Medas J., Noeller T. P., McNett M. (2016). Evaluation of nurse-specific and multidisciplinary simulation for nurse residency programs. Clinical Simulation in Nursing, 12(7), 243-250. 10.1016/j.ecns.2016.02.010

Roche J., Schoen D., Kruzel A. (2013). Human patient simulation versus written case studies for new graduate nurses in nursing orientation: A pilot study. Clinical Simulation in Nursing, 9(6), e199-e205. 10.1016/j.ecns.2012.01.004

Rossler K. L., Hardin K., Hernandez-Leveille M., Wright K. (2018). Newly licensed nurses' perceptions on transitioning into hospital practice with simulation-based education. Nurse Education in Practice, 33, 154-158.

Spector N., Blegen M. A., Silvestre J., Barnsteiner J., Lynn M. R., Ulrich B., Fogg L., Alexander M. (2015). Transition to practice study in hospital settings. Journal of Nursing Regulation, 5(4), 24-38. 10.1016/S2155-8256(15)30031-4

The Joint Commission. (2002). Health care at the crossroads: Strategies for addressing the evolving nursing crisis. https://www.jointcommission.org/-/media/deprecated-unorganized/imported-assets/t

Thomas C. M., Mraz M. A. (2017). Exploration into how simulation can effect new graduate transition. Clinical Simulation in Nursing, 13(10), 465-470. 10.1016/j.ecns.2017.05.013

Warren J. I., Perkins S., Greene M. A. (2018). Advancing new nurse graduate education through implementation of statewide, standardized nurse residency programs. Journal of Nursing Regulation, 8(4), 14-21. 10.1016/S2155-8256(17)30177-1