Authors

  1. Holtschneider, Mary Edel MEd, MPA, BSN, RN-BC, NREMT-P, CPLP
  2. Park, Chan W. MD, FAAEM

Article Content

How do you assess the effectiveness of your simulation or other nursing professional development (NPD) educational offerings? If you are like most of us, you are probably using Kirkpatrick's four levels of training evaluation. Dr. Kirkpatrick originally published his four-level training evaluation model in 1959: Level 1, Reaction; Level 2, Learning; Level 3, Behavior; and Level 4, Results/outcomes. Many NPD practitioners, simulation educators, and others in the broader training and development field rely primarily on Level 1 (reaction evaluations) and/or Level 2 (knowledge gained, as determined by pretests and posttests) to determine the efficacy of their training.

So, are we maximizing our use of Level 1 and Level 2 evaluations? Are we asking our learners and other stakeholders enough probing questions about their satisfaction with our educational sessions to find out what is important to them, rather than relying on standard evaluation forms that ask broad, nonspecific questions? If we are conducting Level 2 evaluations, are we writing effective pretest and posttest questions that can pinpoint knowledge change? If we are using only Level 1 and Level 2 evaluations, what are the barriers to going further? Could we begin documenting and measuring the behavioral impact of our training and thereby reach Kirkpatrick Level 3 evaluations? What if we partnered with colleagues from patient safety, quality, and our own education department to consider longitudinal measurement of our educational impact on learner behavior, attitude toward learning, job satisfaction, retention, and mindfulness? In essence, all of these larger organizational outcomes constitute Level 4 training evaluations.
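
To make the Level 2 arithmetic concrete, here is a minimal sketch of how paired pretest and posttest scores could be compared to pinpoint knowledge change. The learner identifiers, scores, and function name below are hypothetical, not drawn from any particular evaluation instrument:

```python
# Minimal sketch of a Level 2 (learning) measure: paired pretest/posttest
# percent-correct scores for the same learners. All data are hypothetical.

def knowledge_change(pre, post):
    """Return per-learner score gain for learners who took both tests."""
    return {learner: post[learner] - pre[learner]
            for learner in pre.keys() & post.keys()}

# Hypothetical scores from one simulation session.
pretest = {"learner_A": 60.0, "learner_B": 75.0, "learner_C": 55.0}
posttest = {"learner_A": 85.0, "learner_B": 80.0, "learner_C": 90.0}

gains = knowledge_change(pretest, posttest)
mean_gain = sum(gains.values()) / len(gains)
print("Per-learner gains:", gains)
print(f"Mean knowledge change: {mean_gain:+.1f} points")
```

Pairing scores by learner, rather than comparing group averages alone, is what lets the evaluation attribute the change to learning rather than to who happened to take each test.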

In this next series of columns, we will explore answers to questions about the evaluation process itself, including those listed above. We will also address strategies to increase the rigor of Level 1 and Level 2 evaluations. In addition, we will discuss how evaluations can help demonstrate the impact our educational offerings have on organizational metrics such as educational reach, process improvement, workforce engagement, recruitment, and retention.

By all accounts, the Level 4 training evaluation remains a challenge for many of our readers, and for good reason. First, a Kirkpatrick Level 4 evaluation typically requires methodical, longitudinal measurement and assessment of how the training influenced organizational behavior, patient-centered care, and often other clinical outcome measures. Second, it is often very difficult to establish a credible link between the training session and such outcomes. In most hospitals and healthcare settings, many quality, process improvement, and patient-centered initiatives run continuously and in parallel with our training, so it would be difficult to argue that the outcome of interest was achieved by a training session alone. However, what if we as NPD practitioners were to focus on the longitudinal impact of the educational intervention on the attitudes and behaviors of a specific unit or group of people? Isn't this an outcome worthy of a Level 4 evaluation? What if the emotional health of a unit were positively impacted by the training over the course of 6 months? Wouldn't that be a Level 4 evaluation result worth celebrating? We need to see beyond the common paradigm that focuses solely on clinical outcomes or systems-based outcomes to justify a Level 4 training evaluation.
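
As a concrete illustration of that longitudinal framing, the sketch below tracks a hypothetical unit-level emotional-health survey score monthly for 6 months after a training session and compares each month with a pre-training baseline. The baseline value, scale, and monthly means are all invented for illustration:

```python
# Minimal sketch of a longitudinal Level 4 (results) measure: a unit's mean
# score on a 1-5 emotional-health survey, tracked monthly after training.
# The baseline and monthly values are hypothetical.

baseline = 3.1  # mean survey score before the training session

monthly_means = [3.3, 3.4, 3.6, 3.7, 3.8, 3.9]  # months 1-6 after training

for month, score in enumerate(monthly_means, start=1):
    print(f"Month {month}: {score:.1f} ({score - baseline:+.1f} vs. baseline)")
```

A sustained upward trend like this would not by itself prove the training caused the change, given the parallel initiatives noted above, but documenting the trajectory alongside the training timeline is what moves the evaluation from Level 2 into Level 4 territory.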

In the Nursing Professional Development: Scope and Standards of Practice, Third Edition (Harper & Maloney, 2016), Standard 6 on Evaluation states, "The NPD practitioner evaluates progress toward attainment of outcomes" (p. 41). Within this standard, the NPD practitioner is expected to use relevant methods to measure processes and outcomes, involve learners and other stakeholders in the evaluation process, and disseminate evaluation results. Improving our evaluation processes not only meets this standard but also invites learners and other stakeholders to engage more fully in the process, as they can provide more meaningful input to help enhance future simulation education sessions.

As we continue with this series, we invite you to e-mail us at [email protected] and [email protected] with your probing questions about evaluations, along with examples of how you have creatively evaluated your offerings and perhaps demonstrated some form of impact. How have you been able to meet Standard 6 on Evaluation in our Scope and Standards? What have you learned in the process? What advice do you have for others striving to improve their evaluation methods? We look forward to hearing from you and continuing this dialogue.

Reference

Harper, M., & Maloney, P. (2016). Nursing professional development: Scope and standards of practice (3rd ed.). Chicago, IL: Association for Nursing Professional Development.