Windey, Maryann PhD, MS, MSN, RN-BC



Recently, a colleague recommended a book that I finally got around to reading. The title should have given me a hint as to the content, Training on Trial: How Workplace Learning Must Reinvent Itself to Remain Relevant (Kirkpatrick & Kirkpatrick, 2010). The first chapter begins as a training director is told he is losing six training positions in his department. If you have been a nursing professional development (NPD) practitioner long enough, you have seen how healthcare economics can negatively affect education department budgeting. I experienced the same education downsizing in 2008, when my organization hit some economic hurdles. Since then, I have learned the value of evaluation data and showing a return on investment (ROI).


During economically challenging times, departments that provide educational support and are seen as nonproductive can get cut. It is important for the NPD practitioner to ensure organizational leaders do not see the education department as nonproductive but, instead, as a valuable commodity! One way the NPD practitioner can do this is by ensuring a robust evaluation of expectations, outcomes, and ROI for nurse residencies and fellowship programs. Moreover, Standard 6 of the Nursing Professional Development: Scope & Standards of Practice, 3rd Edition (Harper & Maloney, 2016) addresses evaluation and states that the NPD practitioner must create an evaluation process, synthesize evaluation data, show program outcomes, and disseminate evaluation results.



Deciding how to evaluate a transition program or nurse residency can be daunting. One wants the most comprehensive evaluation, but at the same time, the evaluation must also be valid. Opperman, Liebig, Bowling, Johnson, and Harper (2016) gave an insightful overview in their article on measuring ROI, which also included robust examples of how to conduct cost analysis, benefit-cost ratios, and cost-effectiveness analysis, in addition to how to calculate ROI. Opperman et al. also reviewed Kirkpatrick and Kirkpatrick's (2010) levels of evaluation, Phillips' (2003) ROI, and Paramoure's (2013) key performance metrics. In addition, Warren (2013) illustrates the CIPP Model and the Roberta Straessle Abruzzese Evaluation Model in the Core Curriculum for Nursing Professional Development, 4th Edition. The NPD practitioner can select from multiple evaluation models, or perhaps, a blended approach might be more beneficial. Once an evaluation model is selected, outcome measures can be determined.
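Phillips-style ROI is commonly expressed as net program benefits divided by program costs, and a benefit-cost ratio as total benefits per dollar spent. A minimal sketch of both calculations follows; the dollar figures are illustrative assumptions, not values from the article or the cited studies:

```python
def roi_percent(program_benefits, program_costs):
    """Phillips-style ROI: net benefits as a percentage of program costs."""
    net_benefits = program_benefits - program_costs
    return net_benefits / program_costs * 100


def benefit_cost_ratio(program_benefits, program_costs):
    """Benefit-cost ratio: total benefits returned per dollar of cost."""
    return program_benefits / program_costs


# Illustrative (assumed) numbers: a residency program costing $200,000
# that averts an estimated $350,000 in turnover and vacancy costs.
print(roi_percent(350_000, 200_000))        # 75.0 (percent)
print(benefit_cost_ratio(350_000, 200_000)) # 1.75
```

An ROI above 0% (equivalently, a benefit-cost ratio above 1) indicates the program returned more than it cost, which is the threshold stakeholders typically look for.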



There are various outcome measures that can be reported to executive leadership and stakeholders that convey the value of the nurse residency or fellowship program. For a list of typical residency outcome measures, see Table 1. Selecting multiple key performance metrics will help the NPD practitioner construct the most comprehensive program evaluation. It would be prudent to select one from each level to show satisfaction, learning, application or behavior, organizational results, and ROI. It can be challenging to capture the higher-level impacts, but every effort to do so will fortify the worth of the structured program evaluation that can be reported to leadership.

Table 1. Residency Outcome Measures

Two measures that I have not seen published frequently, but my organization has used, include length of orientation and tracking of nurse recoveries. We began to use a standardized critical thinking assessment tool to help us determine length of orientation in weeks, instead of using a cookie-cutter approach where every medical/surgical nurse resident received the same number of orientation weeks, whether needed or not. We structured a highly individualized development plan for newly licensed nurses based on their initial assessment. For example, a medical/surgical resident could receive anywhere from 9 to 14 weeks of orientation, based on their initial critical thinking assessment, or a pediatric intensive care unit resident could receive 16-24 weeks. This was an immediate cost savings that we were able to quantify. In addition, when our NPD department hired two residency development specialists (RDSs), we made several program quality improvements, and we were able to show savings by decreasing the length of orientation time across all specialties. We were able to do this without negatively affecting competency attainment or turnover rates. We found that less time in the classroom and more time at the bedside improved our outcomes. Quantifying these savings and outcomes for key stakeholders, such as executive leadership, finance, and human resources business partners, gave us the ability to acquire additional RDSs. There is now one RDS at each of our five acute care campuses.
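The orientation-length savings described above can be quantified per cohort as weeks trimmed times the cost of one orientation week. A minimal sketch follows; the 9-14 week range comes from the text, but the cohort size and per-week cost are assumed for illustration:

```python
def orientation_savings(residents, weeks_saved_each, weekly_cost):
    """Estimated cost avoided when individualized plans shorten orientation.

    residents:        number of residents in the cohort
    weeks_saved_each: average weeks trimmed versus a fixed-length orientation
    weekly_cost:      assumed fully loaded cost of one orientation week
                      (resident salary, preceptor time, nonproductive hours)
    """
    return residents * weeks_saved_each * weekly_cost


# Illustrative (assumed) numbers: 20 medical/surgical residents averaging
# 2 weeks less than a fixed 14-week orientation, at $2,500 per week.
print(orientation_savings(20, 2, 2_500))  # 100000
```

Even modest per-resident reductions compound quickly across cohorts and campuses, which is what makes this measure persuasive to finance stakeholders.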


Although we measure and report residency first-year turnover, we also started tracking and reporting "nurse recoveries." These are situations where the resident may be a poor fit for a particular department or nursing unit. It could be that the resident was placed on a unit with too high an acuity level, had a poor preceptor match, struggled to fit into the culture of the unit, or encountered nurse-to-nurse hostility. There are myriad reasons why a competent, newly licensed nurse may not be a good fit for a certain unit. The RDSs work hard to identify these issues early and take action quickly. Often, we can transfer the resident to a different unit or a different campus. The RDSs provide support and mentorship and can usually turn the situation around. Without the RDS intervention, these nurses most likely would have left the organization, and the cost to replace a registered nurse could be as high as $88,000 (Kovner, Brewer, Fatechi, & Jun, 2014). Tracking, attaching a dollar amount, and reporting these specific RDS activities further support the need for RDSs for the residency program.
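Attaching a dollar amount to recoveries can be sketched as replacement cost avoided. The $88,000 upper-bound replacement figure comes from the cited Kovner et al. (2014) estimate; the annual recovery count below is an assumed illustration:

```python
# Upper-bound cost to replace one registered nurse (Kovner et al., 2014).
REPLACEMENT_COST = 88_000


def recovery_savings(recoveries, replacement_cost=REPLACEMENT_COST):
    """Estimated turnover cost avoided by RDS-supported nurse recoveries,
    assuming each recovered resident would otherwise have left."""
    return recoveries * replacement_cost


# Illustrative (assumed): 5 residents retained through unit or campus
# transfers in one year.
print(recovery_savings(5))  # 440000
```

Reporting this figure alongside first-year turnover makes the RDS contribution visible as avoided cost rather than as an education expense.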



It is vital to select a program evaluation model that fits your residency needs and take special care to select the highest-level program outcome measures. The NPD practitioner must determine a comprehensive program evaluation plan and disseminate and communicate results often. It is about capturing, measuring, and tracking outcomes that will support the value of the residency or transition program. I think Kirkpatrick said it best himself: "Trainers face a corporate jury that may or may not formally convene to decide the value of their contribution to the organization versus the expense of their operations" (Kirkpatrick & Kirkpatrick, 2010, p. 5). Our transition programs, fellowships, and residencies are on trial every day, so we need to make sure the juries (leadership, finance, and human resources partners) have the evidence required to support the program.




Harper M. G., Maloney P. (2016). Nursing professional development: Scope & standards of practice (3rd ed.). Chicago, IL: Association for Nursing Professional Development.


Harrison D., Ledbetter C. (2014). Nurse residency programs: Outcome comparisons to best practices. Journal for Nurses in Professional Development, 30(2), 76-82.


Kirkpatrick J., Kirkpatrick W. K. (2010). Training on trial: How workplace learning must reinvent itself to remain relevant. New York, NY: AMACOM.


Kovner C. T., Brewer C. S., Fatechi F., Jun J. (2014). What does nurse turnover rate mean and what is the rate? Policy, Politics & Nursing Practice, 15(3-4), 64-71.


Meyer Bratt M. (2013). Nurse residency program: Best practices for optimizing organizational success. Journal for Nurses in Professional Development, 29(3), 102-110.


Oja K. J. (2013). Financial, performance, and organizational outcomes of a nurse extern program. Journal for Nurses in Professional Development, 29(6), 290-293.


Opperman C., Liebig D., Bowling J., Johnson C. S., Harper M. (2016). Measuring return on investment for professional development activities: Implications for practice. Journal for Nurses in Professional Development, 32(4), 176-184.


Paramoure L. (2013). ROI by design: Unlock training's impact through measurable instructional design. Lexington, KY: Author.


Phillips J. (2003). Return on investment in training and performance improvement programs (2nd ed.). Burlington, MA: Butterworth-Heinemann.


Warren J. I. (2013). Program evaluation and return on investment. In Bruce S. L. (Ed.), Core curriculum for nursing professional development (4th ed., pp. 547-568). Chicago, IL: Association for Nursing Professional Development.