Keywords

evaluation, public health infrastructure, training, workforce


Authors

  1. Potter, Margaret A.
  2. Ley, Christine E.
  3. Fertman, Carl I.
  4. Eggleston, Molly M.
  5. Duman, Senol

Abstract

Evaluating workforce development for public health is a high priority for federal funders, public health agencies, trainees, trainers, and academic researchers. But each of these stakeholders has a different set of interests. Thus, the evolving science of training evaluation in the public health sector is being pulled simultaneously in several directions, each emphasizing different methods, indicators, data-collection instruments, and reporting priorities. We pilot-tested the evaluation of a 30-hour, competency-based training course in a large urban health department. The evaluation processes included a strategic baseline assessment of organizational capacity by the agency; demographic data on trainees as required by the funder; a pre- and posttraining inventory of beliefs and attitudes followed by a posttraining trainee satisfaction survey as required by the trainers and the agency; and a 9-month posttraining follow-up survey and discussion of learning usefulness and organizational impact as desired by the academic researchers and the trainers. Routinely requiring all of these processes in training programs would be overly burdensome, time-consuming, and expensive. This pilot experience offers some important practical lessons for future training evaluations.


Workforce development programs for public health require evaluation to demonstrate their impact. The federal agencies that invest funds in training and preparedness centers are accountable to Congress for their programs' outcomes.1,2 The workers who invest time in training programs, and the agencies that employ them, want to know whether performance capacities are improving.3 The trainers who develop and deliver these programs need to know which curricula, methods, and materials meet the assessed needs of their audiences and which do not. Researchers want to advance the relevant academic fields, including evaluation science, adult education, organizational systems, and public health practice.4 These groups differ in the primary focus of their training evaluations: whereas learners' satisfaction may be of greater concern to trainers and trainees, implementation of skills and impact on organizational performance may hold greater interest for managers and funders, as well as for trainers.


Training program evaluation is an evolving science at best. In the corporate sector, the ultimate goal of training is clear: to enhance productivity and profitability. Even so, training evaluations often stop short of measuring outcomes at the organizational level. According to data from the American Society for Training and Development, 95 percent of trainers assess their learners' satisfaction, but fewer than 20 percent evaluate learners' use of the knowledge, and only 9 percent assess the organizational impact of the training.5


In the public sector, and especially as applied to the public health workforce, the problems and complexities of evaluation are uniquely challenging. Here, profitability per se is not the stated overarching goal; rather, effective service defines success. In public health, the "10 essential services" define performance capacity,6 and the related workforce "competencies" provide a framework for training.7 The accountable public agency focuses on considerations of cost-effectiveness, cost-efficiency, cost-utility, and sustainability. Over the past decade, public health academicians and practitioners have developed needs-assessment tools8,9 and conceptualized training within an organizational systems framework.10-12 However, given the present minimal understanding of public health systems, meaningful performance indicators for the organization-level effects of training in public health have yet to be defined.10


In our ongoing development of a model for strategic, competency-based training in the public health agency, our goal has been to identify and incorporate indicators of the organizational performance improvement that results from individual trainees' participation in essential-service, competency-based courses.13 Here, we describe one of our training programs, a strategically designed 30-hour course, along with its evaluation process. We applied several evaluation approaches simultaneously to explore their processes and to pilot-test the measurement of various outcomes; the evaluation comprised four major data-gathering approaches (detailed in a later section), each designed primarily to serve the evaluation needs of a different interest group or groups. Reflecting on the many lessons learned from this experience, we describe the difficulties that arise from combining several professional and academic approaches and highlight some challenges for training program evaluation.