Authors

  1. Brownson, Ross C. PhD

Article Content

The need for an enhanced focus on evidence-based decision making in public health settings is critical for several reasons. First, we need scientific information on the programs that are most likely to be effective in promoting health.1,2 An array of evidence-based interventions (EBIs) is now available from numerous sources including the Guide to Community Preventive Services.3 Second, to translate science to practice, we need to marry information on EBIs from the peer-reviewed literature with the realities of specific real-world settings.4,5 To do so, we need to better define the processes that lead to evidence-based decision making. These processes can include the so-called "administrative evidence-based practices," which are agency (health department)- and work unit-level structures and activities that are positively associated with performance measures (eg, achieving core public health functions, carrying out EBIs).6 Finally, we need a stronger body of literature on effective approaches for scaling up interventions of proven effectiveness more consistently at the state and local levels. This process often involves extending the reach of an intervention by replicating it in other localities, cities, or states (the so-called "horizontal scale-up").7


The article by DeGroff and colleagues,8 published recently in the Journal of Public Health Management & Practice, illustrates several of the key issues and gaps related to evidence-based decision making in cancer control. In particular, the authors examined the use of EBIs related to breast and cervical cancer early detection. By surveying grantees in the National Breast and Cervical Cancer Early Detection Program, the authors were able to document the use of 5 EBIs. They found that EBIs are being used widely with clients and providers, yet gaps remain related to evaluation and training.


The DeGroff and colleagues study and related literature9-12 illustrate several important implications for practitioners:

* While the literature increasingly shows which EBIs are being implemented, continued attention is needed to how EBIs are being implemented and to the factors needed for scaling up EBIs, particularly in lower-resource settings.

* Data on implementation alone are not sufficient. DeGroff and colleagues found that fewer than half of respondents evaluated their EBIs, suggesting that greater attention to real-world evaluation is needed.13

* As in the DeGroff and colleagues study, nearly all data on the use of EBIs are self-reported and subject to recall bias. The medical record audit is a "gold standard" for validation in health services research.14,15 We need a parallel system for validating data on EBIs in public health settings.

* Early data on the extent of mis-implementation (ie, ending effective programs and policies or continuing ineffective ones)9 show that while cancer control programs tend to fare better than most other programmatic areas, there is room for improvement across all public health topics.


REFERENCES


1. Black BL, Cowens-Alvarado R, Gershman S, Weir HK. Using data to motivate action: the need for high quality, an effective presentation, and an action context for decision-making. Cancer Causes Control. 2005;16(suppl 1):15-25.

2. Brownson RC, Baker EA, Leet TL, Gillespie KN, True WR. Evidence-Based Public Health. 2nd ed. New York, NY: Oxford University Press; 2011.

3. Task Force on Community Preventive Services. Guide to Community Preventive Services. http://www.thecommunityguide.org. Accessed March 6, 2016.

4. Green LW, Glasgow RE. Evaluating the relevance, generalization, and applicability of research: issues in external validation and translation methodology. Eval Health Prof. 2006;29(1):126-153.

5. Kohatsu ND, Robinson JG, Torner JC. Evidence-based public health: an evolving concept. Am J Prev Med. 2004;27(5):417-421.

6. Brownson RC, Allen P, Duggan K, Stamatakis KA, Erwin PC. Fostering more-effective public health by identifying administrative evidence-based practices: a review of the literature. Am J Prev Med. 2012;43(3):309-319.

7. World Health Organization. Practical Guidance for Scaling Up Health Service Innovations. Geneva, Switzerland: World Health Organization; 2009.

8. DeGroff A, Carter A, Kenney K, et al. Using evidence-based interventions to improve cancer screening in the National Breast and Cervical Cancer Early Detection Program. J Public Health Manag Pract. doi:10.1097/PHH.0000000000000369.

9. Brownson RC, Allen P, Jacob RR, et al. Understanding mis-implementation in public health practice. Am J Prev Med. 2015;48(5):543-551.

10. Green LW, Ottoson JM, Garcia C, Hiatt RA. Diffusion theory, and knowledge dissemination, utilization, and integration in public health. Annu Rev Public Health. 2009;30:151-174.

11. Hannon PA, Fernandez ME, Williams RS, et al. Cancer control planners' perceptions and use of evidence-based programs. J Public Health Manag Pract. 2010;16(3):E1-E8.

12. Hannon PA, Maxwell AE, Escoffery C, et al. Colorectal Cancer Control Program grantees' use of evidence-based interventions. Am J Prev Med. 2013;45(5):644-648.

13. Newcomer K, Hatry H, Wholey J, eds. Handbook of Practical Program Evaluation. 4th ed. San Francisco, CA: Jossey-Bass; 2015.

14. Kurland LT, Molgaard CA. The patient record in epidemiology. Sci Am. 1981;245(4):54-63.

15. Peabody JW, Luck J, Glassman P, Dresselhaus TR, Lee M. Comparison of vignettes, standardized patients, and chart abstraction: a prospective validation study of 3 methods for measuring quality. JAMA. 2000;283(13):1715-1722.