One of the greatest challenges for healthcare providers at the beginning of the 21st century is how to ensure that patient care is effective and efficient at a time when resources are constrained, population and workforce demographics are changing, and users have ever-increasing expectations of positive healthcare outcomes. In parallel, the evidence base for practice has grown massively in recent years, with approximately 10,000 new randomized controlled trials included in MEDLINE every year and 350,000 trials identified by the Cochrane Collaboration. Despite this, and as Wallin and colleagues indicate, studies have also shown that patients continue to receive treatments that are unnecessary or are potentially harmful. Against this context there has been a drive to constantly improve the quality and availability of evidence to support the implementation, cessation, or continuation of practices introduced to improve health. Clinical guidelines are viewed as an important clinical tool in the quest to promote evidence-based practice.

As Wallin and colleagues describe, the development of clinical guidelines, both nationally and locally, has increased dramatically in recent years. Considerable investment has been made by national, professional, and regional bodies in guideline development in the hope that guidelines will go some way toward improving patient outcomes. Politically, however, less attention has been paid to guideline implementation. Sheldon et al,1 in a study evaluating the use of national clinical guidelines in the United Kingdom's National Health Service, found that the uptake of such guidance was highly variable. One might reasonably speculate that similar evaluations in other developed countries would reveal comparable findings. So why, with all this evidence-based information available, does its uptake remain patchy?

Until recently, the conventional wisdom was that if evidence-based products, such as clinical guidelines, were produced and disseminated to the relevant people, they would automatically be used in practice and decision making. Arguably, it is naïve to suggest that because guidelines exist, their implementation automatically follows. However, traditional and early models of research utilization did carry unacknowledged assumptions of linearity and rationality. More recently, there has been a gradual shift toward recognizing that the process of implementing evidence in practice is more complex, resembling a "contact sport" that requires challenging, negotiating with, and overcoming various boundaries, objects, and players.2,3 Getting evidence into practice requires attention to the nature of the evidence, contextual factors, people, and processes. The PARIHS framework that Wallin and colleagues use to good effect in their article was developed to counterbalance linear approaches to evidence implementation by acknowledging the interplay and interdependency of many factors. The framework proposes that successful implementation depends on the nature of the evidence being used, the quality of the context, and the type of facilitation required to enable a successful change process.4-6

In this article, Wallin et al raise some key points about evidence, context, and facilitation in relation to guideline implementation. First, research evidence is valued differently by different people. This means that when evidence is synthesized, as in a clinical guideline, it cannot be assumed that the recommendations will mean the same thing to all individuals and groups. Guideline implementation strategies will therefore need to incorporate the sharing of views about "the evidence," possible negotiation, and local consensus building to make the guideline relevant and applicable to the patient, to the individual's practice and decision making, and to the practice context. Second, the practice context must be receptive to new ideas and practice recommendations. There is growing awareness in the literature that several factors may make a context more conducive to change, some of which are described by the authors here. The remaining challenge, however, is to create these types of organizations, a particularly large and onerous undertaking given the other political and practical constraints within which healthcare organizations and practitioners operate. Finally, Wallin and colleagues stress the importance of a flexible approach to facilitation. The role of a dedicated project lead has proved critical to the success of implementation in numerous evidence-into-practice projects.7-9 Facilitators can work with individuals and teams to articulate issues concerning the guideline and how it applies to their practice, to enable the development and implementation of strategies that acknowledge and incorporate these factors, and to work on contextual issues. Importantly, as the authors stress, these people may already be part of the organization, in clinical nurse specialist and clinical nurse educator roles, for example.

As Wallin et al state, there is no magic bullet (or, for that matter, magic target) for achieving the successful implementation of guidelines in practice. Critically, this article highlights that guideline implementation is not easy. It requires good planning, skill, and experience; a sound understanding of change management; and the support and engagement of people at many levels of the organization. The reality of the clinical context is messy and complex; therefore, any guideline implementation strategy must be able to deal with this complexity. The evidence base about implementation is still developing, and many ideas and strategies require further testing. The authors, however, have usefully framed some of the issues that require attention, and, as such, this article will be a helpful starting point for those in the business of trying to make guideline implementation a reality.

ACKNOWLEDGMENTS

The authors' work is funded by the Alberta Heritage Foundation for Medical Research (AHFMR), the Canadian Institutes of Health Research (CIHR), the Canadian Health Services Research Foundation (CHSRF), and the Centre for Knowledge Transfer.

References

1. Sheldon TA, Cullum N, Dawson D, et al. What's the evidence that NICE guidance has been implemented? Results from a national evaluation using time series analysis, audit of patients' notes, and interviews. Br Med J. 2004;329:999. [Context Link]

2. Rycroft-Malone J. Getting evidence into practice: a "contact sport" [editorial]. Worldviews Evid Based Nurs. 2005;2(1):1-3. [Context Link]

3. Greenhalgh T, Robert G, Bate P. How to Spread Good Ideas. A Systematic Review of the Literature on Diffusion, Dissemination and Sustainability of Innovations in Health Service Delivery and Organisation. London: National Coordinating Centre for NHS Service Delivery and Organisation. Retrieved November 1, 2004, from http://www.sdo.lshtm.ac.uk. [Context Link]

4. Rycroft-Malone J, Seers K, Titchen A, et al. What counts as evidence in evidence-based practice? J Adv Nurs. 2004;47:81-90. [Context Link]

5. McCormack B, Kitson A, Harvey G, et al. Getting evidence into practice: the meaning of 'context'. J Adv Nurs. 2002;38:94-104. [Context Link]

6. Harvey G, Loftus-Hills A, Rycroft-Malone J, et al. Getting evidence into practice: the role and function of facilitation. J Adv Nurs. 2002;37:577-588. [Context Link]

7. Redfern S, Christian S, Murrells T, Norman I. Evaluation of Change in Practice: South Thames Evidence-based Practice Project (STEP). London: Kings College, University of London; 2000. [Context Link]

8. Dopson S, Gabbay J, Locock L, Chambers D. Evaluation of the PACE Programme: Final Report. Oxford: Oxford Healthcare Management Institute & Wessex Institute for Health Services Management; 1999. [Context Link]

9. Dopson S, FitzGerald L, Ferlie E, Gabbay J, Locock L. No magic targets! Changing clinical practice to become more evidence based. Health Care Manage Rev. 2002;27:35-47. [Context Link]