Evidence-Based Practice Step by Step

Series Editors

Bernadette Mazurek Melnyk, PhD, RN, CPNP/PMHNP, FNAP, FAAN
Ellen Fineout-Overholt, PhD, RN, FNAP, FAAN

Publisher: Anne Dabrow Woods, MSN, RN, CRNP, ANP-BC
Editor-in-Chief: Maureen Shawn Kennedy, MA, RN
Clinical Managing Editor: Karen Roush, MS, RN, FNP-C
Senior Editors: Sylvia Foley, Jacob Molyneux
Editor: Amy M. Collins
Associate Editor: Alison Bulman
Senior Editorial Coordinator: Michael Fergenson
Evidence-Based Practice, Step by Step series editors: Bernadette Mazurek Melnyk, PhD, RN, CPNP/PMHNP, FNAP, FAAN, and Ellen Fineout-Overholt, PhD, RN, FNAP, FAAN
Production Director: Leslie Caruso
Managing Editor, Production: Erika Fedell
Creative Director: Larry Pezzato

Copyright © 2012 Lippincott Williams & Wilkins, a Wolters Kluwer business.

Two Commerce Square
2001 Market Street
Philadelphia, PA 19103

ISBN 978-1-4698-1328-8

All rights reserved. This book is protected by copyright. No part of this book may be reproduced or transmitted in any form or by any means, including as photocopies or scanned-in or other electronic copies, or utilized by any information storage and retrieval system without written permission from the copyright owner, except for brief quotations embodied in critical articles and reviews. Materials appearing in this book prepared by individuals as part of their official duties as U.S. government employees are not covered by the above-mentioned copyright. To request permission, please contact Lippincott Williams & Wilkins at Two Commerce Square, 2001 Market Street, Philadelphia, PA 19103, via e-mail at [email protected], or via website at lww.com (products and services).

DISCLAIMER

Care has been taken to confirm the accuracy of the information presented and to describe generally accepted practices. However, the authors, editors, and publisher are not responsible for errors or omissions or for any consequences from application of the information in this book and make no warranty, expressed or implied, with respect to the currency, completeness, or accuracy of the contents of the publication. Application of this information in a particular situation remains the professional responsibility of the practitioner.

To purchase additional copies of this book, please visit Lippincott's NursingCenter.com or call our customer service department at (800) 638-3030 or fax orders to (301) 223-2320. International customers should call (301) 223-2300.

Visit Lippincott Williams & Wilkins on the Internet: http://www.lww.com. Lippincott Williams & Wilkins customer service representatives are available from 8:30 am to 6:00 pm, EST.

CONTENTS

Part I: Developing and Searching the Clinical Question

  1 Igniting a Spirit of Inquiry: An Essential Foundation for Evidence-Based Practice

  2 The Seven Steps of Evidence-Based Practice

  3 Asking the Clinical Question: A Key Step in Evidence-Based Practice

  4 Searching for the Evidence

Part II: Critically Appraising the Evidence

  5 Critical Appraisal of the Evidence: Part I

  6 Critical Appraisal of the Evidence: Part II

  7 Critical Appraisal of the Evidence: Part III

Part III: Implementing the Evidence

  8 Following the Evidence: Planning for Sustainable Change

  9 Implementing an Evidence-Based Practice Change

10 Rolling Out the Rapid Response Team

Part IV: Disseminating the Evidence and Sustaining the Change

11 Evaluating and Disseminating the Impact of an Evidence-Based Intervention: Show and Tell

12 Sustaining Evidence-Based Practice Through Organizational Policies and an Innovative Model

Mission Statement: The leading voice for the nursing profession since 1900, AJN promotes excellence in nursing and health care through the dissemination of evidence-based, peer-reviewed clinical information and original research, discussion of relevant and controversial professional issues, adherence to standards of journalistic integrity and excellence, and promotion of nursing perspectives to the health care community and the public.

Part I: Developing and Searching the Clinical Question

By Bernadette Mazurek Melnyk, PhD, RN, CPNP/PMHNP, FNAP, FAAN, Ellen Fineout-Overholt, PhD, RN, FNAP, FAAN, Susan B. Stillwell, DNP, RN, CNE, and Kathleen M. Williamson, PhD, RN

Igniting a Spirit of Inquiry: An Essential Foundation for Evidence-Based Practice

How nurses can build the knowledge and skills they need to implement EBP.

This is the first article in a new series from the Arizona State University College of Nursing and Health Innovation's Center for the Advancement of Evidence-Based Practice. Evidence-based practice (EBP) is a problem-solving approach to the delivery of health care that integrates the best evidence from studies and patient care data with clinician expertise and patient preferences and values. When delivered in a context of caring and in a supportive organizational culture, the highest quality of care and best patient outcomes can be achieved. The purpose of this new series is to give nurses the knowledge and skills they need to implement EBP consistently, one step at a time.

Do you ever wonder why nurses engage in practices that aren't supported by evidence, while failing to implement practices substantiated by ample evidence? In the past, nurses changed hospitalized patients' IV dressings daily, even though no solid evidence supported this practice. When clinical trials finally explored how often to change IV dressings, results indicated that daily changes led to higher rates of phlebitis than did less frequent changes.1 In many hospital EDs across the country, children with asthma are treated with albuterol delivered with a nebulizer, even though substantial evidence shows that when albuterol is delivered with a metered-dose inhaler plus a spacer, children spend less time in the ED and have fewer adverse effects.2 Nurses even disrupt patients' sleep, which is important for restorative healing, to document blood pressure and pulse rate because it's hospital policy to take vital signs every two or four hours, even though no evidence supports that doing so improves the identification of potential complications. In fact, clinicians often follow outdated policies and procedures without questioning their current relevance or accuracy, or the evidence for them.

When a spirit of inquiry—an ongoing curiosity about the best evidence to guide clinical decision making—and a culture that supports it are lacking, clinicians are unlikely to embrace evidence-based practice (EBP). Every day, nurses across the care continuum perform a multitude of interventions (for example, administering medication, positioning, suctioning) that should stimulate questions about the evidence supporting their use. When a nurse possesses a spirit of inquiry within a supportive EBP culture, she or he can routinely ask questions about clinical practice while care is being delivered. For example, in patients with endotracheal tubes, how does use of saline with suctioning compared with suctioning without saline affect oxygen saturation? In patients with head injury, how does elevating the head of the bed compared with keeping a patient in a supine position affect intracranial pressure? In postoperative surgical patients, how does the use of music compared with no use of music affect the frequency of pain medication administration?

The Institute of Medicine has set a goal that by 2020, 90% of all health care decisions in the United States will be evidence based,3 but the majority of nurses are still not consistently implementing EBP in their clinical settings.4 To foster outcomes-driven health care in which decisions are based on evidence, providers and health care systems need a comprehensive approach to ensure that their results are measured.5 Without EBP, patients don't receive the highest quality of care, health outcomes are seriously jeopardized, and health care costs soar.6 Findings from recent studies also indicate that when nurses and other health care providers engage in EBP, they experience greater autonomy in their practices and a higher level of job satisfaction.7 At a time when this country is facing the most serious nursing shortage in its history, empowering nurses to routinely engage in EBP may lead to less turnover and lower vacancy rates, in addition to improving the quality of health care and patient outcomes.

To accelerate the use of EBP by nurses and other health care providers, some insurers have instituted pay-for-performance programs that offer clinicians incentives to follow evidence-based guidelines. And Medicare no longer reimburses hospitals for treating preventable hospital-acquired injuries or infections (such as falls, pressure ulcers, or ventilator-associated pneumonia). Although these measures should improve the overall quality of care in our hospitals, it's well known that extrinsic motivators are typically no more successful than intrinsic motivators in facilitating a change in behavior. Therefore, for EBP to accelerate and thrive in the U.S. health care system, nurses must have

• a never-ending spirit of inquiry and consistently question current clinical practices.

• strong beliefs in the value of EBP.

• knowledge of and skills in EBP along with the confidence to use it.

• a commitment to deliver the highest quality evidence-based care to patients and their families.

In addition, health care institutions must sustain a culture that embraces EBP, including providing clinicians the support and tools they need to engage in evidence-based care.

EBP is a problem-solving approach to the delivery of health care that integrates the best evidence from well-designed studies and patient care data, and combines it with patient preferences and values and nurse expertise.8, 9 However, there's no magic formula for what percentage of a clinical decision should be based on evidence or patient preferences or nurse expertise. The weight given to each of these three EBP components varies according to the clinical situation. For example, evidence-based guidelines might indicate that a young child with an ear infection receive amoxicillin and clavulanate (Augmentin) if the infection hasn't resolved with amoxicillin. However, if the child dislikes the taste and it's likely that the medication won't be taken, patient preference should outweigh the best practice guideline and an alternative antibiotic should be prescribed.


Figure 1. The EBP Paradigm: the merging of science and art. EBP within a context of caring and an EBP culture results in the highest quality of health care and patient outcomes. © Melnyk and Fineout-Overholt, 2003.

Although EBP may be referred to as evidence-based medicine, evidence-based nursing, or evidence-based physical therapy within various disciplines, we advocate referring to all of these as evidence-based practice, in order to stimulate transdisciplinary evidence-based care and avoid the specialized terminology that can isolate the various health professions.

When nurses implement EBP within a context of caring and a supportive organizational culture, the highest quality of care is delivered and the best patient, provider, and system outcomes are achieved (see Figure 1).10 Despite outcomes being substantially better when patients receive evidence-based care, nurses and other health care providers often cite barriers that prevent its delivery, including10, 11

• inadequate EBP knowledge and skills.

• a lack of EBP mentors to work with providers at the point of care.

• inadequate resources and support from higher administration.

• insufficient time, especially when there are demanding patient caseloads and staffing shortages.

Conversely, a number of factors facilitate the implementation of EBP, including8, 12, 13

Questions that Spark a Spirit of Inquiry

Who can I seek out to assist me in enhancing my evidence-based practice (EBP) knowledge and skills and serve as my EBP mentor?

Which of my practices are currently evidence based and which don't have any evidence to support them?

When is the best time to question my current clinical practices and with whom?

Where can I find the best evidence to answer my clinical questions?

Why am I doing what I do with my patients?

How can I become more skilled in EBP and mentor others to implement evidence-based care?

• EBP knowledge and skills.

• belief in the value of EBP and the ability to implement it.

• a culture that supports EBP and provides the necessary tools to sustain evidence-based care (for example, access to computer databases at the point of care and time to search for evidence).

• EBP mentors (advanced practice clinicians with expertise in EBP and organizational and individual behavior-change strategies) who work directly with clinicians at the point of care in implementing EBP.

Once nurses gain EBP knowledge and skills, they realize it's not only feasible within the context of their practice setting, but that it reignites their passion for their roles and assists them in delivering a higher quality of care with improved patient outcomes. We use the term Step Zero to refer to the continual cultivation of a spirit of inquiry as an essential foundation for EBP, and we recommend the routine use of a standard set of questions in practice (see Questions that Spark a Spirit of Inquiry) and the use of the strategies in Strategies for Building a Spirit of Inquiry.

Remember, EBP starts with a spirit of inquiry (Step Zero). As you embark on this wonderful journey to promote the highest quality of care and the best outcomes for your patients, reflect upon Step Zero, the EBP paradigm, and how you practice care. The Case Scenario for EBP: Rapid Response Teams will provide a context for learning EBP throughout the next several columns. We'll use this case in each column to focus on successive steps of the EBP process. In the meantime, we encourage you to answer the Questions that Spark a Spirit of Inquiry and implement two Strategies for Building a Spirit of Inquiry in order to start your own EBP journey and begin building a spirit of inquiry with your colleagues at work.

Strategies for Building a Spirit of Inquiry

Write “WHY?” on a poster and place it in the staff lounge or restroom to inspire questions from nurses about why they're engaging in certain practices with their patients. Gather the responses in an answer box. After one month, take the responses and arrange them according to common themes. Address the themes in a staff meeting.

Review and answer the Questions that Spark a Spirit of Inquiry.

Create a poster with these questions and post them where your colleagues will see them. Think about these clinical questions when caring for your patients.

Case Scenario for EBP: Rapid Response Teams

You're a staff nurse on a busy medical–surgical unit. Over the past three months, you've noticed that the patients on your unit seem to have a higher acuity level than usual, with at least three cardiac arrests per month, and of those patients who arrested, four died. Today you saw a report about a recently published study in Critical Care Medicine on the use of rapid response teams to decrease rates of in-hospital cardiac arrests and unplanned ICU admissions. The study found a significant decrease in both outcomes after implementation of a rapid response team led by physician assistants with specialized skills.14 You're so impressed with these findings that you bring the report to your nurse manager, believing that a rapid response team would be a great idea for your hospital. The nurse manager is excited that you've come to her with these findings and encourages you to search for more evidence to support this practice and for research on whether rapid response teams are valid and reliable.

Bernadette Mazurek Melnyk is dean and distinguished foundation professor of nursing at Arizona State University in Phoenix, where Ellen Fineout-Overholt is clinical professor and director of the Center for the Advancement of Evidence-Based Practice, Susan B. Stillwell is clinical associate professor and program coordinator of the Nurse Educator Evidence-Based Practice Mentorship Program, and Kathleen M. Williamson is associate director of the Center for the Advancement of Evidence-Based Practice. Contact author: Bernadette Mazurek Melnyk, [email protected].

REFERENCES

  1. Gantz NM, et al. Effects of dressing type and change interval on intravenous therapy complication rates. Diagn Microbiol Infect Dis 1984;2(4):325-32.

  2. Cates CJ, et al. Holding chambers (spacers) versus nebulisers for beta-agonist treatment of acute asthma. Cochrane Database Syst Rev 2006(2):CD000052.

  3. Olsen L, et al. The learning healthcare system: workshop summary. Washington, DC: National Academies Press; 2007. http://www.nap.edu/catalog.php?record_id=11903.

  4. Pravikoff DS, et al. Evidence-based practice readiness study supported by academy nursing informatics expert panel. Nurs Outlook 2005;53(1):49-50.

  5. Piper K. Results-driven health care: the five steps to higher quality, lower costs. Washington, DC: Health Results Group LLC; 2008.

  6. Health Research Institute, PricewaterhouseCoopers. What works: healing the healthcare staffing shortage. Dallas: PricewaterhouseCoopers; 2007. http://www.pwc.com/us/en/healthcare/publications/what-works-healing-the-healthcare-staffing-shortage.jhtml.

  7. Maljanian R, et al. Evidence-based nursing practice, Part 2: building skills through research roundtables. J Nurs Adm 2002;32(2):85-90.

  8. Melnyk BM, et al. The evidence-based practice beliefs and implementation scales: psychometric properties of two new instruments. Worldviews Evid Based Nurs 2008;5(4):208-16.

  9. Sackett DL, et al. Evidence-based medicine: how to practice and teach EBM. 2nd ed. Edinburgh; New York: Churchill Livingstone; 2000.

10. Melnyk BM, Fineout-Overholt E. Evidence-based practice in nursing and healthcare: a guide to best practice. Philadelphia: Lippincott Williams and Wilkins; 2005.

11. Melnyk BM. Strategies for overcoming barriers in implementing evidence-based practice. Pediatr Nurs 2002;28(2):159-61.

12. French B. Contextual factors influencing research use in nursing. Worldviews Evid Based Nurs 2005;2(4):172-83.

13. Melnyk BM. The evidence-based practice mentor: a promising strategy for implementing and sustaining EBP in healthcare systems. Worldviews Evid Based Nurs 2007;4(3):123-5.

14. Dacey MJ, et al. The effect of a rapid response team on major clinical outcome measures in a community hospital. Crit Care Med 2007;35(9):2076-82.

By Bernadette Mazurek Melnyk, PhD, RN, CPNP/PMHNP, FNAP, FAAN, Ellen Fineout-Overholt, PhD, RN, FNAP, FAAN, Susan B. Stillwell, DNP, RN, CNE, and Kathleen M. Williamson, PhD, RN

The Seven Steps of Evidence-Based Practice

Following this progressive, sequential approach will lead to improved health care and patient outcomes.

This is the second article in a new series from the Arizona State University College of Nursing and Health Innovation's Center for the Advancement of Evidence-Based Practice. Evidence-based practice (EBP) is a problem-solving approach to the delivery of health care that integrates the best evidence from studies and patient care data with clinician expertise and patient preferences and values. When delivered in a context of caring and in a supportive organizational culture, the highest quality of care and best patient outcomes can be achieved. The purpose of this series is to give nurses the knowledge and skills they need to implement EBP consistently, one step at a time.

Research studies show that evidence-based practice (EBP) leads to higher quality care, improved patient outcomes, reduced costs, and greater nurse satisfaction than traditional approaches to care.1-5 Despite these favorable findings, many nurses remain inconsistent in their implementation of evidence-based care. Moreover, some nurses, whose education predates the inclusion of EBP in the nursing curriculum, still lack the computer and Internet search skills necessary to implement these practices. As a result, misconceptions about EBP—that it's too difficult or too time-consuming—continue to flourish.

In the first article in this series (“Igniting a Spirit of Inquiry: An Essential Foundation for Evidence-Based Practice,” November 2009), we described EBP as a problem-solving approach to the delivery of health care that integrates the best evidence from well-designed studies and patient care data, and combines it with patient preferences and values and nurse expertise. We also addressed the contribution of EBP to improved care and patient outcomes, described barriers to EBP as well as factors facilitating its implementation, and discussed strategies for igniting a spirit of inquiry in clinical practice, which is the foundation of EBP, referred to as Step Zero. (Editor's note: although EBP has seven steps, they are numbered zero to six.) In this article, we offer a brief overview of the multistep EBP process. Future articles will elaborate on each of the EBP steps, using the context provided by the Case Scenario for EBP: Rapid Response Teams.

Step Zero: Cultivate a spirit of inquiry. If you've been following this series, you may have already started asking the kinds of questions that lay the groundwork for EBP, for example: in patients with head injuries, how does supine positioning compared with elevating the head of the bed 30 degrees affect intracranial pressure? Or, in patients with supraventricular tachycardia, how does administering the β-blocker metoprolol (Lopressor, Toprol-XL) compared with administering no medicine affect the frequency of tachycardic episodes? Without this spirit of inquiry, the next steps in the EBP process are not likely to happen.

Case Scenario for EBP: Rapid Response Teams

You're a staff nurse on a busy medical–surgical unit. Over the past three months, you've noticed that the patients on your unit seem to have a higher acuity level than usual, with at least three cardiac arrests per month, and of those patients who arrested, four died. Today, you saw a report about a recently published study in Critical Care Medicine on the use of rapid response teams to decrease rates of in-hospital cardiac arrests and unplanned ICU admissions. The study found a significant decrease in both outcomes after implementation of a rapid response team led by physician assistants with specialized skills.9 You're so impressed with these findings that you bring the report to your nurse manager, believing that a rapid response team would be a great idea for your hospital. The nurse manager is excited that you have come to her with these findings and encourages you to search for more evidence to support this practice and for research on whether rapid response teams are valid and reliable.

Step 1: Ask clinical questions in PICOT format. Inquiries in this format take into account patient population of interest (P), intervention or area of interest (I), comparison intervention or group (C), outcome (O), and time (T). The PICOT format provides an efficient framework for searching electronic databases, one designed to retrieve only those articles relevant to the clinical question. Using the case scenario on rapid response teams as an example, the way to frame a question about whether use of such teams would result in positive outcomes would be: “In acute care hospitals (patient population), how does having a rapid response team (intervention) compared with not having a response team (comparison) affect the number of cardiac arrests (outcome) during a three-month period (time)?”

Step 2: Search for the best evidence. The search for evidence to inform clinical practice is tremendously streamlined when questions are asked in PICOT format. If the nurse in the rapid response scenario had simply typed “What is the impact of having a rapid response team?” into the search field of the database, the result would have been hundreds of abstracts, most of them irrelevant. Using the PICOT format helps to identify key words or phrases that, when entered successively and then combined, expedite the location of relevant articles in massive research databases such as MEDLINE or CINAHL. For the PICOT question on rapid response teams, the first key phrase to be entered into the database would be acute care hospitals, a common subject that will most likely result in thousands of citations and abstracts. The second term to be searched would be rapid response team, followed by cardiac arrests and the remaining terms in the PICOT question. The last step of the search is to combine the results of the searches for each of the terms. This method narrows the results to articles pertinent to the clinical question, often resulting in fewer than 20. It also helps to set limits on the final search, such as “human subjects” or “English,” to eliminate animal studies or articles in foreign languages.

Step 3: Critically appraise the evidence. Once articles are selected for review, they must be rapidly appraised to determine which are most relevant, valid, reliable, and applicable to the clinical question. These studies are the “keeper studies.” One reason clinicians worry that they don't have time to implement EBP is that many have been taught a laborious critiquing process, including the use of numerous questions designed to reveal every element of a study. Rapid critical appraisal uses three important questions to evaluate a study's worth.6-8

Are the results of the study valid? This question of study validity centers on whether the research methods are rigorous enough to render findings as close to the truth as possible. For example, did the researchers randomly assign subjects to treatment or control groups and ensure that they shared key characteristics prior to treatment? Were valid and reliable instruments used to measure key outcomes?

What are the results and are they important? For intervention studies, this question of study reliability addresses whether the intervention worked, its impact on outcomes, and the likelihood of obtaining similar results in the clinicians' own practice settings. For qualitative studies, this includes assessing whether the research approach fits the purpose of the study, along with evaluating other aspects of the research such as whether the results can be confirmed.

Will the results help me care for my patients? This question of study applicability covers clinical considerations such as whether subjects in the study are similar to one's own patients, whether benefits outweigh risks, feasibility and cost-effectiveness, and patient values and preferences.

After appraising each study, the next step is to synthesize the studies to determine if they come to similar conclusions, thus supporting an EBP decision or change.

Step 4: Integrate the evidence with clinical expertise and patient preferences and values. Research evidence alone is not sufficient to justify a change in practice. Clinical expertise, based on patient assessments, laboratory data, and data from outcomes management programs, as well as patients' preferences and values are important components of EBP. There is no magic formula for how to weigh each of these elements; implementation of EBP is highly influenced by institutional and clinical variables. For example, say there's a strong body of evidence showing reduced incidence of depression in burn patients if they receive eight sessions of cognitive-behavioral therapy prior to hospital discharge. You want your patients to have this therapy and so do they. But budget constraints at your hospital prevent hiring a therapist to offer the treatment. This resource deficit hinders implementation of EBP.

Step 5: Evaluate the outcomes of the practice decisions or changes based on evidence. After implementing EBP, it's important to monitor and evaluate any changes in outcomes so that positive effects can be supported and negative ones remedied. Just because an intervention was effective in a rigorously controlled trial doesn't mean it will work exactly the same way in the clinical setting. Monitoring the effect of an EBP change on health care quality and outcomes can help clinicians spot flaws in implementation and identify more precisely which patients are most likely to benefit. When results differ from those reported in the research literature, monitoring can help determine why.

Step 6: Disseminate EBP results. Clinicians can achieve wonderful outcomes for their patients through EBP, but they often fail to share their experiences with colleagues and their own or other health care organizations. This leads to needless duplication of effort, and perpetuates clinical approaches that are not evidence based. Among ways to disseminate successful initiatives are EBP rounds in your institution, presentations at local, regional, and national conferences, and reports in peer-reviewed journals, professional newsletters, and publications for general audiences.

When health care organizations adopt EBP as the standard for clinical decision making, the steps outlined in this article naturally fall into place. The next article in our series will feature a staff nurse on a medical–surgical unit who approached her hospital's EBP mentor to learn how to formulate a clinical question about rapid response teams in PICOT format.

Bernadette Mazurek Melnyk is dean and distinguished foundation professor of nursing at Arizona State University in Phoenix, where Ellen Fineout-Overholt is clinical professor and director of the Center for the Advancement of Evidence-Based Practice, Susan B. Stillwell is clinical associate professor and program coordinator of the Nurse Educator Evidence-Based Practice Mentorship Program, and Kathleen M. Williamson is associate director of the Center for the Advancement of Evidence-Based Practice. Contact author: Bernadette Mazurek Melnyk, [email protected].

REFERENCES

  1. Grimshaw J, et al. Toward evidence-based quality improvement. Evidence (and its limitations) of the effectiveness of guideline dissemination and implementation strategies 1966-1998. J Gen Intern Med 2006;21 Suppl 2:S14-S20.

  2. McGinty J, Anderson G. Predictors of physician compliance with American Heart Association guidelines for acute myocardial infarction. Crit Care Nurs Q 2008;31(2):161-72.

  3. Shortell SM, et al. Improving patient care by linking evidence-based medicine and evidence-based management. JAMA 2007;298(6):673-6.

  4. Strout TD. Curiosity and reflective thinking: renewal of the spirit. Indianapolis, IN: Sigma Theta Tau International; 2005.

  5. Williams DO. Treatment delayed is treatment denied. Circulation 2004;109(15):1806-8.

  6. Giacomini MK, Cook DJ. Users' guides to the medical literature: XXIII. Qualitative research in health care A. Are the results of the study valid? Evidence-Based Medicine Working Group. JAMA 2000;284(3):357-62.

  7. Giacomini MK, Cook DJ. Users' guides to the medical literature: XXIII. Qualitative research in health care B. What are the results and how do they help me care for my patients? Evidence-Based Medicine Working Group. JAMA 2000;284(4):478-82.

  8. Stevens KR. Critically appraising quantitative evidence. In: Melnyk BM, Fineout-Overholt E, editors. Evidence-based practice in nursing and healthcare: a guide to best practice. Philadelphia: Lippincott Williams and Wilkins; 2005.

  9. Dacey MJ, et al. The effect of a rapid response team on major clinical outcome measures in a community hospital. Crit Care Med 2007;35(9):2076-82.

By Susan B. Stillwell, DNP, RN, CNE, Ellen Fineout-Overholt, PhD, RN, FNAP, FAAN, Bernadette Mazurek Melnyk, PhD, RN, CPNP/PMHNP, FNAP, FAAN, and Kathleen M. Williamson, PhD, RN

Asking the Clinical Question: A Key Step in Evidence-Based Practice

A successful search strategy starts with a well-formulated question.

This is the third article in a series from the Arizona State University College of Nursing and Health Innovation's Center for the Advancement of Evidence-Based Practice. Evidence-based practice (EBP) is a problem-solving approach to the delivery of health care that integrates the best evidence from studies and patient care data with clinician expertise and patient preferences and values. When EBP is delivered in a context of caring and in a supportive organizational culture, it can achieve the highest quality of care and the best patient outcomes. The purpose of this series is to give nurses the knowledge and skills they need to implement EBP consistently, one step at a time.

To fully implement evidence-based practice (EBP), nurses need to have both a spirit of inquiry and a culture that supports it. In our first article in this series (“Igniting a Spirit of Inquiry: An Essential Foundation for Evidence-Based Practice,” November 2009), we defined a spirit of inquiry as “an ongoing curiosity about the best evidence to guide clinical decision making.” A spirit of inquiry is the foundation of EBP, and once nurses possess it, it's easier to take the next step—to ask the clinical question.1 Formulating a clinical question in a systematic way makes it possible to find an answer more quickly and efficiently, leading to improved processes and patient outcomes.

In the last installment, we gave an overview of the multistep EBP process (“The Seven Steps of Evidence-Based Practice,” January). This month we'll discuss step one, asking the clinical question. As a context for this discussion we'll use the same scenario we used in the previous articles (see Case Scenario for EBP: Rapid Response Teams).

In this scenario, a staff nurse, let's call her Rebecca R., noted that patients on her medical–surgical unit had a high acuity level that may have led to an increase in cardiac arrests and in the number of patients transferred to the ICU. Of the patients who had a cardiac arrest, four died. Rebecca shared with her nurse manager a recently published study on how the use of a rapid response team resulted in reduced in-hospital cardiac arrests and unplanned admissions to the critical care unit.2 She believed this could be a great idea for her hospital. Based on her nurse manager's suggestion to search for more evidence to support the use of a rapid response team, Rebecca's spirit of inquiry led her to take the next step in the EBP process: asking the clinical question. Let's follow Rebecca as she meets with Carlos A., one of the expert EBP mentors from the hospital's EBP and research council, whose role is to assist point-of-care providers in enhancing their EBP knowledge and skills.

Case Scenario for EBP: Rapid Response Teams

You're a staff nurse on a busy medical–surgical unit. Over the past three months, you've noticed that the patients on your unit seem to have a higher acuity level than usual, with at least three cardiac arrests per month, and of those patients who arrested, four died. Today, you saw a report about a recently published study in Critical Care Medicine on the use of rapid response teams to decrease rates of in-hospital cardiac arrests and unplanned ICU admissions. The study found a significant decrease in both outcomes after implementation of a rapid response team led by physician assistants with specialized skills.2 You're so impressed with these findings that you bring the report to your nurse manager, believing that a rapid response team would be a great idea for your hospital. The nurse manager is excited that you have come to her with these findings and encourages you to search for more evidence to support this practice and for research establishing whether the evidence on rapid response teams is valid and reliable.

Types of clinical questions. Carlos explains to Rebecca that finding evidence to improve patient outcomes and support a practice change depends upon how the question is formulated. Clinical practice that's informed by evidence is based on well-formulated clinical questions that guide us to search for the most current literature.

There are two types of clinical questions: background questions and foreground questions.3-5 Foreground questions are specific and relevant to the clinical issue; they must be asked in order to determine which of two interventions is more effective in improving patient outcomes. For example, “In adult patients undergoing surgery, how does guided imagery compared with music therapy affect analgesia use within the first 24 hours post-op?” is a specific, well-defined question that can only be answered by searching the current literature for studies comparing these two interventions.

Background questions are considerably broader and, when answered, provide general knowledge. For example, a background question such as, “What therapies reduce postoperative pain?” can generally be answered by looking in a textbook. For more information on the two types of clinical questions, see Comparison of Background and Foreground Questions.4-6

Ask the question in PICOT format. Now that Rebecca has an understanding of foreground and background questions, Carlos guides her in formulating a foreground question using PICOT format.

PICOT is an acronym for the elements of the clinical question: patient population (P), intervention or issue of interest (I), comparison intervention or issue of interest (C), outcome(s) of interest (O), and time it takes for the intervention to achieve the outcome(s) (T). When Rebecca asks why the PICOT question is so important, Carlos explains that it's a consistent, systematic way to identify the components of a clinical issue. Using the PICOT format to structure the clinical question helps to clarify these components, which will guide the search for the evidence.6, 7 A well-built PICOT question increases the likelihood that the best evidence to inform practice will be found quickly and efficiently.5-8

Comparison of Background and Foreground Questions4-6


To help Rebecca learn to formulate a PICOT question, Carlos uses the earlier example of a foreground question: “In adult patients undergoing surgery, how does guided imagery compared with music therapy affect analgesia use within the first 24 hours post-op?” In this example, “adult patients undergoing surgery” is the population (P), “guided imagery” is the intervention of interest (I), “music therapy” is the comparison intervention of interest (C), “analgesia use” is the outcome of interest (O), and “the first 24 hours post-op” is the time it takes for the intervention to achieve the outcome (T). Here, music therapy or guided imagery is expected to affect the amount of analgesia used by the patient within the first 24 hours after surgery. Note that a comparison may not be pertinent in some PICOT questions, such as in “meaning questions,” which are designed to uncover the meaning of a particular experience.3, 6 Time is also not always required. But population, intervention or issue of interest, and outcome are essential to developing any PICOT question.
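For readers who think in code, the anatomy of a PICOT question can be sketched as a small template. This is purely an illustration: the class, field names, and sentence template below are our own invention, not part of the PICOT literature. As described above, the comparison and time elements are optional, while population, intervention, and outcome are required.

```python
# Illustrative sketch of a PICOT question as a data structure.
# The class and template are hypothetical, invented for this example.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PicotQuestion:
    population: str            # P: patient population
    intervention: str          # I: intervention or issue of interest
    comparison: Optional[str]  # C: may be omitted (e.g., meaning questions)
    outcome: str               # O: outcome(s) of interest
    time: Optional[str] = None # T: not always required

    def as_text(self) -> str:
        parts = [f"In {self.population} (P), how does {self.intervention} (I)"]
        if self.comparison:
            parts.append(f"compared with {self.comparison} (C)")
        parts.append(f"affect {self.outcome} (O)")
        if self.time:
            parts.append(f"within {self.time} (T)")
        return " ".join(parts) + "?"

q = PicotQuestion(
    population="adult patients undergoing surgery",
    intervention="guided imagery",
    comparison="music therapy",
    outcome="analgesia use",
    time="the first 24 hours post-op",
)
print(q.as_text())
```

Leaving comparison as None yields a question without a (C) element, matching the meaning-question case noted above.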

Carlos asks Rebecca to reflect on the clinical situation on her unit in order to determine the unit's current intervention for addressing acuity. Reflection is a strategy to help clinicians extract critical components from the clinical issue to use in formulating the clinical question.3 Rebecca and Carlos revisit aspects of the clinical issue to see which may become components of the PICOT question: the high acuity of patients on the unit, the number of cardiac arrests, the unplanned ICU admissions, and the research article on rapid response teams. Once the issue is clarified, the PICOT question can be written.

Templates and Definitions for PICOT Questions5, 6


Because Rebecca's issue of interest is the rapid response team—an intervention—Carlos provides her with an “intervention or therapy” template to use in formulating the PICOT question. (For other types of templates, see Templates and Definitions for PICOT Questions.5, 6) Since the hospital doesn't have a rapid response team and doesn't have a plan for addressing acuity issues before a crisis occurs, the comparison, or (C) element, in the PICOT question is “no rapid response team.” “Cardiac arrests” and “unplanned admissions to the ICU” are the outcomes in the question. Other potential outcomes of interest to the hospital could be “lengths of stay” or “deaths.”

Practice Creating a PICOT Question

Scenario 1: You're a recent graduate with two years' experience in an acute care setting. You've taken a position as a home health care nurse and you have several adult patients with various medical conditions. However, you've recently been assigned to care for hospice patients. You don't have experience in this area, nor have you had a loved one receive hospice care at the end of life. You notice that some of the family members or caregivers of patients in hospice care are withdrawn. You're wondering what the family caregivers are going through, so that you might better understand the situation and provide quality care.

Scenario 2: You're a new graduate who's accepted a position on a gerontology unit. A number of the patients have dementia and are showing aggressive behavior. You recall a clinical experience you had as a first-year nursing student in a long-term care unit and remember seeing many of the patients in a specialty unit for dementia walking around holding baby dolls. You're wondering if giving baby dolls to your patients with dementia would be helpful.

What type of PICOT question would you create for each of these scenarios? Select the appropriate templates and formulate your questions.

Rebecca proposes the following PICOT question: “In hospitalized adults (P), how does a rapid response team (I) compared with no rapid response team (C) affect the number of cardiac arrests (O) and unplanned admissions to the ICU (O) during a three-month period (T)?”

Now that Rebecca has formulated the clinical question, she's ready for the next step in the EBP process, searching for the evidence. Carlos congratulates Rebecca on developing a searchable, answerable question and arranges to meet with her again to mentor her in finding the answer to her clinical question. The fourth article in this series, to be published in the May issue of AJN, will focus on strategies for searching the literature to find the evidence to answer the clinical question.

Now that you've learned to formulate a successful clinical question, try this exercise: after reading the two clinical scenarios in Practice Creating a PICOT Question, select the type of clinical question that's most appropriate for each scenario, and choose a template to guide you. Then formulate one PICOT question for each scenario. Suggested PICOT questions will be provided in the next installment.

Susan B. Stillwell is clinical associate professor and program coordinator of the Nurse Educator Evidence-Based Practice Mentorship Program at Arizona State University in Phoenix, where Ellen Fineout-Overholt is clinical professor and director of the Center for the Advancement of Evidence-Based Practice, Bernadette Mazurek Melnyk is dean and distinguished foundation professor of nursing, and Kathleen M. Williamson is associate director of the Center for the Advancement of Evidence-Based Practice. Contact author: Susan B. Stillwell, [email protected].

REFERENCES

  1. Melnyk BM, et al. Igniting a spirit of inquiry: an essential foundation for evidence-based practice. Am J Nurs 2009;109(11):49-52.

  2. Dacey MJ, et al. The effect of a rapid response team on major clinical outcome measures in a community hospital. Crit Care Med 2007;35(9):2076-82.

  3. Fineout-Overholt E, Johnston L. Teaching EBP: asking searchable, answerable clinical questions. Worldviews Evid Based Nurs 2005;2(3):157-60.

  4. Nollan R, et al. Asking compelling clinical questions. In: Melnyk BM, Fineout-Overholt E, editors. Evidence-based practice in nursing and healthcare: a guide to best practice. Philadelphia: Lippincott Williams and Wilkins; 2005. p. 25-38.

  5. Straus SE. Evidence-based medicine: how to practice and teach EBM. 3rd ed. Edinburgh; New York: Elsevier/Churchill Livingstone; 2005.

  6. Fineout-Overholt E, Stillwell SB. Asking compelling questions. In: Melnyk BM, Fineout-Overholt E, editors. Evidence-based practice in nursing and healthcare: a guide to best practice [forthcoming]. 2nd ed. Philadelphia: Wolters Kluwer Health/Lippincott Williams and Wilkins.

  7. McKibbon KA, Marks S. Posing clinical questions: framing the question for scientific inquiry. AACN Clin Issues 2001;12(4):477-81.

  8. Fineout-Overholt E, et al. Teaching EBP: getting to the gold: how to search for the best evidence. Worldviews Evid Based Nurs 2005;2(4):207-11.

By Susan B. Stillwell, DNP, RN, CNE, Ellen Fineout-Overholt, PhD, RN, FNAP, FAAN, Bernadette Mazurek Melnyk, PhD, RN, CPNP/PMHNP, FNAP, FAAN, and Kathleen M. Williamson, PhD, RN

Searching for the Evidence

Strategies to help you conduct a successful search.

This is the fourth article in a series from the Arizona State University College of Nursing and Health Innovation's Center for the Advancement of Evidence-Based Practice. Evidence-based practice (EBP) is a problem-solving approach to the delivery of health care that integrates the best evidence from studies and patient care data with clinician expertise and patient preferences and values. When EBP is delivered in a context of caring and in a supportive organizational culture, it can achieve the highest quality of care and the best patient outcomes. The purpose of this series is to give nurses the knowledge and skills they need to implement EBP consistently, one step at a time.

In the previous article in this series, our hypothetical nurse, Rebecca R., with the help of one of her hospital's expert evidence-based practice (EBP) mentors, Carlos A., learned Step 1 of the EBP process—how to formulate a clinical question. The impetus behind her desire to develop her question, as you may recall from our case scenario, was that Rebecca's nurse manager asked her to search for more evidence to support her idea of using a rapid response team to decrease rates of in-hospital cardiac arrests and unplanned ICU admissions—both of which were on the rise on Rebecca's medical–surgical unit. She learned of the idea of a rapid response team from a study she read on the subject in Critical Care Medicine.1

Here is the clinical question Rebecca formulated: “In hospitalized adults (P), how does a rapid response team (I) compared with no rapid response team (C) affect the number of cardiac arrests (O) and unplanned admissions to the ICU (O) during a three-month period (T)?” Her question, called a PICOT question, contains the following elements: patient population (P), intervention of interest (I), comparison intervention of interest (C), outcome(s) of interest (O), and time it takes for the intervention to achieve the outcome(s) (T). (To review PICOT questions and how to formulate them, see “Asking the Clinical Question: A Key Step in Evidence-Based Practice,” March.)

This month Rebecca begins Step 2 of the EBP process, searching for the evidence. For an overview of this step, see How to Search for Evidence to Answer the Clinical Question.

THE BEST EVIDENCE TO ANSWER THE CLINICAL QUESTION

In their next meeting, Carlos and Rebecca discuss what type of evidence will best answer her clinical question. Carlos explains that knowing the type of PICOT question you're asking (for example, is it an intervention, etiology, diagnosis, prognosis, or meaning question?) will help you determine the best type of study design to search for. Rebecca's PICOT question is an intervention question because it compares two possible interventions—a rapid response team versus no rapid response team.

How to Search for Evidence to Answer the Clinical Question

1. Identify the type of PICOT question.

2. Determine the level of evidence that best answers the question.

3. Select relevant databases to search (such as the CDSR, DARE, PubMed, CINAHL).

4. Use keywords from your PICOT question to search the databases.

5. Streamline your search with the following strategies:

    • Use database controlled vocabulary (such as “MeSH terms”).

    • Combine searches by using the Boolean connector “AND.”

    • Limit the final search by selecting defining parameters (such as “humans” or “English”).

Determine the level of evidence. Research evidence, also called external evidence, can be viewed from a hierarchical perspective. The best external evidence (that which provides the most reliable information) is at the top of the list and the least reliable is at the bottom (see Hierarchy of Evidence for Intervention Studies2). The level and quality of the evidence are important to clinicians because they give them the confidence they need to make clinical decisions. The research methodology that provides the best evidence will differ depending on the type of clinical question asked. To answer a question that includes an intervention, such as Rebecca's question, the best study design is a systematic review of randomized, controlled trials or a metaanalysis, in which the results of multiple studies are combined and analyzed statistically.2-5 When well designed and executed, these studies provide the strongest evidence, and therefore the most confidence for clinical decision making.

“What happens when there isn't a metaanalysis or systematic review available?” Rebecca asks. Carlos replies that the next-best evidence would be Level II evidence, the findings of a randomized, controlled trial. Carlos reminds Rebecca that when deciding whether to use evidence to support a practice change, it's important to consider both the level and quality of the evidence as well as the feasibility of implementing the intervention.

WHERE TO FIND THE EVIDENCE

Rebecca and Carlos set up an appointment with Lynne Z., the hospital librarian, to learn how to begin searching for the evidence. Lynne tells Rebecca and Carlos that no matter what type of question is being asked, it's wise to search more than one database. Because databases index different journals, searching several databases will reduce the possibility of missing relevant literature.

Select relevant databases to search. To find evidence to answer Rebecca's PICOT question, Lynne recommends searching the following databases:

• the Cochrane Database of Systematic Reviews (CDSR) and the Database of Abstracts of Reviews of Effects (DARE), which are found in the Cochrane Library and can be accessed through the Cochrane Collaboration Web site (www.cochrane.org)

• PubMed, which includes MEDLINE (www.ncbi.nlm.nih.gov/pubmed)

• CINAHL (www.ebscohost.com/cinahl), an acronym for Cumulative Index to Nursing and Allied Health Literature

Hierarchy of Evidence for Intervention Studies2


The CDSR and DARE databases contain systematic reviews and metaanalyses of randomized, controlled trials. The reviews conducted by the Cochrane Collaboration are contained in the CDSR, and abstracts of systematic reviews not conducted by Cochrane are indexed in the DARE. Cochrane reviews are considered to have the strongest level of evidence for intervention questions because they have the best study designs and are generally the most rigorous.

To find other types of evidence, databases other than CDSR and DARE must be searched. Because the intervention—rapid response team—is a multidisciplinary, interprofessional initiative, evidence to answer Rebecca's question may be found in medical as well as in nursing and allied health journals. Therefore, the PubMed database, which contains medical and life sciences literature, and the CINAHL database, which contains nursing and allied health literature, should be searched. Abstracts can be reviewed and accessed free of charge in the Cochrane Library and PubMed databases (although a fee may be required to obtain electronic copies of reviews or articles), but a subscription is required to access CINAHL.

SEARCHING STRATEGIES

Now that Rebecca and Carlos have decided what databases to search, they need to select the keywords they'll use to begin their search.

Choose keywords from the PICOT question. Rebecca and Carlos identify the following keywords from her PICOT question: hospitalized adults, rapid response team, cardiac arrests, and ICU admissions. Lynne recommends that in cases when a database has its own indexing language, or controlled vocabulary, the search be conducted with these index terms. In this way, the search will be the most inclusive.

Use database controlled vocabulary. For example, when the keyword rapid response team is entered into PubMed, the PubMed database matches it to the controlled vocabulary term “Hospital Rapid Response Team.” All articles that contain the topic of hospital rapid response teams can be found by searching with this one index term. Using controlled vocabulary in a search saves time and reduces the chance of missing evidence that could answer the clinical question.

If the index terms matched by the database aren't relevant to the searcher's keyword, then the keyword and its synonyms should be used to search the database. It's helpful, though rare, when a keyword and an index term match perfectly. More often, the searcher will need to determine which of several database index terms is closest in meaning to the keyword.

Combine searches. Each keyword in the PICOT question is searched individually. However, keyword searches can result in a large number of articles. For example, a CINAHL search of cardiac arrest resulted in more than 2,700 articles and a search of rapid response team resulted in 100 articles. But combining the searches using the Boolean connector “AND” (for example, cardiac arrest AND rapid response team) yielded a more manageable 12 articles that contained both concepts and were more likely to answer the clinical question. (Note that databases index articles on a regular basis; therefore, the same search conducted at different times will likely produce different numbers of articles.)

Rebecca and Carlos want to combine their searches because they're interested in finding articles that contain all of the keywords (hospitalized adults AND rapid response team AND cardiac arrests AND ICU admissions). After they enter each keyword into the selected database and search it individually, they'll combine all the searches using the Boolean connector “AND.” There's a chance, however, that combining the searches may result in few or even no articles. For example, the first time Rebecca searched PubMed using its controlled vocabulary for her PICOT keywords, and then combined the searches, the database came up with only one article. She decided to refocus her search, hoping that including only the intervention and outcomes keywords, and not the patient population, would produce articles relevant to her clinical issue.

Place limits on the final combined search to further narrow the results. This strategy can eliminate articles written in languages other than English or those in which animals, and not humans, are the subjects. Other limits—such as age or sex of subjects or type of article (such as clinical trial, editorial, or review)—are available; however, placing too many limits on a search may produce too few or even no articles.
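The strategy described above (search each concept, combine with the Boolean “AND,” then apply limits) can be expressed as a single query string of the kind PubMed accepts in its search field. The field tags [Mesh], [Filter], and [Language] follow PubMed's published query syntax; the helper function itself is our own sketch, not an official tool:

```python
# Sketch: combine concept searches with the Boolean connector "AND"
# and append limits, producing one PubMed-style query string.
def build_query(concepts, limits=()):
    """Join each parenthesized concept with AND, then add limits."""
    combined = " AND ".join(f"({c})" for c in concepts)
    for limit in limits:
        combined += f" AND {limit}"
    return combined

query = build_query(
    concepts=[
        '"Hospital Rapid Response Team"[Mesh]',
        '"Heart Arrest"[Mesh]',
        '"Intensive Care Units"[Mesh]',
    ],
    limits=["humans[Filter]", "english[Language]"],
)
print(query)
```

Pasting the printed string into PubMed's search box should reproduce the combined, limited search in one step, though article counts will differ over time, as noted above.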

CONDUCTING THE SEARCH

Rebecca begins to search the PubMed database for the evidence to answer her PICOT question. She and Carlos will be searching the keywords rapid response team, the intervention of interest, and cardiac arrests and ICU admissions, the outcomes of interest. To follow along, access the PubMed home page at www.ncbi.nlm.nih.gov/pubmed. (Note that because new articles are added to the database regularly, your search results may not match those described here.)

Rebecca starts by using PubMed's Medical Subject Heading (MeSH) database to search for the intervention keyword, rapid response team. From the PubMed home page, she clicks on “MeSH Database” (see Figure 1). On the MeSH database screen, she types rapid response team in the search field and clicks “Go” (see Figure 2). Rapid response team is a direct match to the one MeSH term provided—“Hospital Rapid Response Team” (see Figure 3). Rebecca selects this term by clicking the box next to it and then selects “Search Box with AND” from the pull-down menu. “‘Hospital Rapid Response Team’ [Mesh]” appears in the search box on the next screen (see Figure 4); Rebecca clicks on “Search PubMed.” Her search is performed and results in 19 articles (see Figure 5). She notes that most but not all articles appear to be relevant to the clinical question, and that they date back only to 2009 because the MeSH term “Hospital Rapid Response Team” was recently introduced.

Before Rebecca continues with her MeSH database searches, Lynne suggests that she use rapid response team in a separate search because the search will be broader than a MeSH term search and may yield additional useful articles.

From the results page, Rebecca enters rapid response team in the search field and clicks “Search.” This search produces over 300 articles (see Figure 6); however, many of them still don't appear to be relevant to the clinical question. Lynne reassures Rebecca that eventually combining her searches will help weed out the irrelevant articles. (Because this search produced so many more articles than her MeSH term search, which captured only the most recent articles, Lynne suggests that when Rebecca combines her searches, she use the results of her keyword rapid response team search, not her “Hospital Rapid Response Team” search.)

Rebecca continues to use the MeSH database to search her two remaining keywords. For each one, she starts back on the PubMed home page (click on the PubMed.gov logo on any results page to get to the home page).

She enters cardiac arrest on the MeSH database screen. Of the three MeSH terms provided, she selects “heart arrest,” which yields over 25,000 articles. Since the keyword ICU admissions produces no MeSH terms, Lynne advises Rebecca to search with the keyword intensive care units, which matches perfectly with the MeSH term “Intensive Care Units” and yields more than 40,000 articles. After searching her keywords and the corresponding MeSH terms, Rebecca has a total of more than 60,000 articles.

Lynne reassures Rebecca that she won't need to read all 60,000 articles. She explains that the next step, combining the searches, will eliminate extraneous articles and focus on the search results specific to the clinical question. Combining the searches by using the Boolean connector “AND” will produce a list of articles that contain all three keywords Rebecca searched.
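Combining searches can also be done programmatically: NCBI exposes PubMed through its E-utilities web service, whose esearch endpoint takes the same Boolean query. The sketch below only constructs the request URL (no network call is made); the endpoint and the db, term, and retmax parameters come from NCBI's E-utilities documentation, while the query itself mirrors Rebecca's combined search:

```python
# Sketch: building an NCBI E-utilities "esearch" URL for Rebecca's
# combined PubMed search. This constructs the URL only; fetching it
# would return the IDs of matching PubMed records.
from urllib.parse import urlencode

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

# The three concepts, joined with the Boolean connector AND.
term = ('"Hospital Rapid Response Team"[Mesh] AND "Heart Arrest"[Mesh] '
        'AND "Intensive Care Units"[Mesh]')

url = ESEARCH + "?" + urlencode({"db": "pubmed", "term": term, "retmax": 20})
print(url)
```

As with the interactive search, the number of records returned will change as the database is updated.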


Figure 1. Select “MeSH Database” on the PubMed home page.


Figure 2. Type rapid response team in the search field and click “Go.”


Figure 3. Select the MeSH term “Hospital Rapid Response Team,” then select “Search Box with AND” from the pull-down menu.


Figure 4. Click on “Search PubMed.”


Figure 5. The “Hospital Rapid Response Team” search yields 19 articles.


Figure 6. Type rapid response team in the search field and click “Search”; this search results in more than 300 articles.

To combine her searches, Rebecca selects the “Advanced Search” tab at the top of any results page. Each of her searches now appears on the Advanced Search page in the “Search History” box. Lynne reminds Rebecca to clear the search field at the top of the page of any keywords from past searches before combining the final group of searches.

Rebecca clicks on the number assigned to her rapid response team keyword search and selects AND from the pull-down “Options” menu. Lynne shows her that the number assigned to her keyword search now appears in the search field at the top of the page. Rebecca continues to select her individual searches and, one by one, their corresponding numbers appear in the field above (see Figure 7). To run the combined searches and view the results, Rebecca selects the “Search” tab.

Her combined search produces 11 articles (see Figure 8), a much more manageable number to review for relevancy to the clinical question than the more than 60,000 articles produced by the individual keyword and controlled vocabulary searches.

Rebecca asks Lynne if she can request the three free full-text articles (see “Free Full Text (3)” under “Filter your results” on the upper right of the results page; Figure 8). Lynne informs her that she can apply any number of limits to her search, including “Links to free full text.” However, the more limits applied, the narrower the search, and evidence to answer the clinical question may be missed.

Lynne shows Rebecca where “Limits” can be found on the top of the Advanced Search page (Figure 7). She suggests that Rebecca consider limiting the ages of her population to further reduce her results. If she eliminates the pediatric population, for example, the number of articles produced by her search should decrease. But Rebecca thinks that any articles that include children may be of interest to the nurses on the pediatric unit, so she decides to limit her search to only “Humans” and “English” (Figure 9). Applying these limits to Rebecca's final combined search reduces the results from 11 articles to 10.


Figure 7. Combine the individual searches.


Figure 8. The final results.


Figure 9. Using limits to narrow the search.

Rebecca asks Lynne if any of the articles retrieved in the search are metaanalyses, which she remembers is the best study design to answer her clinical question. Lynne responds that a quick way to find out is by going back to the Limits page and selecting “Meta-Analysis” (see Figure 9). Although this limit didn't produce any results, limiting the search to “Randomized Controlled Trial” resulted in one article.

As Rebecca's session in searching PubMed concludes, Lynne explains to Carlos and Rebecca that searching is a skill that improves with practice. Moreover, each database may have its own controlled vocabulary and limits. In any search, Lynne emphasizes the importance of

• searching at least two databases

• searching one keyword at a time

• using the database's controlled vocabulary when available

• combining the searches to yield articles that are manageable in number and relate specifically to the PICOT question

• applying “Humans” and “English” limits to the final search

Solutions to Our “Practice Creating a PICOT Question” Exercise

Did your questions come close to these?

Scenario 1: A meaning question.

How do family caregivers (P) with relatives receiving hospice care (I) perceive the loss of their relative (O) during end of life (T)?

Scenario 2: An intervention or therapy question.

In patients with dementia who are agitated (P), how does baby doll therapy (I) compared with risperidone (or antipsychotic drug therapy) (C) affect behavior outbursts (O) within one month (T)?

Rebecca is excited to practice her searching skills to find the answer to her clinical question. She and Carlos set up a time to search the Cochrane and CINAHL databases. Carlos reminds Rebecca that although considering the level of evidence when making a clinical decision is important, it's not the only factor. The decision should also be based on the quality of the evidence, the feasibility of implementing a change in the hospital, and a consideration of the patients' values and preferences.

In the next article in this series, to be published in the July issue of AJN, Rebecca gathers all the articles relevant to her PICOT question and meets with Carlos to learn how to critically appraise the evidence. You're invited to this meeting to learn, along with Rebecca, how to select “keeper” studies that, when synthesized, will help determine if a practice change should be implemented at her hospital.

Susan B. Stillwell is clinical associate professor and program coordinator of the Nurse Educator Evidence-Based Practice Mentorship Program at Arizona State University in Phoenix, where Ellen Fineout-Overholt is clinical professor and director of the Center for the Advancement of Evidence-Based Practice, Bernadette Mazurek Melnyk is dean and distinguished foundation professor of nursing, and Kathleen M. Williamson is associate director of the Center for the Advancement of Evidence-Based Practice. Contact author: Susan B. Stillwell, [email protected].


Part II: Critically Appraising the Evidence

By Ellen Fineout-Overholt, PhD, RN, FNAP, FAAN, Bernadette Mazurek Melnyk, PhD, RN, CPNP/PMHNP, FNAP, FAAN, Susan B. Stillwell, DNP, RN, CNE, and Kathleen M. Williamson, PhD, RN

Critical Appraisal of the Evidence: part I

An introduction to gathering, evaluating, and recording the evidence.

This is the fifth article in a series from the Arizona State University College of Nursing and Health Innovation's Center for the Advancement of Evidence-Based Practice. Evidence-based practice (EBP) is a problem-solving approach to the delivery of health care that integrates the best evidence from studies and patient care data with clinician expertise and patient preferences and values. When delivered in a context of caring and in a supportive organizational culture, the highest quality of care and best patient outcomes can be achieved. The purpose of this series is to give nurses the knowledge and skills they need to implement EBP consistently, one step at a time.

In May's evidence-based practice (EBP) article, Rebecca R., our hypothetical staff nurse, and Carlos A., her hospital's expert EBP mentor, learned how to search for the evidence to answer their clinical question (shown here in PICOT format): “In hospitalized adults (P), how does a rapid response team (I) compared with no rapid response team (C) affect the number of cardiac arrests (O) and unplanned admissions to the ICU (O) during a three-month period (T)?” With the help of Lynne Z., the hospital librarian, Rebecca and Carlos searched three databases: PubMed, the Cumulative Index to Nursing and Allied Health Literature (CINAHL), and the Cochrane Database of Systematic Reviews. They used keywords from their clinical question, including ICU, rapid response team, cardiac arrest, and unplanned ICU admissions, as well as the following synonyms: failure to rescue, never events, medical emergency teams, rapid response systems, and code blue. Whenever terms from a database's own indexing language, or controlled vocabulary, matched the keywords or synonyms, those terms were also searched. At the end of the database searches, Rebecca and Carlos chose to retain 18 of the 18 studies found in PubMed; six of the 79 studies found in CINAHL; and the one study found in the Cochrane Database of Systematic Reviews, because they best answered the clinical question.

As a final step, at Lynne's recommendation, Rebecca and Carlos conducted a hand search of the reference lists of each study they retained, looking for any relevant studies they hadn't found in their original search; this process is also called the ancestry method. The hand search yielded one additional study, for a total of 26.

RAPID CRITICAL APPRAISAL

The next time Rebecca and Carlos meet, they discuss the next step in the EBP process—critically appraising the 26 studies. They obtain copies of the studies by printing those that are immediately available as full text through a library subscription or flagged as “free full text” by a database or journal's Web site. The rest are obtained through interlibrary loan, whereby another hospital library shares its articles with Rebecca and Carlos's hospital library.

Carlos explains to Rebecca that the purpose of critical appraisal isn't solely to find the flaws in a study, but to determine its worth to practice. In this rapid critical appraisal (RCA), they will review each study to determine

• its level of evidence.

• how well it was conducted.

• how useful it is to practice.

Once they determine which studies are “keepers,” Rebecca and Carlos will move on to the final steps of critical appraisal: evaluation and synthesis (to be discussed in the next two installments of the series). These final steps will determine whether overall findings from the evidence review can help clinicians improve patient outcomes.

Rebecca is a bit apprehensive because it's been a few years since she took a research class. She shares her anxiety with Chen M., a fellow staff nurse, who says she never studied research in school but would like to learn; she asks if she can join Carlos and Rebecca's EBP team. Chen's spirit of inquiry encourages Rebecca, and they talk about the opportunity to learn that this project affords them. Together they speak with the nurse manager on their medical–surgical unit, who agrees to let them use their allotted continuing education time to work on this project, after they discuss their expectations for the project and how its outcome may benefit the patients, the unit staff, and the hospital.

Learning research terminology. At the first meeting of the new EBP team, Carlos provides Rebecca and Chen with a glossary of terms so they can learn basic research terminology, such as sample, independent variable, and dependent variable. The glossary also defines some of the study designs the team is likely to come across in doing their RCA, such as systematic review, randomized controlled trial, and cohort, qualitative, and descriptive studies. (For the definitions of these terms and others, see the glossaries provided by the Center for the Advancement of Evidence-Based Practice at the Arizona State University College of Nursing and Health Innovation [http://nursingandhealth.asu.edu/evidence-based-practice/resources/glossary.htm] and the Boston University Medical Center Alumni Medical Library [http://medlib.bu.edu/bugms/content.cfm/content/ebmglossary.cfm#R].)

Determining the level of evidence. The team begins to divide the 26 studies into categories according to study design. To help in this, Carlos provides a list of several different study designs (see Hierarchy of Evidence for Intervention Studies). Rebecca, Carlos, and Chen work together to determine each study's design by reviewing its abstract. They also create an “I don't know” pile of studies that don't appear to fit a specific design. When they find studies that don't directly answer the clinical question but may inform thinking, such as descriptive research, expert opinions, or guidelines, they put them aside. Carlos explains that these will be used later to support Rebecca's case for having a rapid response team (RRT) in her hospital, should the evidence point in that direction.

After the studies—including those in the “I don't know” group—are categorized, 15 of the original 26 remain and will be included in the RCA: three systematic reviews that include one meta-analysis (Level I evidence), one randomized controlled trial (Level II evidence), two cohort studies (Level IV evidence), one retrospective pre-post study with historic controls (Level VI evidence), four preexperimental (pre-post) intervention studies with no control group (Level VI evidence), and four EBP implementation projects (Level VI evidence). Carlos reminds Rebecca and Chen that Level I evidence—a systematic review of randomized controlled trials or a meta-analysis—is the most reliable and the best evidence to answer their clinical question.

Hierarchy of Evidence for Intervention Studies


Using a critical appraisal guide. Carlos recommends that the team use a critical appraisal checklist (see Critical Appraisal Guide for Quantitative Studies) to help evaluate the 15 studies. This checklist is relevant to all studies and contains questions about the essential elements of research (such as the purpose of the study, sample size, and major variables).

Critical Appraisal Guide for Quantitative Studies

1. Why was the study done?

    • Was there a clear explanation of the purpose of the study and, if so, what was it?

2. What is the sample size?

    • Were there enough people in the study to establish that the findings did not occur by chance?

3. Are the instruments of the major variables valid and reliable?

    • How were variables defined? Were the instruments designed to measure a concept valid (did they measure what the researchers said they measured)? Were they reliable (did they measure a concept the same way every time they were used)?

4. How were the data analyzed?

    • What statistics were used to determine if the purpose of the study was achieved?

5. Were there any untoward events during the study?

    • Did people leave the study and, if so, was there something special about them?

6. How do the results fit with previous research in the area?

    • Did the researchers base their work on a thorough literature review?

7. What does this research mean for clinical practice?

    • Is the study purpose an important clinical issue?


Adapted with permission from Melnyk BM, Fineout-Overholt E, editors. Evidence-based practice in nursing and healthcare: a guide to best practice [forthcoming]. 2nd ed. Philadelphia: Wolters Kluwer Health/Lippincott Williams and Wilkins.

The questions in the critical appraisal guide seem a little strange to Rebecca and Chen. As they review the guide together, Carlos explains and clarifies each question. He suggests that as they try to figure out which are the essential elements of the studies, they focus on answering the first three questions: Why was the study done? What is the sample size? Are the instruments of the major variables valid and reliable? The remaining questions will be addressed later on in the critical appraisal process (to appear in future installments of this series).

Creating a study evaluation table. Carlos provides an online template for a table where Rebecca and Chen can put all the data they'll need for the RCA. Here they'll record each study's essential elements that answer the three questions and begin to appraise the 15 studies. (To use this template to create your own evaluation table, download the Evaluation Table Template at http://links.lww.com/AJN/A10.)

EXTRACTING THE DATA

Starting with level I evidence studies and moving down the hierarchy list, the EBP team takes each study and, one by one, finds and enters its essential elements into the first five columns of the evaluation table (see Table 1; to see the entire table with all 15 studies, go to http://links.lww.com/AJN/A11). The team discusses each element as they enter it, and tries to determine if it meets the criteria of the critical appraisal guide. These elements—such as purpose of the study, sample size, and major variables—are typical parts of a research report and should be presented in a predictable fashion in every study so that the reader understands what's being reported.

As the EBP team continues to review the studies and fill in the evaluation table, they realize that it's taking about 10 to 15 minutes per study to locate and enter the information. This is partly because the details matter: when they look for a description of the sample, for example, they must note how the sample was obtained, how many patients were included, and other characteristics of the sample, as well as any diagnoses or illnesses that could be important to the study outcome. They discuss with Carlos the likelihood that they'll need a few sessions to enter all the data into the table. Carlos responds that the more studies they review, the less time it will take. He also says that it takes less time to find the information when study reports are clearly written, and adds that the important information can usually be found in the abstract.

Rebecca and Chen ask if it would be all right to take out the “Conceptual Framework” column, since none of the studies they're reviewing have conceptual frameworks (which help guide researchers as to how a study should proceed). Carlos replies that it's helpful to know that a study has no framework underpinning the research and suggests they leave the column in. He says they can further discuss this point later on in the process when they synthesize the studies' findings. As Rebecca and Chen review each study, they enter its citation in a separate reference list so that they won't have to create this list at the end of the process. The reference list will be shared with colleagues and placed at the end of any RRT policy that results from this endeavor.

Carlos spends much of his time answering Rebecca's and Chen's questions concerning how to phrase the information they're entering in the table. He suggests that they keep it simple and consistent. For example, if a study indicated that it was implementing an RRT and hoped to see a change in a certain outcome, the nurses could enter “change in [the outcome] after RRT” as the purpose of the study. For studies examining the effect of an RRT on an outcome, they could state the purpose as “effect of RRT on [the outcome].” Using the same words to describe the same purpose, even though it may not have been stated exactly that way in the study, can help when they compare studies later on.

Rebecca and Chen find it frustrating that the study data are not always presented in the same way from study to study. They ask Carlos why the authors or journals wouldn't present similar information in a similar manner. Carlos explains that the purpose of publishing these studies may have been to disseminate the findings, not to compare them with other like studies. Rebecca realizes that she enjoys this kind of conversation, in which she and Chen have a voice and can contribute to a deeper understanding of how research impacts practice.

Table 1. Evaluation Table, Phase I


As Rebecca and Chen continue to enter data into the table, they begin to see similarities and differences across studies. They mention this to Carlos, who tells them they've begun the process of synthesis! Both nurses are encouraged by the fact that they're learning this new skill.

The MERIT trial is next in the stack of studies and it's a good trial to use to illustrate this phase of the RCA process. Set in Australia, the MERIT trial1 examined whether the introduction of an RRT (called a medical emergency team or MET in the study) would reduce the incidence of cardiac arrest, unplanned admissions to the ICU, and death in the hospitals studied. See Table 1 to follow along as the EBP team finds and enters the trial data into the table.

Design/Method. After Rebecca and Chen enter the citation information and note the lack of a conceptual framework, they're ready to fill in the “Design/Method” column. First they enter RCT for randomized controlled trial, which they find in both the study title and introduction. But MERIT is called a “cluster-randomised controlled trial,” and cluster is a term they haven't seen before. Carlos explains that it means that hospitals, not individuals or patients, were randomly assigned to the RRT. He says that the likely reason the researchers chose to randomly assign hospitals is that if they had randomly assigned individual patients or units, others in the hospital might have heard about the RRT and potentially influenced the outcome. To randomly assign hospitals (instead of units or patients) to the intervention and comparison groups is a cleaner research design.

To keep the study purposes consistent among the studies in the RCA, the EBP team uses inclusive terminology they developed after they noticed that different trials had different ways of describing the same objectives. Now they write that the purpose of the MERIT trial is to see if an RRT can reduce CR (cardiopulmonary arrest, or code, rates), HMR (hospital-wide mortality rates), and UICUA (unplanned ICU admissions). They use these same terms consistently throughout the evaluation table.

Sample/Setting. The study sample comprised 23 hospitals in Australia, averaging 340 beds per hospital. Twelve hospitals had an RRT (the intervention group) and 11 hospitals didn't (the control group).

Major Variables Studied. The independent variable is the variable that influences the outcome (in this trial, it's an RRT for six months). The dependent variable is the outcome (in this case, HMR, CR, and UICUA). In this trial, the outcomes didn't include do-not-resuscitate data. The RRT was made up of an attending physician and an ICU or ED nurse.

While the MERIT trial seems to perfectly answer Rebecca's PICOT question, it contains elements that aren't entirely relevant, such as the fact that the researchers collected information on how the RRTs were activated and provided their protocol for calling the RRTs. However, these elements might be helpful to the EBP team later on when they make decisions about implementing an RRT in their hospital. So that they can come back to this information, they place it in the last column, “Appraisal: Worth to Practice.”

After reviewing the studies to make sure they've captured the essential elements in the evaluation table, Rebecca and Chen still feel unsure about whether the information is complete. Carlos reminds them that a system-wide practice change—such as the change Rebecca is exploring, that of implementing an RRT in her hospital—requires careful consideration of the evidence and this is only the first step. He cautions them not to worry too much about perfection and to put their efforts into understanding the information in the studies. He reminds them that as they move on to the next steps in the critical appraisal process, and learn even more about the studies and projects, they can refine any data in the table. Rebecca and Chen feel uncomfortable with this uncertainty but decide to trust the process. They continue extracting data and entering it into the table even though they may not completely understand what they're entering at present. They both realize that this will be a learning opportunity and, though the learning curve may be steep at times, they value the outcome of improving patient care enough to continue the work—as long as Carlos is there to help.

In applying these principles for evaluating research studies to your own search for the evidence to answer your PICOT question, remember that this series can't contain all the available information about research methodology. Fortunately, there are many good resources available in books and online. For example, to find out more about sample size, which affects the likelihood that researchers' results occurred by chance (a random finding) rather than because the intervention brought about the expected outcome, search the Web using terms that describe what you want to know. If you type sample size findings by chance into a search engine, you'll find several Web sites that can help you better understand this study essential.
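The link between sample size and chance findings can also be made concrete with a small simulation. The sketch below is purely illustrative and is not from this series (the function name, margin, and trial counts are invented): it repeatedly "studies" a process whose true rate is known to be 50% and counts how often a sample appears, by chance alone, to deviate substantially from that rate.

```python
import random

def spurious_rate(n, trials=2000, margin=0.15, seed=0):
    """Fraction of simulated studies with sample size n whose observed
    rate deviates from the true rate (0.5) by more than `margin`,
    purely by chance."""
    rng = random.Random(seed)  # fixed seed so the example is reproducible
    spurious = 0
    for _ in range(trials):
        # each "subject" has a true 50% chance of the outcome
        heads = sum(rng.random() < 0.5 for _ in range(n))
        if abs(heads / n - 0.5) > margin:
            spurious += 1  # this study "found" a difference by chance
    return spurious / trials

small_study = spurious_rate(n=10)    # e.g., 10 subjects per study
large_study = spurious_rate(n=200)   # e.g., 200 subjects per study
```

In this sketch, roughly a third of the 10-subject samples show a large spurious deviation, while the 200-subject samples almost never do, which is why appraisers ask whether there were enough people in a study for its findings to be more than chance.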

Be sure to join the EBP team in the next installment of the series, “Critical Appraisal of the Evidence: part II,” when Rebecca and Chen will use the MERIT trial to illustrate the next steps in the RCA process, complete the rest of the evaluation table, and dig a little deeper into the studies in order to detect the “keepers.”

Ellen Fineout-Overholt is clinical professor and director of the Center for the Advancement of Evidence-Based Practice at Arizona State University in Phoenix, where Bernadette Mazurek Melnyk is dean and distinguished foundation professor of nursing, Susan B. Stillwell is clinical associate professor and program coordinator of the Nurse Educator Evidence-Based Practice Mentorship Program, and Kathleen M. Williamson is associate director of the Center for the Advancement of Evidence-Based Practice. Contact author: Ellen Fineout-Overholt, [email protected].

REFERENCE

  1. Hillman K, et al. Introduction of the medical emergency team (MET) system: a cluster-randomised controlled trial. Lancet 2005;365(9477):2091-7.

By Ellen Fineout-Overholt, PhD, RN, FNAP, FAAN, Bernadette Mazurek Melnyk, PhD, RN, CPNP/PMHNP, FNAP, FAAN, Susan B. Stillwell, DNP, RN, CNE, and Kathleen M. Williamson, PhD, RN

Critical Appraisal of the Evidence: part II

Digging deeper—examining the “keeper” studies.

This is the sixth article in a series from the Arizona State University College of Nursing and Health Innovation's Center for the Advancement of Evidence-Based Practice. Evidence-based practice (EBP) is a problem-solving approach to the delivery of health care that integrates the best evidence from studies and patient care data with clinician expertise and patient preferences and values. When delivered in a context of caring and in a supportive organizational culture, the highest quality of care and best patient outcomes can be achieved. The purpose of this series is to give nurses the knowledge and skills they need to implement EBP consistently, one step at a time.

In July's evidence-based practice (EBP) article, Rebecca R., our hypothetical staff nurse, Carlos A., her hospital's expert EBP mentor, and Chen M., Rebecca's nurse colleague, collected the evidence to answer their clinical question: “In hospitalized adults (P), how does a rapid response team (I) compared with no rapid response team (C) affect the number of cardiac arrests (O) and unplanned admissions to the ICU (O) during a three-month period (T)?” As part of their rapid critical appraisal (RCA) of the 15 potential “keeper” studies, the EBP team found and placed the essential elements of each study (such as its population, study design, and setting) into an evaluation table. In so doing, they began to see similarities and differences between the studies, which Carlos told them is the beginning of synthesis. We now join the team as they continue with their RCA of these studies to determine their worth to practice.

RAPID CRITICAL APPRAISAL

Carlos explains that typically an RCA is conducted along with an RCA checklist that's specific to the research design of the study being evaluated—and before any data are entered into an evaluation table. However, since Rebecca and Chen are new to appraising studies, he felt it would be easier for them to first enter the essentials into the table and then evaluate each study. Carlos shows Rebecca several RCA checklists and explains that all checklists have three major questions in common, each of which contains other more specific subquestions about what constitutes a well-conducted study for the research design under review (see Example of a Rapid Critical Appraisal Checklist).

Although the EBP team will be looking at how well the researchers conducted their studies and discussing what makes a “good” research study, Carlos reminds them that the goal of critical appraisal is to determine the worth of a study to practice, not solely to find flaws. He also suggests that they consult their glossary when they see an unfamiliar word. For example, the term randomization, or random assignment, is a relevant feature of research methodology for intervention studies that may be unfamiliar. Using the glossary, he explains that random assignment and random sampling are often confused with one another, but that they're very different. When researchers select subjects from within a certain population to participate in a study by using a random strategy, such as tossing a coin, this is random sampling. It allows the entire population to be fairly represented. But because it requires access to a particular population, random sampling is not always feasible. Carlos adds that many health care studies are based on a convenience sample—participants recruited from a readily available population, such as a researcher's affiliated hospital, which may or may not represent the desired population. Random assignment, on the other hand, is the use of a random strategy to assign study participants to the intervention or control group. Random assignment is an important feature of higher-level studies in the hierarchy of evidence.
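The distinction between random sampling and random assignment can be sketched in a few lines of code. This is a hypothetical illustration, not drawn from the article; the population size, sample size, and identifiers are invented:

```python
import random

rng = random.Random(42)  # fixed seed so the example is reproducible

# Random SAMPLING: drawing study subjects from a whole population
# so that the population is fairly represented.
population = [f"patient_{i}" for i in range(1000)]  # hypothetical population
sample = rng.sample(population, 20)                 # 20 randomly selected subjects

# Random ASSIGNMENT: allocating the enrolled subjects to the
# intervention or control group by a random strategy (a "coin toss").
groups = {"intervention": [], "control": []}
for subject in sample:
    groups["intervention" if rng.random() < 0.5 else "control"].append(subject)
```

Random sampling decides who gets into the study; random assignment decides which group each enrolled subject ends up in. A convenience sample skips the first step but can still use the second.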

Example of a Rapid Critical Appraisal Checklist

Rapid Critical Appraisal of Systematic Reviews of Clinical Interventions or Treatments


Carlos also reminds the team that it's important to begin the RCA with the studies at the highest level of evidence in order to see the most reliable evidence first. In their pile of studies, these are the three systematic reviews, including the meta-analysis and the Cochrane review, they retrieved from their database search (see “Searching for the Evidence,” and “Critical Appraisal of the Evidence: part I,” Evidence-Based Practice, Step by Step, May and July). Among the RCA checklists Carlos has brought with him, Rebecca and Chen find the checklist for systematic reviews.

As they start to rapidly critically appraise the meta-analysis, they discuss that it seems to be biased since the authors included only studies with a control group. Carlos explains that while having a control group in a study is ideal, in the real world most studies are lower-level evidence and don't have control or comparison groups. He emphasizes that, in eliminating lower-level studies, the meta-analysis lacks evidence that may be informative to the question. Rebecca and Chen—who are clearly growing in their appraisal skills—also realize that three studies in the meta-analysis are the same as three of their potential “keeper” studies. They wonder whether they should keep those studies in the pile, or if, as duplicates, they're unnecessary. Carlos says that because the meta-analysis only included studies with control groups, it's important to keep these three studies so that they can be compared with other studies in the pile that don't have control groups. Rebecca notes that more than half of their 15 studies don't have control or comparison groups. They agree as a team to include all 15 studies at all levels of evidence and go on to appraise the two remaining systematic reviews.

The MERIT trial1 is next in the EBP team's stack of studies. As we noted in the last installment of this series, MERIT is a good study to use to illustrate the different steps of the critical appraisal process. (Readers may want to retrieve the article, if possible, and follow along with the RCA.) Set in Australia, the MERIT trial examined whether the introduction of a rapid response team (RRT; called a medical emergency team or MET in the study) would reduce the incidence of cardiac arrest, death, and unplanned admissions to the ICU in the hospitals studied. To follow along as the EBP team addresses each of the essential elements of a well-conducted randomized controlled trial (RCT) and how they apply to the MERIT study, see their notes in Rapid Critical Appraisal of the MERIT Study.

ARE THE RESULTS OF THE STUDY VALID?

The first section of every RCA checklist addresses the validity of the study at hand—did the researchers use sound scientific methods to obtain their study results? Rebecca asks why validity is so important. Carlos replies that for a study's conclusion to be trusted—that is, relied upon to inform practice—the study must be conducted in a way that reduces bias or eliminates confounding variables (factors that influence how the intervention affects the outcome). Researchers typically use rigorous research methods to reduce the risk of bias. The purpose of the RCA checklist is to help the user determine whether rigorous methods have been used in the study under review, with most questions offering the option of a quick answer of “yes,” “no,” or “unknown.”

Rapid Critical Appraisal of the MERIT Study

1. Are the results of the study valid?


2. What are the results?


3. Will the results help me in caring for my patients?


Were the subjects randomly assigned to the intervention and control groups? Carlos explains that this is an important question when appraising RCTs. If a study calls itself an RCT but didn't randomly assign participants, then bias could be present. In appraising the MERIT study, the team discusses how the researchers randomly assigned entire hospitals, not individual patients, to the RRT intervention and control groups using a technique called cluster randomization. To better understand this method, the EBP team looks it up on the Internet and finds a PowerPoint presentation by a World Health Organization researcher that explains it in simplified terms: “Cluster randomized trials are experiments in which social units or clusters [in our case, hospitals] rather than individuals are randomly allocated to intervention groups.”2
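As a minimal sketch of what cluster randomization means in practice (the hospital identifiers and seed here are invented for illustration; MERIT's actual allocation was performed by an independent statistician), whole hospitals rather than patients are shuffled and split into intervention and control groups:

```python
import random

rng = random.Random(7)  # fixed seed so the example is reproducible

# In a cluster-randomized trial, whole social units (here, hospitals)
# are randomized, not individual patients. MERIT randomized 23 hospitals:
# 12 to the MET (RRT) intervention and 11 to the control condition.
hospitals = [f"hospital_{i}" for i in range(1, 24)]  # hypothetical identifiers
rng.shuffle(hospitals)
intervention, control = hospitals[:12], hospitals[12:]
```

Every patient in a given hospital belongs to that hospital's group; the unit of randomization is the cluster, not the individual.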

Was random assignment concealed from the individuals enrolling the subjects? Concealment helps researchers reduce potential bias, preventing the person(s) enrolling participants from recruiting them into a study with enthusiasm if they're destined for the intervention group or with obvious indifference if they're intended for the control or comparison group. The EBP team sees that the MERIT trial used an independent statistician to conduct the random assignment after participants had already been enrolled in the study, which Carlos says meets the criteria for concealment.

Were the subjects and providers blind to the study group? Carlos notes that it would be difficult to blind participants or researchers to the intervention group in the MERIT study because the hospitals that were to initiate an RRT had to know it was happening. Rebecca and Chen wonder whether their “no” answer to this question makes the study findings invalid. Carlos says that a single “no” may or may not mean that the study findings are invalid. It's their job as clinicians interpreting the data to weigh each aspect of the study design. Therefore, if the answer to any validity question isn't affirmative, they must each ask themselves: does this “no” make the study findings untrustworthy to the extent that I don't feel comfortable using them in my practice?

Were reasons given to explain why subjects didn't complete the study? Carlos explains that sometimes participants leave a study before the end (something about the study or the participants themselves may prompt them to leave). If all or many of the participants leave for the same reason, this may lead to biased findings. Therefore, it's important to look for an explanation for why any subjects didn't complete a study. Since no hospitals dropped out of the MERIT study, this question is determined to be not applicable.

Were the follow-up assessments long enough to fully study the effects of the intervention? Chen asks Carlos why a time frame would be important in studying validity. He explains that researchers must ensure that the outcome is evaluated for a long enough period of time to show that the intervention indeed caused it. The researchers in the MERIT study conducted the RRT intervention for six months before evaluating the outcomes. The team discusses how six months was likely adequate to determine how the RRT affected cardiopulmonary arrest rates (CR) but might have been too short to establish the relationship between the RRT and hospital-wide mortality rates (HMR).

Were the subjects analyzed in the group to which they were randomly assigned? Rebecca sees the term intention-to-treat analysis in the study and says that it sounds like statistical language. Carlos confirms that it is; it means that the researchers kept the hospitals in their assigned groups when they conducted the analysis, a technique intended to reduce possible bias. Even though the MERIT study used this technique, Carlos notes that in the discussion section the authors offer some important caveats about how the study was conducted, including poor intervention implementation, which may have contributed to MERIT's unexpected findings.1

Was the control group appropriate? Carlos explains that it's challenging to establish an appropriate comparison or control group without an understanding of how the intervention will be implemented. In this case, it may be problematic that the intervention group received education and training in implementing the RRT and the control group received no comparable placebo (meaning education and training about something else). But Carlos reminds the team that the researchers attempted to control for known confounding variables by stratifying the sample on characteristics such as academic versus nonacademic hospitals, bed size, and other important parameters. This method helps to ensure equal representation of these parameters in both the intervention and control groups. However, a major concern for clinicians considering whether to use the MERIT findings in their decision making involves the control hospitals' code teams and how they may have functioned as RRTs, which introduces a potential confounder into the study that could possibly invalidate the findings.

Were the instruments used to measure the outcomes valid and reliable? The overall measure in the MERIT study is the composite of the individual outcomes: CR, HMR, and unplanned admissions to the ICU (UICUA). These parameters were defined reasonably and didn't include do not resuscitate (DNR) cases. Carlos explains that since DNR cases are more likely to code or die, including them in the HMR and CR would artificially increase these outcomes and introduce bias into the findings.

As the team moves through the questions in the RCA checklist, Rebecca wonders how she and Chen would manage this kind of appraisal on their own. Carlos assures them that they'll get better at recognizing well-conducted research the more RCAs they do. Though Rebecca feels less than confident, she appreciates his encouragement nonetheless, and chooses to lead the team in discussion of the next question.

Were the demographics and baseline clinical variables of the subjects in each of the groups similar? Rebecca says that the intervention group and the control or comparison group need to be similar at the beginning of any intervention study because any differences in the groups could influence the outcome, potentially increasing the risk that the outcome might be unrelated to the intervention. She refers the team to their earlier discussion about confounding variables. Carlos tells Rebecca that her explanation was excellent. Chen remarks that Rebecca's focus on learning appears to be paying off.

WHAT ARE THE RESULTS?

As the team moves on to the second major question, Carlos tells them that many clinicians are apprehensive about interpreting statistics. He says that he didn't take courses in graduate school on conducting statistical analysis; rather, he learned about different statistical tests in courses that required students to look up how to interpret a statistic whenever they encountered it in the articles they were reading. Thus he had a context for how the statistic was being used and interpreted, what question the statistical analysis was answering, and what kind of data were being analyzed. He also learned to use a search engine, such as Google.com, to find an explanation for any statistical tests with which he was unfamiliar. Because his goal was to understand what the statistic meant clinically, he looked for simple Web sites with that same focus and avoided those with Greek symbols or extensive formulas that were mostly concerned with conducting statistical analysis.

How large is the intervention or treatment effect? As the team goes through the studies in their RCA, they decide to construct a list of statistics terminology for quick reference (see A Sampling of Statistics). The major statistic used in the MERIT study is the odds ratio (OR), a measure of the association between an intervention and an outcome. In the MERIT study, the control group did better than the intervention group, which is contrary to what was expected. Rebecca notes that the researchers discussed the possible reasons for this finding in the final section of the study. Carlos says that the authors' discussion about why their findings occurred is as important as the findings themselves. In this study, the discussion communicates to any clinicians considering initiating an RRT in their hospital that they should assess whether the current code team is already functioning as an RRT prior to RRT implementation.
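As a quick illustration of how an OR is calculated (using made-up counts for a hypothetical two-arm trial, not the MERIT data), the odds of the outcome in each group are computed and then compared as a ratio:

```python
# Odds ratio from a hypothetical 2x2 table (illustrative counts only,
# not taken from the MERIT study):
#
#                  event   no event
#  intervention      20       180
#  control           30       170

a, b = 20, 180  # intervention group: events, non-events
c, d = 30, 170  # control group: events, non-events

odds_intervention = a / b   # odds of the event with the intervention
odds_control = c / d        # odds of the event without it

odds_ratio = odds_intervention / odds_control
print(round(odds_ratio, 2))  # 0.63 -> an OR below 1.0 favors the intervention
```

An OR of 1.0 would mean the odds of the outcome were identical in the two groups; in MERIT, the surprise was that the ratio ran in the control group's favor.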

How precise is the intervention or treatment effect? Chen wants to tackle the precision of the findings and starts with the OR for HMR, CR, and UICUA, each of which has a confidence interval (CI) that includes the number 1.0. In an EBP workshop, she learned that a 1.0 in a CI for an OR means that the results aren't statistically significant, but she isn't sure what statistically significant means. Carlos explains that since the CIs for the OR of each of the three outcomes contain the number 1.0, these results could have been obtained by chance and therefore aren't statistically significant. For clinicians, chance findings aren't reliable findings, so they can't confidently be put into practice. Study findings that aren't statistically significant have a probability value (P value) of greater than 0.05. Statistically significant findings are those that aren't likely to be obtained by chance and have a P value of less than 0.05.
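The CI check Carlos describes can be sketched numerically. The example below uses the conventional large-sample formula for a 95% CI around an OR (computed on the log scale) with hypothetical counts; it illustrates the principle rather than reanalyzing MERIT:

```python
import math

# Hypothetical 2x2 counts (illustrative only, not MERIT data)
a, b = 20, 180  # intervention: events, non-events
c, d = 30, 170  # control: events, non-events

odds_ratio = (a / b) / (c / d)

# Large-sample 95% CI: exponentiate log(OR) +/- 1.96 standard errors,
# where SE(log OR) = sqrt(1/a + 1/b + 1/c + 1/d)
se = math.sqrt(1/a + 1/b + 1/c + 1/d)
lower = math.exp(math.log(odds_ratio) - 1.96 * se)
upper = math.exp(math.log(odds_ratio) + 1.96 * se)

# A CI that contains 1.0 means the result isn't statistically significant
significant = not (lower <= 1.0 <= upper)
print(f"OR {odds_ratio:.2f}, 95% CI {lower:.2f} to {upper:.2f}, significant: {significant}")
```

Here the interval (roughly 0.34 to 1.15) straddles 1.0, so, as with the three MERIT outcomes, the apparent benefit could be due to chance.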

WILL THE RESULTS HELP ME IN CARING FOR MY PATIENTS?

The team is nearly finished with their checklist for RCTs. The third and last major question addresses the applicability of the study—how the findings can be used to help the patients the team cares for. Rebecca observes that it's easy to get caught up in the details of the research methods and findings and to forget about how they apply to real patients.

Were all clinically important outcomes measured? Chen says that she didn't see anything in the study about how much an RRT costs to initiate and how to compare that cost with the cost of one code or ICU admission. Carlos agrees that providing costs would have lent further insight into the results.

What are the risks and benefits of the treatment? Chen wonders how to answer this since the findings seem to be confounded by the fact that the control hospital had code teams that functioned as RRTs. She wonders if there was any consideration of the risks and benefits of initiating an RRT prior to beginning the study. Carlos says that the study doesn't directly mention it, but the consideration of the risks and benefits of an RRT is most likely what prompted the researchers to conduct the study. It's helpful to remember, he tells the team, that often the answer to these questions is more than just “yes” or “no.”

A Sampling of Statistics


Is the treatment feasible in my clinical setting? Carlos acknowledges that because the nursing administration is open to their project and supports it by providing time for the team to conduct its work, an RRT seems feasible in their clinical setting. The team discusses that nursing can't be the sole discipline involved in the project. They must consider how to include other disciplines as part of their next step (that is, the implementation plan). The team considers the feasibility of getting all disciplines on board and how to address several issues raised by the researchers in the discussion section (see Rapid Critical Appraisal of the MERIT Study), particularly if they find that the body of evidence indicates that an RRT does indeed reduce their chosen outcomes of CR, HMR, and UICUA.

What are my patients' and their families' values and expectations for the outcome and the treatment itself? Carlos asks Rebecca and Chen to talk with their patients and their patients' families about what they think of an RRT and whether they have any objections to the intervention. If they do object, the patients or families will be asked to share their concerns.

The EBP team finally completes the RCA checklists for the 15 studies and finds them all to be “keepers.” There are some studies in which the findings are less than reliable; in the case of MERIT, the team decides to include it anyway because it's considered a landmark study. All the studies they've retained have something to add to their understanding of the impact of an RRT on CR, HMR, and UICUA. Carlos says that now that they've determined the 15 studies to be somewhat valid and reliable, they can add the rest of the data to the evaluation table.

Be sure to join the EBP team for “Critical Appraisal of the Evidence: part III” in the next installment in the series, when Rebecca, Chen, and Carlos complete their synthesis of the 15 studies and determine what the body of evidence says about implementing an RRT in an acute care setting.

Ellen Fineout-Overholt is clinical professor and director of the Center for the Advancement of Evidence-Based Practice at Arizona State University in Phoenix, where Bernadette Mazurek Melnyk is dean and distinguished foundation professor of nursing, Susan B. Stillwell is clinical associate professor and program coordinator of the Nurse Educator Evidence-Based Practice Mentorship Program, and Kathleen M. Williamson is associate director of the Center for the Advancement of Evidence-Based Practice. Contact author: Ellen Fineout-Overholt, [email protected].

REFERENCES

  1. Hillman K, et al. Introduction of the medical emergency team (MET) system: a cluster-randomised controlled trial. Lancet 2005;365(9477):2091-7.

  2. Wojdyla D. Cluster randomized trials and equivalence trials [PowerPoint presentation]. Geneva, Switzerland: Geneva Foundation for Medical Education and Research; 2005. http://www.gfmer.ch/PGC_RH_2005/pdf/Cluster_Randomized_Trials.pdf.

By Ellen Fineout-Overholt, PhD, RN, FNAP, FAAN, Bernadette Mazurek Melnyk, PhD, RN, CPNP/PMHNP, FNAP, FAAN, Susan B. Stillwell, DNP, RN, CNE, and Kathleen M. Williamson, PhD, RN

Critical Appraisal of the Evidence: part III

The process of synthesis: seeing similarities and differences across the body of evidence.

This is the seventh article in a series from the Arizona State University College of Nursing and Health Innovation's Center for the Advancement of Evidence-Based Practice. Evidence-based practice (EBP) is a problem-solving approach to the delivery of health care that integrates the best evidence from studies and patient care data with clinician expertise and patient preferences and values. When delivered in a context of caring and in a supportive organizational culture, the highest quality of care and best patient outcomes can be achieved. The purpose of this series is to give nurses the knowledge and skills they need to implement EBP consistently, one step at a time.

In September's evidence-based practice (EBP) article, Rebecca R., our hypothetical staff nurse, Carlos A., her hospital's expert EBP mentor, and Chen M., Rebecca's nurse colleague, rapidly critically appraised the 15 articles they found to answer their clinical question—“In hospitalized adults (P), how does a rapid response team (I) compared with no rapid response team (C) affect the number of cardiac arrests (O) and unplanned admissions to the ICU (O) during a three-month period (T)?”—and determined that they were all “keepers.” The team now begins the process of evaluation and synthesis of the articles to see what the evidence says about initiating a rapid response team (RRT) in their hospital. Carlos reminds them that evaluation and synthesis are synergistic processes and don't necessarily happen one after the other. Nevertheless, to help them learn, he will guide them through the EBP process one step at a time.

STARTING THE EVALUATION

Rebecca, Carlos, and Chen begin to work with the evaluation table they created earlier in this process when they found and filled in the essential elements of the 15 studies and projects (see “Critical Appraisal of the Evidence: part I,” July). Now each takes a stack of the “keeper” studies and systematically begins adding to the table any remaining data that best reflect the study elements pertaining to the group's clinical question (see Table 1; for the entire table with all 15 articles, go to http://links.lww.com/AJN/A17). They had agreed that a “Notes” section within the “Appraisal: Worth to Practice” column would be a good place to record the nuances of an article, their impressions of it, as well as any tips—such as what worked in calling an RRT—that could be used later when they write up their ideas for initiating an RRT at their hospital, if the evidence points in that direction. Chen remarks that although she thought their initial table contained a lot of information, this final version is more thorough by far. She appreciates the opportunity to go back and confirm her original understanding of the study essentials.

Table 1. Final Evaluation Table


The team members discuss the evolving patterns as they complete the table. The three systematic reviews, which are higher-level evidence, seem to have an inherent bias in that they included only studies with control groups. In general, these studies weren't in favor of initiating an RRT. Carlos asks Rebecca and Chen whether, now that they've appraised all the evidence about RRTs, they're confident in their decision to include all the studies and projects (including the lower-level evidence) among the “keepers.” The nurses reply with an emphatic affirmative! They tell Carlos that the projects and descriptive studies were what brought the issue to life for them. They realize that the higher-level evidence is somewhat in conflict with the lower-level evidence, but they're most interested in the conclusions that can be drawn from considering the entire body of evidence.

Rebecca and Chen admit they have issues with the systematic reviews, all of which include the MERIT study.1-4 In particular, they discuss how the authors of the systematic reviews made sure to report the MERIT study's finding that the RRT had no effect, but didn't emphasize the MERIT study authors' discussion about how their study methods may have influenced the reliability of the findings (for more, see “Critical Appraisal of the Evidence: part II,” September). Carlos says that this is an excellent observation. He also reminds the team that clinicians may read a systematic review for the conclusion and never consider the original studies. He encourages Rebecca and Chen in their efforts to appraise the MERIT study and comments on how well they're putting the pieces of the evidence puzzle together. The nurses are excited that they're able to use their new knowledge to shed light on the study. They discuss with Carlos how the interpretation of the MERIT study has perhaps contributed to a misunderstanding of the impact of RRTs.

Comparing the evidence. As the team enters the lower-level evidence into the evaluation table, they note that it's challenging to compare the project reports with studies that have clearly described methodology, measurement, analysis, and findings. Chen remarks that she wishes researchers and clinicians would write study and project reports similarly. Although each of the studies has a process or method determining how it was conducted, as well as how outcomes were measured, data were analyzed, and results interpreted, comparing the studies as they're currently written adds another layer of complexity to the evaluation. Carlos says that while it would be great to have studies and projects written in a similar format so they're easier to compare, that's unlikely to happen. But he tells the team not to lose all hope, as a format has been developed for reporting quality improvement initiatives called the SQUIRE Guidelines; however, they aren't ideal. The team looks up the guidelines online (www.squire-statement.org) and finds that the Institute for Healthcare Improvement (IHI) as well as a good number of journals have encouraged their use. When they review the actual guidelines, the team notices that they seem to be focused on research; for example, they require a research question and refer to the study of an intervention, whereas EBP projects have PICOT questions and apply evidence to practice. The team discusses that these guidelines can be confusing to the clinicians authoring the reports on their projects. In addition, they note that there's no mention of the synthesis of the body of evidence that should drive an evidence-based project. While the SQUIRE Guidelines are a step in the right direction for the future, Carlos, Rebecca, and Chen conclude that, for now, they'll need to learn to read these studies as they find them—looking carefully for the details that inform their clinical question.

Once the data have been entered into the table, Carlos suggests that they take each column, one by one, and note the similarities and differences across the studies and projects. After they've briefly looked over the columns, he asks the team which ones they think they should focus on to answer their question. Rebecca and Chen choose “Design/Method,” “Sample/Setting,” “Findings,” and “Appraisal: Worth to Practice” (see Table 1) as the initial ones to consider. Carlos agrees that these are the columns in which they're most likely to find the most pertinent information for their synthesis.

SYNTHESIZING: MAKING DECISIONS BASED ON THE EVIDENCE

Design/Method. The team starts with the “Design/Method” column because Carlos reminds them that it's important to note each study's level of evidence. He suggests that they take this information and create a synthesis table (one in which data are extracted from the evaluation table to better see the similarities and differences between studies) (see Table 2; studies 1-15). The synthesis table makes it clear that there is less higher-level evidence than lower-level evidence, which will affect the reliability of the overall findings. As the team noted, the higher-level evidence is not without methodological issues, which will increase the challenge of coming to a conclusion about the impact of an RRT on the outcomes.

Sample/Setting. In reviewing the “Sample/Setting” column, the group notes that the number of hospital beds ranged from 218 to 662 across the studies. Several types of hospitals were represented (4 teaching, 4 community, 4 with no mention of type, 2 acute care hospitals, and 1 public hospital). The evidence they've collected seems applicable, since their hospital is a community hospital.

Findings. To help the team better discuss the evidence, Carlos suggests that they refer to all projects or studies as “the body of evidence.” They don't want to get confused by calling them all studies, as they aren't, but at the same time continually referring to “studies and projects” is cumbersome. He goes on to say that, as part of the synthesis process, it's important for the group to determine the overall impact of the intervention across the body of evidence. He helps them create a second synthesis table containing the findings of each study or project (see Table 3; studies 1-15). As they look over the results, Rebecca and Chen note that RRTs reduce code rates, particularly outside the ICU, whereas unplanned ICU admissions (UICUA) don't seem to be as affected by them. However, 10 of the 15 studies and projects reviewed didn't evaluate this outcome, so it may not be fair to write it off just yet.

Table 2. The 15 Studies: Levels and Types of Evidence


Table 3. Effect of the Rapid Response Team on Outcomes


The EBP team can tell from reading the evidence that researchers consider the impact of an RRT on hospital-wide mortality rates (HMR) as the more important outcome; however, the group remains unconvinced that this outcome is the best for evaluating the purpose of an RRT, which, according to the IHI, is early intervention in patients who are unstable or at risk for cardiac or respiratory arrest.16 That said, of the 11 studies and projects that evaluated mortality, more than half (six) found that an RRT reduced it. Carlos reminds the group that four of those six articles are level-VI evidence and that some weren't research. The findings produced at this level of evidence are typically less reliable than those at higher levels of evidence; however, Carlos notes that two articles having level-VI evidence, a study and a project, had statistically significant (less likely to occur by chance, P < 0.05) reductions in HMR, which increases the reliability of the results.

Chen asks, since four level-VI reports documented that an RRT reduces HMR, should they put more confidence in findings that occur more than once? Carlos replies that it's not the number of studies or projects that determines the reliability of their findings, but the uniformity and quality of their methods. He recites something he heard in his Expert EBP Mentor program that helped to clarify the concept of making decisions based on the evidence: the level of the evidence (the design) plus the quality of the evidence (the validity of the methods) equals the strength of the evidence, which is what leads clinicians to act in confidence and apply the evidence (or not) to their practice and expect similar findings (outcomes). In terms of making a decision about whether or not to initiate an RRT, Carlos says that their evidence stacks up: first, the MERIT study's results are questionable because of problems with the study methods, and this affects the reliability of the three systematic reviews as well as the MERIT study itself; second, the reasonably conducted lower-level studies/projects, with their statistically significant findings, are persuasive. Therefore, the team begins to consider the possibility that initiating an RRT may reduce code rates outside the ICU (CRO) and may impact non-ICU mortality; both are outcomes they would like to address. The evidence doesn't provide equally promising results for UICUA, but the team agrees to include it in the outcomes for their RRT project because it wasn't evaluated in most of the articles they appraised.

As the EBP team continues to discuss probable outcomes, Rebecca points to data in the “Findings” column of one study showing a financial return on investment for an RRT.9 Carlos remarks to the group that this is only one study, and that they'll need to make sure to collect data on the costs of their RRT as well as the cost implications of the outcomes. They determine that the important outcomes to measure are: CRO, non-ICU mortality (excluding patients with do not resuscitate [DNR] orders), UICUA, and cost.

Appraisal: Worth to Practice. As the team discusses their synthesis and the decision they'll make based on the evidence, Rebecca raises a question that's been on her mind. She reminds them that in the “Appraisal: Worth to Practice” column, teaching was identified as an important factor in initiating an RRT and expresses concern that their hospital is not an academic medical center. Chen reminds her that even though theirs is not a designated teaching hospital with residents on staff 24 hours a day, it has a culture of teaching that should enhance the success of an RRT. She adds that she's already hearing a buzz of excitement about their project, and that their colleagues across all disciplines have been eager to hear the results of their review of the evidence. In addition, Carlos says that many resources in their hospital will be available to help them get started with their project and reminds them of their hospital administrators' commitment to support the team.

ACTING ON THE EVIDENCE

As they consider the synthesis of the evidence, the team agrees that an RRT is a valuable intervention to initiate. They decide to take the criteria for activating an RRT from several successful studies/projects and put them into a synthesis table to better see their major similarities (see Table 4; studies 4, 8, 9, 13, and 15). From this combined list, they choose the criteria for initiating an RRT consult that they'll use in their project (see Table 5). The team also begins discussing the ideal makeup of their RRT. Again, they go back to the evaluation table and look over the “Major Variables Studied” column, noting that the composition of the RRT varied among the studies/projects. Some RRTs had active physician participation (n = 6), some had designated physician consultation on an as-needed basis (n = 2), and some were nurse-led teams (n = 4). Most RRTs also had a respiratory therapist (RT). All RRT members had expertise in intensive care and many were certified in advanced cardiac life support (ACLS). They agree that their team will be composed of ACLS-certified members. It will be led by an acute care nurse practitioner (ACNP) credentialed for advanced procedures, such as central line insertion. Members will include an ICU RN and an RT who can intubate. They also discuss having physicians willing to be called when needed. Although no studies or projects had a chaplain on their RRT, Chen says that it would make sense in their hospital. Carlos, who's been on staff the longest of the three, says that interdisciplinary collaboration has been a mainstay of their organization. A physician, ACNP, ICU RN, RT, and chaplain are logical choices for their RRT.

Table 4. Defined Criteria for Initiating an RRT Consult


Table 5. Defined Criteria for Initiating an RRT Consult at Our Hospital


As the team ponders the evidence, they begin to discuss the next step, which is to develop ideas for writing their project implementation plan (also called a protocol). Included in this protocol will be an educational plan to let those involved in the project know information such as the evidence that led to the project, how to call an RRT, and the outcome measures that will indicate whether or not the implementation of the evidence was successful. They'll also need an evaluation plan. From reviewing the studies and projects, they also realize that it's important to focus their plan on evidence implementation, including carefully evaluating both the process of implementation and the project outcomes.

Be sure to join the EBP team in the next installment of this series as they develop their implementation plan for initiating an RRT in their hospital, including the submission of their project proposal to the ethics review board.

Ellen Fineout-Overholt is clinical professor and director of the Center for the Advancement of Evidence-Based Practice at Arizona State University in Phoenix, where Bernadette Mazurek Melnyk is dean and distinguished foundation professor of nursing, Susan B. Stillwell is clinical associate professor and program coordinator of the Nurse Educator Evidence-Based Practice Mentorship Program, and Kathleen M. Williamson is associate director of the Center for the Advancement of Evidence-Based Practice. Contact author: Ellen Fineout-Overholt, [email protected].

REFERENCES

  1. Chan PS, et al. Rapid response teams: a systematic review and meta-analysis. Arch Intern Med 2010;170(1):18-26.

  2. McGaughey J, et al. Outreach and early warning systems (EWS) for the prevention of intensive care admission and death of critically ill adult patients on general hospital wards. Cochrane Database Syst Rev 2007;3:CD005529.

  3. Winters BD, et al. Rapid response systems: a systematic review. Crit Care Med 2007;35(5):1238-43.

  4. Hillman K, et al. Introduction of the medical emergency team (MET) system: a cluster-randomised controlled trial. Lancet 2005;365(9477):2091-7.

  5. Sharek PJ, et al. Effect of a rapid response team on hospital-wide mortality and code rates outside the ICU in a children's hospital. JAMA 2007;298(19):2267-74.

  6. Chan PS, et al. Hospital-wide code rates and mortality before and after implementation of a rapid response team. JAMA 2008;300(21):2506-13.

  7. DeVita MA, et al. Use of medical emergency team responses to reduce hospital cardiopulmonary arrests. Qual Saf Health Care 2004;13(4):251-4.

  8. Mailey J, et al. Reducing hospital standardized mortality rate with early interventions. J Trauma Nurs 2006;13(4):178-82.

  9. Dacey MJ, et al. The effect of a rapid response team on major clinical outcome measures in a community hospital. Crit Care Med 2007;35(9):2076-82.

10. McFarlan SJ, Hensley S. Implementation and outcomes of a rapid response team. J Nurs Care Qual 2007;22(4):307-13.

11. Offner PJ, et al. Implementation of a rapid response team decreases cardiac arrest outside the intensive care unit. J Trauma 2007;62(5):1223-8.

12. Bertaut Y, et al. Implementing a rapid-response team using a nurse-to-nurse consult approach. J Vasc Nurs 2008;26(2):37-42.

13. Benson L, et al. Using an advanced practice nursing model for a rapid response team. Jt Comm J Qual Patient Saf 2008;34(12):743-7.

14. Hatler C, et al. Implementing a rapid response team to decrease emergencies. Medsurg Nurs 2009;18(2):84-90,126.

15. Bader MK, et al. Rescue me: saving the vulnerable non-ICU patient population. Jt Comm J Qual Patient Saf 2009;35(4):199-205.

16. Institute for Healthcare Improvement. Establish a rapid response team. n.d. http://www.ihi.org/IHI/topics/criticalcare/intensivecare/changes/establisharapidresponseteam.htm.

Part III: Implementing the Evidence

By Ellen Fineout-Overholt, PhD, RN, FNAP, FAAN, Kathleen M. Williamson, PhD, RN, Lynn Gallagher-Ford, RN, MSN, NE-BC, Bernadette Mazurek Melnyk, PhD, RN, CPNP/PMHNP, FNAP, FAAN, and Susan B. Stillwell, DNP, RN, CNE

Following the Evidence: Planning for Sustainable Change

The EBP team makes plans to implement an RRT in their hospital.

This is the eighth article in a series from the Arizona State University College of Nursing and Health Innovation's Center for the Advancement of Evidence-Based Practice. Evidence-based practice (EBP) is a problem-solving approach to the delivery of health care that integrates the best evidence from studies and patient care data with clinician expertise and patient preferences and values. When delivered in a context of caring and in a supportive organizational culture, the highest quality of care and best patient outcomes can be achieved. The purpose of this series is to give nurses the knowledge and skills they need to implement EBP consistently, one step at a time.

After the evidence-based practice (EBP) team of Rebecca R., Carlos A., and Chen M. synthesized and appraised the evidence they found to answer their clinical question, they concluded that rapid response teams (RRTs) were effective in reducing both code rates outside the ICU (CRO) and non-ICU mortality (NIM), excluding patients with do not resuscitate (DNR) orders (see “Critical Appraisal of the Evidence: part III,” November 2010). They also decided that a reduction in unplanned ICU admissions (UICUA) may be a reasonable outcome to expect. In addition, they chose the members of their RRT: an advanced practice nurse, a physician, an ICU staff nurse, a respiratory therapist, and a chaplain.

The team's next step is to develop a plan to implement an RRT in their hospital. They begin by planning how to collect baseline data on their chosen outcomes so they can evaluate the RRT's impact on those outcomes. Carlos explains to the team that measuring outcomes, typically before and after implementing an intervention, is essential to documenting the impact of the EBP implementation project on health care quality and/or patient outcomes.1 Rebecca adds that they'll also need to consider cost as an outcome and must plan for how to capture the costs of the RRT as well as evaluate the cost savings for positive changes in CRO, NIM, and UICUA.

THE IMPLEMENTATION PLAN

Rebecca and Chen are excited about the plan to implement an RRT in their hospital and tell Carlos how much they appreciate his ongoing support. Carlos checks in often with the team now that the project is under way. His experience as an expert EBP mentor has taught him the importance of assessing the team's progress at frequent intervals to see how he can support them.

To help the team develop a detailed plan for implementing an RRT in their hospital, Carlos provides them with an EBP Implementation Plan template that he used in his EBP Graduate Certificate Program (Figure 1). This plan was developed using the Advancing Research and Clinical Practice Through Close Collaboration (ARCC) model, in which EBP mentors are key facilitators of sustainable change. Carlos explains that even though they now have a template to guide them in the process, EBP implementation can be unpredictable. The team cannot anticipate all of the challenges or organizational nuances they may encounter in launching an RRT in their hospital.

Figure 1. EBP Implementation Plan Template


Preliminary checkpoint catch-up. The team reviews the template, beginning with the Preliminary Checkpoint, to determine which steps they've already taken and which they'll need to prepare for going forward. They've already completed checkpoints one through four, but two steps in the preliminary checkpoint still need to be addressed: identifying key stakeholders and acquiring approval from the institutional review board (IRB; sometimes called the ethics review board, or the human subjects or ethics committee). The team members discuss their roles in the project and agree that these may evolve as the implementation plan develops.

Key stakeholders. Carlos tells Rebecca and Chen that considering who would be stakeholders in a project—in this case, those individuals or groups that may be affected by or can influence the implementation of an RRT—is a step that's often overlooked. He explains that active stakeholders are those people who have a key role in making the project happen. Passive stakeholders are those who may not be actively involved in the project but who could promote or stymie its success. Carlos advises the team to consider all potential stakeholders, as theirs is an organization-wide project and some stakeholders may not be obvious. He asks Rebecca and Chen to think about the outcomes of the project and to which stakeholders throughout the hospital they'd be important. The team discusses that, as staff nurses, they don't always think about their work from an organizational standpoint. Carlos says that thinking about the project in an organization-wide context will help them figure out who needs to be on the team. He provides examples of stakeholders who would not only be critical to the RRT process but who might also have connections that could be important to the project's success. For example, connecting with key councils (practice, quality, critical care) or work groups (education, communications) may provide access to already-established processes for introducing a policy into the organization.

The team preliminarily identifies the members of their RRT, patients, staff nurses, and administrators as active stakeholders. They identify the finance, risk management, and education departments, mid-level managers, and the chief executive and chief nursing officers as potential passive stakeholders. The team agrees that although these may not be all of the stakeholders—more may be identified as planning continues—they're likely key players who need to be included in the implementation plan for now. Carlos tells the team that it's important to keep thinking about who will impact the project and whom the project will impact, so that everyone who needs to be on board with the plan is brought on early.

IRB approval. Carlos explains that an IRB is charged with making sure that subjects involved in a research study are safe and that the research is conducted in such a way that the findings are applicable to a broader population than just those in the study, which is known as generalizability.2 The team discusses whether they need to submit their implementation plan to their hospital's IRB for approval, since they're not conducting research. Although they'll be collecting outcomes data to evaluate whether they're achieving the expected outcomes cited in the literature, their evidence-based RRT intervention is a best practice improvement project, not a research study. Still, Carlos stresses that the team has an obligation to publish how their evidence-based intervention works in their hospital. He reminds them that the seventh step in the EBP process is to disseminate results so others can learn how a project was implemented and evaluated (the process) and whether the outcomes identified in the literature were obtained (the project outcomes, or end points) (see “The Seven Steps of Evidence-Based Practice,” January 2010). Carlos tells Rebecca and Chen that if they're going to publish their project, they'll need to submit their implementation plan for IRB approval. Moreover, they cannot collect their baseline data without prior IRB approval. The team discusses that when they write up their project, they can address some of the issues they had with the reporting of implementation projects in the literature, such as how differences in the formatting of these reports make it hard to synthesize the data (see “Critical Appraisal of the Evidence: part III,” November 2010). For these reasons, the team feels it's essential that they publish their project, so they'll pursue IRB approval.

Before the team begins writing up their implementation plan (which they will reformulate as an IRB proposal), they discuss an essential assumption they hold, which is that all patients who enter a hospital sign a “consent for treatment” expecting clinicians and others caring for them to provide the best care possible. Although patients may not refer to their care as evidence-based practice, the EBP team feels strongly that patients' expectations reflect professional practice in which daily decisions are made based on the best evidence available. With this expectation and their decision to publish the project in mind, the team discusses that the outcomes data will be used in a way that wasn't covered in the consent for treatment. Thus, the IRB review of their proposal should reveal any ways in which publishing the outcomes of the project could put recipients of the practice change at risk. In effect, the IRB would be reviewing the plan to make sure that the data from those patients who receive the intervention will be treated confidentially.

The team discusses that their RRT intervention is supported by studies of RRTs that were submitted to and approved by their respective IRBs, and that these approvals lend confidence to their intervention. Rebecca and Chen know it's important that their plan be reviewed, but they express concern about how to engage the IRB process. Carlos tells them that the IRB has several forms available to assist clinicians and researchers in pinpointing those aspects of their study or project that may increase risk of any kind to the people involved. The team seeks out more information on their hospital's Web site and finds the appropriate form for an implementation project. They agree to complete the form together as they develop their implementation plan.

Checkpoint five and forward. As the team moves on to Checkpoint Five in the EBP Implementation Plan template, Carlos talks to them about the critical importance of defining the purpose of the project.

Purpose of the project. A clearly defined purpose sets the entire planning process in motion, Carlos says; it's the touchstone of the project that the team can return to periodically to ensure they're on course. The team agrees that the purpose of their project is to implement and evaluate the effectiveness of an RRT in their hospital.

Baseline data collection. Carlos tells the team that collecting data prior to implementation of the RRT is important because it will help determine the extent of any already existing problems as well as enable the evaluation of the project outcomes.3 He explains that various data are generated within the hospital, which he calls internal evidence. The sources for these data are in various locations and are referred to in a variety of ways, such as the quality management, risk management, finance, and human resources departments; clinical systems; operational systems; and electronic medical records/information technology (see Table 1). Carlos tells the team that internal evidence that's collected for federal and state agencies or for regulatory and specialty organizations, such as the American Nurses Credentialing Center's Magnet Recognition Program, can also be used as outcomes. As an example, he provides reports from their hospital's quality committee that include data for CRO, UICUA, and overall hospital mortality. Chen asks what it will take to get data only for NIM. Carlos replies that he'll have to find out which department in the hospital creates quality committee reports and ask if NIM data can be culled from the overall hospital mortality data. He explains that there are many data repository systems within the hospital and that each system may collect different data and may require a different way of requesting those data. Carlos helps the team understand that obtaining data may be complicated at times, but one's success greatly depends on knowing whom to ask.

Table 1. Potential Sources and Types of Internal Evidence


To help the team capture the outcomes data they'll need to obtain at baseline and again after the project, Carlos recommends they work with the information technology and finance departments. Chen asks if putting the outcomes in a chart would help to clearly outline the “who, what, when, where, and how” of baseline data collection. The team agrees that this would help them understand the financial outcomes (sometimes referred to as the business case), the process and structure of the project,4 and the patient outcomes that will be measured at the end of the project (see Table 2).

The process. The team discusses how to ensure that the process of implementing an RRT in their hospital goes well. Rebecca reminds the team of their own observations, and those of the MERIT trial authors, on how the MERIT trial was conducted, particularly on how the RRT protocol was implemented.5 (The control hospitals' code teams may have functioned as RRTs, which could explain why there was no difference between the control group and the intervention group; see “Critical Appraisal of the Evidence: part II,” September 2010.) She asks the group for ideas about how they can collect data on the process of implementing the RRT to demonstrate that they have done it well. Carlos says that how well they implement the intervention is called the fidelity of the intervention. He recommends keeping good notes on the work being done. They talk about the need to develop a project data collection tool that staff can use when calling the RRT. Chen volunteers to develop this form, using similar forms in the literature they reviewed as a basis. Carlos suggests that Chen see if anything new has been published, since it's been a few months since they completed their literature search.

Table 2. Considerations in Measuring Outcomes for the RRT Implementation Project


The team talks about the importance of measuring the costs and benefits of the RRT, especially its benefits divided by its costs, which Carlos notes is called its return on investment (ROI). Carlos suggests that the team meet with the finance department to discuss their plan to measure the costs and ROI of an RRT. Rebecca volunteers to be responsible for obtaining the financial data and requests that Carlos be available for support, if needed, to which he readily agrees. Chen agrees to work with Carlos to ensure that data on CRO, UICUA, and NIM are systematically collected and to focus on the process outcomes (how well the RRT project is implemented). For example, if there was a breach in protocol implementation—in how well the RRT protocol was delivered to the active stakeholders, for instance—that breach could lead to an outcome that was different from what was expected. This unexpected outcome may not be because the RRT intervention didn't work, but because of a glitch in the process: the RRT protocol wasn't delivered as planned.
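The ROI calculation Carlos describes can be sketched as a simple ratio of benefits to costs. The sketch below is illustrative only; all dollar figures are hypothetical, not drawn from the project.

```python
# Hypothetical sketch of the return on investment (ROI) calculation:
# the project's benefits divided by its costs. All figures below are
# invented for illustration only.

def roi(total_benefits: float, total_costs: float) -> float:
    """Return on investment, expressed as benefits divided by costs."""
    if total_costs <= 0:
        raise ValueError("total_costs must be positive")
    return total_benefits / total_costs

# Assumed annual figures (hypothetical): RRT staffing, training, and
# equipment costs vs. savings from avoided codes and unplanned ICU days.
rrt_costs = 250_000.00
cost_savings = 400_000.00

print(f"ROI: {roi(cost_savings, rrt_costs):.2f}")  # an ROI above 1.0 means benefits exceed costs
```

An ROI greater than 1.0 indicates that the measured savings exceed the program's costs, which is the kind of business case the finance department would help the team substantiate.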

As work on the project is planned and discussed, the roles of the team naturally begin to fall into place. As part of formulating the implementation plan, they discuss what questions about data collection they'll need to ask in order to measure their outcomes of CRO, UICUA, and NIM (see Questions to Ask in Preparation for Data Collection). Carlos reflects back on the definitions and measures the team discussed in their appraisal of the evidence and how the different definitions of mortality (whether it included DNR cases, for example) led to some confusion about comparing the impact of an RRT on that variable (see “Critical Appraisal of the Evidence: part II,” September 2010). He explains the importance of how the data are measured (what mechanisms are used, for example, and why and how to know they're good methods for measuring the data). He says that in order to determine the impact of an EBP project such as the implementation of an RRT, the data must be measurable (able to be counted), accessible (the team has access to the data), and user friendly (understandable and able to be used without difficulty). Chen and Rebecca decide they want to create a data collection plan that meets all of these criteria. With the questions on data collection to guide them, they realize that multiple disciplines within the hospital (not only nursing) will be involved in helping to collect the baseline data for the project.

Questions to Ask in Preparation for Data Collection

• How are the outcomes defined?

• What data will be used to measure the outcomes?

• Who “owns” the data needed for this project?

• Who will (or already does) generate the data needed for the project?

• What special clearances are required to access the data?

• What are the restrictions for sharing these data?

• Who will be responsible for collecting the data?

• When will the data be collected?

• Where are the data located in the hospital?

• How will the evidence-based practice (EBP) team access the data?

• How will the EBP team store the data?

• What program will the EBP team use to analyze the data?

• Who will help the EBP team with data analysis?

• How will the EBP team manage the data (data entry, cleaning, labeling)?

From the team's discussion, Rebecca and Chen put together a preliminary plan for evaluating the RRT project, keeping the following key areas in mind: the strategic case, business case, resources case, and process measures (see Table 2). They also add the following process outcomes to their plan: the number of staff educated on the RRT, the number of RRT calls, the primary reasons for calling an RRT, and family and staff satisfaction with the RRT process.

In the March column, join Rebecca, Chen, and Carlos as they move through the next several steps of the EBP implementation process, including identifying and planning for the barriers they may encounter as the EBP change is rolled out, as well as providing system-wide education on the intended use and expected outcomes of an RRT.

Ellen Fineout-Overholt is clinical professor and director of the Center for the Advancement of Evidence-Based Practice (CAEP) at Arizona State University in Phoenix, where Lynn Gallagher-Ford is assistant director, Susan B. Stillwell is associate director, and Bernadette Mazurek Melnyk is dean and distinguished foundation professor of nursing at the College of Nursing and Health Innovation. Kathleen M. Williamson is former associate director of the CAEP. Contact author: Ellen Fineout-Overholt, [email protected].

REFERENCES

1. Fineout-Overholt E, Johnston L. Teaching EBP: implementation of evidence: moving from the evidence to action. Worldviews Evid Based Nurs 2006;3(4):194-200.

2. Department of Health and Human Services. 45 CFR 46.101 Public welfare. Protection of human subjects; 2009. http://www.hhs.gov/ohrp/humansubjects/guidance/45cfr46.htm#46.101.

3. Melnyk BM, Fineout-Overholt E, Stillwell SB, Williamson K. Transforming healthcare quality through innovations in evidence-based practice. In: Porter-O'Grady T, Malloch K, editors. Innovation leadership: creating the landscape of health care. 2nd ed. Sudbury, MA: Jones and Bartlett; 2010. p. 167-94.

4. Wyszewianski L. Basic concepts of healthcare quality. In: Ransom ER, et al., editors. The healthcare quality book. 2nd ed. Chicago: Health Administration Press; 2008. p. 25-42.

5. Hillman K, et al. Introduction of the medical emergency team (MET) system: a cluster-randomised controlled trial. Lancet 2005;365:2091-7.

By Lynn Gallagher-Ford, MSN, RN, NE-BC, Ellen Fineout-Overholt, PhD, RN, FNAP, FAAN, Bernadette Mazurek Melnyk, PhD, RN, CPNP/PMHNP, FNAP, FAAN, and Susan B. Stillwell, DNP, RN, CNE

Implementing an Evidence-Based Practice Change

Beginning the transformation from an idea to reality.

This is the ninth article in a series from the Arizona State University College of Nursing and Health Innovation's Center for the Advancement of Evidence-Based Practice. Evidence-based practice (EBP) is a problem-solving approach to the delivery of health care that integrates the best evidence from studies and patient care data with clinician expertise and patient preferences and values. When delivered in a context of caring and in a supportive organizational culture, the highest quality of care and best patient outcomes can be achieved. The purpose of this series is to give nurses the knowledge and skills they need to implement EBP consistently, one step at a time.

In January's evidence-based practice (EBP) article, Rebecca R., our hypothetical staff nurse, Carlos A., her hospital's expert EBP mentor, and Chen M., Rebecca's nurse colleague, began to develop their plan for implementing a rapid response team (RRT) at their institution. They clearly identified the purpose of their RRT project, the key stakeholders, and the various outcomes to be measured, and they learned their institutional review board's requirements for reviewing their proposal. To determine their next steps, the team consults their EBP Implementation Plan (see Figure 1 in “Following the Evidence: Planning for Sustainable Change,” January). They'll be working on items in checkpoints six and seven: specifically, engaging the stakeholders, getting administrative support, and preparing for and conducting the stakeholder kick-off meeting.

Strategies to Engage Stakeholders

• Spend time and effort building trust.

• Understand stakeholders' interests.

• Solicit input from stakeholders.

• Connect in a collaborative way.

• Promote active engagement in establishing metrics and outcomes to be measured.

ENGAGING THE STAKEHOLDERS

Carlos, Rebecca, and Chen reach out to the key stakeholders to tell them about the RRT project by meeting with them in their offices or calling them on the phone. Carlos leads the team through a discussion of strategies to promote success in this critical step in the implementation process (see Strategies to Engage Stakeholders). One of the strategies, connect in a collaborative way, seems especially applicable to this project. Each team member is able to meet with a stakeholder in person, fill them in on the RRT project, describe the purpose of an RRT, discuss their role in the project, and answer any questions. They also tell each stakeholder about the initial project meeting to be held in a few weeks.

In anticipation of the stakeholder kick-off meeting, Carlos and the team discuss the fundamentals of preparing for an important meeting, such as how to set up an agenda, draft key documents, and conduct the meeting. They begin to discuss a time and date for the meeting. Carlos suggests that Rebecca and Chen meet with their nurse manager to update her on the project's progress and request her help in scheduling the meeting.

SECURING ADMINISTRATIVE SUPPORT

After Rebecca updates her manager, Pat M., on the RRT project, Pat says she's impressed by the team's work to date and offers to help them move the project forward. She suggests that, since they've already invited the stakeholders to the upcoming meeting, they use e-mail to communicate the meeting's time, date, and place. As they draft this e-mail together, Pat shares the following tips to improve its effectiveness:

• communicate the essence and importance of the e-mail in the subject line

• write an e-mail that's engaging, but brief and to the point

• introduce yourself

• explain the project

• welcome the recipients to the project and/or team and invite them to the meeting

• explain why their attendance is critical

• request that they read certain materials prior to the meeting (and attach those documents to the e-mail)

• let them know whom to contact with questions

• request that they RSVP

• thank them for their participation

Before they send the e-mail (see Sample E-mail to RRT and Stakeholders), the team wants to make sure they don't miss anyone, so they review and include all of the RRT members and stakeholders. They realize that it's important to invite the manager of each of the stakeholders and disciplines represented on the RRT and ask them to also bring a staff representative to the meeting. In addition, they copy the administrative directors of the stakeholder departments on the e-mail to ensure that they're fully aware of the project.

PREPARING FOR THE KICK-OFF MEETING

The group determines that the draft documents they'll need to prepare for the stakeholder kick-off meeting are:

• an agenda for the meeting

• the RRT protocol

• an outcomes measurement plan

• an education plan

• an implementation timeline

• a projected budget

To expedite completion of the documents, the team divides them up among themselves. Chen volunteers to draft the RRT protocol and outcomes measurement plan. Carlos assures her that he'll guide her through each step. Rebecca decides to partner with her unit educator to draft the education plan. Carlos agrees to take the lead in drafting the meeting agenda, implementation timeline, and projected budget, but says that since this is a great learning opportunity, he wants Rebecca and Chen to be part of the drafting process.

Drafting documents. Carlos tells the team that the purpose of a draft is to initiate discussion and give the stakeholders an opportunity to have input into the final product. All feedback is a positive sign of the stakeholders' involvement, he says, and shouldn't be perceived as criticism. Carlos also offers to look for any templates from other EBP projects that may be helpful in drafting the documents. He tells Rebecca and Chen that he's confident they'll do a great job and shares his excitement at how the team has progressed in planning an EBP practice change.

RRT protocol. Chen starts to draft the RRT protocol using one of the hospital's protocols as a template for the format, as well as definitions and examples of protocols, policies, and procedures from other organizations and the literature. She returns to the articles from the team's original literature search (see “Critical Appraisal of the Evidence: part I,” July 2010) to see if there is information, previously appraised, that will be helpful in this current step in the process. She recalls that the team had set aside some articles because they didn't directly answer the PICOT question about whether to implement an RRT, but they did have valuable information on how to implement one. In reviewing these articles, Chen selects a review of the literature (though not a systematic review) that includes many examples of RRT membership rosters and protocols used in other hospitals, which will be helpful in drafting her RRT protocol document.1 Chen includes this expert opinion article because the information it contains is consistent with the higher-level evidence already being used in the project. Using both higher and lower levels of evidence, when appropriate, allows the team to use the best information available in formulating their RRT protocol.

Sample E-mail to RRT and Stakeholders

To: ICU Nurse Manager, 3 North Nurse Manager, Respiratory Therapy Director, Medical Director of ICU, Director of Acute Care NP Hospitalists, Director of Spirituality Department

cc: EBP Council Chair, VP Nursing, VP Medical Affairs, ICU Nursing Director, Medical–Surgical Nursing Director, Finance Department Director, Communications Department Director, Risk Management Director, Education Department Director, HIMS (Medical Records) Director, Quality/Performance Improvement Director, Clinical Informatics Director, Pharmacy Director

Subject: Invitation to the Rapid Response Project Stakeholder Kick-off Meeting

Good afternoon. I would like to introduce myself. My name is Rebecca R. I am a staff nurse III on the 3 North medical–surgical unit. You have either spoken with me or with one of my colleagues, Carlos A. or Chen M., about an important evidence-based initiative that will help improve the quality of care for our patients. The increasing patient acuity on our unit and throughout the hospital, and the frequent need for patients to be transferred to the ICU, prompted us to ask important questions about patient outcomes. For the past few months, Carlos, Chen, and I have been investigating how our hospital can reduce the number of codes, particularly outside the ICU. We have conducted a thorough search for and appraisal of current available evidence, which we would like to share with you.

Our team and our managers would like to invite you to participate in a kick-off meeting to discuss an exciting evidence-based initiative to improve the quality of patient care in our hospital. The meeting will be held on March 1, 2011, at 10 AM in the Innovation Conference Room on the 2nd floor. It is very important that you attend this meeting as you have been identified as a critical participant in this project. We need your input and support as we move forward. So please plan to attend the meeting or send a representative. To ensure that we have sufficient materials for the meeting, please RSVP to Mary J., unit secretary on 3 North.

I want to thank you in advance for your help with and support of this project. I look forward to seeing you at the meeting. If you have any questions, please feel free to contact me or any of the RRT project team members.

Rebecca R. and the RRT Project Team

As she writes, Chen discovers that their hospital's protocols and other practice documents don't include a section on supporting evidence. Knowing that evidence is critically important to the RRT protocol, she discusses this with the clinical practice council representative from her unit, who advises her to add the section to her draft document. He promises to present this issue at the next council meeting and obtain the council's approval to add an evidence section to all future practice documents. Chen reviews the finished product before she submits it for the team's review (see RRT Protocol Draft for Review1-10).

Outcomes measurement plan. Based on the appraised evidence and the many discussions Rebecca and Chen have had about it, Chen drafts a document that lists the outcomes the team will measure to demonstrate the success of their project, where they'll obtain this information, and who will gather it (see Table 1). In drafting this plan, Chen realizes that they don't have all the information they need, and she's concerned that they're not ready to move forward with the stakeholder kick-off meeting. But when Chen calls Carlos and shares her concern, Carlos reminds her that the document is a draft and that the required information will be addressed at the meeting.

Education plan. Rebecca reaches out to Susan B., the clinical educator on her unit, and requests her help in drafting the education plan. Susan tells Rebecca how much she enjoys the opportunity to work collaboratively with staff nurses on education projects and how happy she is to see an EBP project being implemented. Rebecca shares her RRT project folder (containing all the information relative to the project) with Susan, focusing on the education about the project she thinks the staff will need. Susan commends the team for its efforts, as a good deal of the necessary work is already done. She asks Rebecca to clarify both the ultimate goal of the project and what's most important to the team about its rollout on the unit. Rebecca thoughtfully responds that the ultimate goal is to ensure that patients receive the best care possible. What's most important about its rollout is that the staff sees the value of an RRT to the patients and its positive impact on their own workload. She adds that it's important to her that the project be conducted in a way that feels positive to the staff as they work toward sustainable changes in their practices.

Susan and Rebecca discuss which clinicians will need education on the RRT. They plan to use a variety of mechanisms, including in-services, e-mails, newsletters, and flyers. From their conversation, Susan agrees to draft an education plan using a template she developed for this purpose. The template prompts her to put in key elements for planning an education program: learner objectives, key content, methodology, faculty, materials, time frame, and room location. Susan fills the template with information Rebecca has given her, adding information she knows already from her experience as an educator. When Rebecca and Susan meet to review the plan, Rebecca is amazed to see how their earlier conversation has been transformed into a comprehensive document (see the Education Plan for RRT Implementation at http://links.lww.com/AJN/A19).

RRT Protocol Draft for Review

Current evidence supports the effectiveness of an RRT in decreasing adverse events in patients who exhibit specific clinical parameters. Evidence-based recommendations include that RRTs should be available on general units of hospitals, 24 hours a day and seven days a week, staffed by intensive care clinicians, and activated based on established clinical criteria. The RRT serves a dual purpose: providing early intervention care to at-risk patients and educating clinical staff in recognizing and managing these patients.

The RRT is available to respond to and assist bedside staff in caring for patients who develop signs or symptoms of clinical deterioration.

RRT Members

RRT members are all ACLS certified. They include:

Team Leader: Acute Care NP Hospitalist (credentialed in advanced procedures)

Team Members: ICU RN

Respiratory Therapist (trained in intubation)

Physician Intensivist (ICU MD on call and available to the RRT)

Hospital Chaplain

Initiation of RRT Consult

An RRT consult can be initiated by any bedside clinician. Consults should be initiated based on the following patient status criteria.

RRT Consult Initiation Criteria

[Tables: RRT consult initiation criteria]

Scope of the RRT

The RRT can be expected to perform any/all of the following interventions:

Nasopharyngeal/oropharyngeal suctioning

Oxygen therapy

Initiation of CPAP

Initiation of nebulized medications

Intravenous fluid bolus(es)

Intravenous fluid bolus(es) with medication

CPR

The RRT can be expected to perform any/all of the following invasive procedures:

Endotracheal intubation

Intravenous line insertion

Intraosseous line insertion

Arterial line insertion

Central line insertion

RRT Consult Procedure

1. Assess patient relative to the above criteria.

2. If any of the above criteria are identified, initiate the RRT consult by calling 5-5555. The operator will request the caller's location, the patient's name, the patient's location, and the reason for RRT activation. This call will generate both pages to the RRT members and an overhead announcement.

3. The RRT will arrive within five minutes of the call.

4. Be prepared to provide the RRT with appropriate information about the patient using the SBAR communication method. (See standardized communication protocol no. 7.)

5. While awaiting the arrival of the RRT, consider initiating any/all of the following actions:

    • Call for a colleague to help you

    • Set up oxygen apparatus

    • Set up suction apparatus

    • Call for the code cart to be brought to the area

    • Communicate with the patient's family (if present); tell them what you're doing and why and that someone will be here shortly to help them

    • Obtain proper documentation tools to be used during the RRT consult

RRT Arrival

When the RRT arrives:

1. Provide information as indicated above.

2. Participate in the care of your patient and remain with the patient and the RRT.

3. Assist the RRT as needed.

4. Document activities, interventions performed, and patient responses to interventions.

5. Work with the chaplain to ensure that the patient's family is informed of the situation at intervals.

6. Assist in arranging for transfer of the patient to a higher level of care if indicated.

7. Provide a detailed report to the nurse accepting the patient on the receiving unit, utilizing the SBAR communication method.


ACLS = advanced cardiac life support; cc = cubic centimeters; CPAP = continuous positive airway pressure; CPR = cardiopulmonary resuscitation; hr = hours; HR = heart rate; ICU = intensive care unit; LOC = level of consciousness; MD = medical doctor; min = minute; mmHg = millimeters of mercury; NP = nurse practitioner; RN = registered nurse; RR = respiratory rate; RRT = rapid response team; SBAR = situation-background-assessment-recommendation; SBP = systolic blood pressure; SpO2 = arterial oxygen saturation; UOP = urine output; WBC = white blood count.

REFERENCES

  1. Choo CL, et al. Rapid response team: a proactive strategy in managing haemodynamically unstable adult patients in the acute care hospitals. Singapore Nursing Journal 2009;36(4):17-22.

  2. Winters BD, et al. Rapid response systems: a systematic review. Crit Care Med 2007;35(5):1238-43.

  3. Hillman K, et al. Introduction of the medical emergency team (MET) system: a cluster-randomised controlled trial. Lancet 2005;365(9477):2091-7.

  4. Sharek PJ, et al. Effect of a rapid response team on hospital-wide mortality and code rates outside the ICU in a children's hospital. JAMA 2007;298(19):2267-74.

  5. Mailey J, et al. Reducing hospital standardized mortality rate with early interventions. J Trauma Nurs 2006;13(4):178-82.

  6. Dacey MJ, et al. The effect of a rapid response team on major clinical outcome measures in a community hospital. Crit Care Med 2007;35(9):2076-82.

  7. Benson L, et al. Using an advanced practice nursing model for a rapid response team. Jt Comm J Qual Patient Saf 2008;34(12):743-7.

  8. Hatler C, et al. Implementing a rapid response team to decrease emergencies. Medsurg Nurs 2009;18(2):84-90, 126.

  9. Bader MK, et al. Rescue me: saving the vulnerable non-ICU patient population. Jt Comm J Qual Patient Saf 2009;35(4):199-205.

10. DeVita MA, et al. Use of medical emergency team responses to reduce cardiopulmonary arrests. Qual Saf Health Care 2004;13(4):251-4.

Table 1. Plan for Measuring RRT Success (Draft for Discussion)


Agenda and timeline. The team meets to draft the meeting agenda, implementation timeline, and budget. Carlos explains the purposes of a meeting agenda: to serve as a guide for the participants and to promote productivity and efficiency. They draft an agenda that includes the key issues to be shared with the stakeholders as well as time for questions, feedback, and discussion (see the Rapid Response Team Kick-off Meeting Agenda at http://links.lww.com/AJN/A20).

Carlos describes how the timeline creates a structure to guide the project (see Table 2 at http://links.lww.com/AJN/A21). The team discusses how to maintain the project's momentum, keeping it moving forward while accommodating unexpected delays or resistance. There are a few items on the timeline that Carlos thinks may be underestimated—for example, the team may need more than a month to meet with other departments because of already heavily scheduled calendars—but he decides to let it stand as drafted, knowing that it's a guide and can be adjusted as the need arises.

Budget. Carlos discusses the budget with the team. Rebecca shares a list of what she thinks they'll need for the project and the team decides to put this information into a table format so they can more easily identify any missing information. Before they construct the table, they walk through an imaginary RRT call to be sure they've thought of all the budget implications of the project. They realize they didn't include the cost of each employee attending an education session, so they add that figure to the budget. They also realize that they're missing hourly pay rates for the different types of employees involved. Carlos tells Rebecca that he'll work with the Human Resources Department to obtain this information before the meeting so they can complete the budget (see Table 3).

REVIEWING THEIR WORK

The next time they meet, the EBP team reviews the agenda for the meeting and the documents they'll be presenting. The clerical person on Rebecca and Chen's floor (sometimes called the unit secretary) has kept a record of who's attending the meeting, and the team is pleased that most of the stakeholders are coming. Carlos informs the team that he received notification that their institutional review board submission has been approved. They're excited to check that step off on their EBP Implementation Plan.

Carlos suggests that they discuss the kick-off meeting in detail and brainstorm how to prepare for any negative responses to their project. Rebecca and Chen remark that they've never considered that someone might not like the idea of an RRT. Carlos says he's not surprised; often the passion that builds around an EBP project, and the hard work put into it, precludes taking time to think about “why not.” The team talks about the importance of stopping occasionally during any project to assess the environment and participants, recognizing that people often have different perspectives and that not everyone may support a change. Carlos reminds the team that people may simply resist changing the routine, and that this can lead to the sabotage of a new idea. As they explore this possible resistance, Rebecca shares her concern that with everyone in the hospital so busy, adding something new may be too stressful for some people. Carlos tells Rebecca and Chen that helping project participants realize they'll be doing the same thing they've been doing, just in a more efficient and effective way, generally helps them accept a new process. He reminds them that many of the people on the RRT are the same people who currently care for patients who code or are admitted to the ICU; with the RRT protocol, however, they'll be intervening earlier to improve patients' outcomes. The team feels confident that, if needed, they can use this approach at the kick-off meeting.

Table 3. RRT Project Budget (Draft for Discussion)


CONDUCTING THE KICK-OFF MEETING

Rebecca and Chen are both nervous and excited about the meeting. Carlos has made sure they're well prepared by helping them set up the meeting room, computer, PowerPoint presentation, and handout packets containing the agenda and draft documents. The team is ready, and they've placed themselves at the head of the table so they can be visible and accessible. As the invitees arrive, they welcome each one individually, thanking them for participating in this important meeting. The team makes sure that the meeting is guided by the agenda and moves smoothly from the presentation of information to thoughtful questions and a lively discussion.

Join the EBP team next time as they launch the RRT project and tackle the real-world issues of project implementation.

Lynn Gallagher-Ford is assistant director of the Center for the Advancement of Evidence-Based Practice at Arizona State University in Phoenix, where Ellen Fineout-Overholt is clinical professor and director, Susan B. Stillwell is associate director, and Bernadette Mazurek Melnyk is dean and distinguished foundation professor of nursing at the College of Nursing and Health Innovation. Contact author: Lynn Gallagher-Ford, [email protected].

REFERENCE

  1. Choo CL, et al. Rapid response team: a proactive strategy in managing haemodynamically unstable adult patients in the acute care hospitals. Singapore Nursing Journal 2009;36(4):17-22.

By Lynn Gallagher-Ford, MSN, RN,
NE-BC, Ellen Fineout-Overholt,
PhD, RN, FNAP, FAAN, Bernadette
Mazurek Melnyk, PhD, RN, CPNP/
PMHNP, FNAP, FAAN, and Susan B.
Stillwell, DNP, RN, CNE

Rolling Out the Rapid Response Team

The pilot phase begins.

This is the 10th article in a series from the Arizona State University College of Nursing and Health Innovation's Center for the Advancement of Evidence-Based Practice. Evidence-based practice (EBP) is a problem-solving approach to the delivery of health care that integrates the best evidence from studies and patient care data with clinician expertise and patient preferences and values. When delivered in a context of caring and in a supportive organizational culture, the highest quality of care and best patient outcomes can be achieved. The purpose of this series is to give nurses the knowledge and skills they need to implement EBP consistently, one step at a time.

In March's evidence-based practice (EBP) article, Rebecca R., our hypothetical staff nurse, Carlos A., her hospital's expert EBP mentor, and Chen M., Rebecca's nurse colleague, conducted their stakeholder kick-off meeting to explain to rapid response team (RRT) members and stakeholders the details of their plan to implement an RRT at their institution. At the meeting, the stakeholders were engaged and supportive, offering valuable feedback and suggestions to enhance the project. By the end of the meeting, all RRT members and their respective managers had committed to participate. No major changes were made to any of the draft documents; however, one minor adjustment was made when the advanced practice nurse (APN) hospitalist suggested that the EBP team include all the systemic inflammatory response syndrome (SIRS) criteria in the RRT protocol.

Among the many commitments made by stakeholders to move the project forward were the following:

• The Finance Department representative offered, during the discussion of RRT project outcomes, to determine the cost per day of unplanned ICU admissions (UICUA) and to create a report to establish the baseline average length of stay for UICUA in their hospital (for a list of outcomes, see Table 1 in “Implementing an Evidence-Based Practice Change,” March).

• The Health Information Management Systems/Medical Records Department representative committed to creating a data documentation tool to facilitate collection of the following from completed RRT records: code rates outside the ICU, RRT response time and duration, UICUA, and RRT events that prevent ICU stays.

• The vice president of medical affairs and the APN hospitalist agreed to notify the hospital's medical staff of the RRT project in a letter and in the staff's monthly newsletter; they also agreed to address any questions medical staff might have about the project.

• The Quality/Performance Improvement Department director suggested that she, Carlos, Rebecca, Chen, and the project's pilot unit quality council representative have a follow-up meeting to organize the outcomes data collection and reporting processes needed to demonstrate the success of the project.

After the meeting, Rebecca, Chen, and Carlos reviewed how it went and were pleased by what they had accomplished as a team. Now they're ready to begin the RRT implementation, guided by their overall plan and by the project timeline they'd created earlier.

PREPARING FOR THE RRT PILOT LAUNCH

As they get ready to initiate the pilot project, Rebecca, Chen, and Carlos refer to the EBP Implementation Plan (see Figure 1 in “Following the Evidence: Planning for Sustainable Change,” January) to determine their next steps. They've already identified their own clinical unit as the RRT pilot unit and involved their nurse manager and clinical educator, so they've completed checkpoint six. Now they prepare a “to do” list of the activities they need to complete prior to the RRT pilot launch (see ‘To Do' List for RRT Pilot Rollout).

Rebecca and Chen attend their unit's upcoming staff meetings to introduce the evidence-based RRT project to the staff nurses. They ask the unit's clinical educator, Susan B., to attend too, so she can share the schedule for the RRT education program; that way, staff can plan to attend one of the in-services before the RRT project begins. At the staff meetings, the EBP team explains the project and the reasons for and importance of the pilot phase that will take place on their unit, and expresses appreciation for their colleagues' support.

Although the staff is supportive of the project, some nurses are concerned about being the “test” unit. The EBP team acknowledges these concerns and, after the staff meetings are over, discusses them with the unit's nurse manager, Pat M. Carlos suggests that they implement the RRT only on the day shift for the first week of the project so that Rebecca and Chen can be available to the staff during the first RRT calls. He says the presence of the EBP champions during initial RRT implementation on the unit is critical, because they can

• provide expertise and education.

• support their staff colleagues.

• monitor RRT response time.

• observe interactions between the RRT and staff.

• obtain immediate feedback about the RRT process.

• identify any problems with the RRT process.

• speak with any resisters to the RRT project.

• work with the nurse manager (or other departmental leadership) to address resistance.

• make timely adjustments to the RRT process, if needed.

• provide immediate feedback to the RRT and staff.

‘To Do' List for RRT Pilot Rollout

• Attend pilot unit staff meetings

• Create poster and/or flyer to inform staff of rollout date

• Order “RRT Launch” buttons

• Meet with Quality/Performance Improvement Department director and unit-based quality council representative

• Meet with Clinical Informatics Department to develop electronic data documentation tool

• Confirm that outcomes measures can be collected

    ◦ Finance Department follow-up

    ◦ Health Information Management Systems/Medical Records Department follow-up

• Check with RRT members to make sure they're ready to go

Pat agrees and commits to using the small number of budgeted per diem staff hours needed to allow Rebecca and Chen to adjust their work hours during the first week of the rollout.

Rebecca meets with the Quality/Performance Improvement Department director and quality council representative to make a plan for outcomes data collection, analysis, and reporting. At the meeting, the quality department director describes a tool her department uses to present outcomes data, called a “dashboard.” Resembling the dashboard of a car, the tool schematically portrays the status of a number of quality initiatives and their progress toward meeting their goals; it makes it possible to get a comprehensive and concise picture of many critical performance indicators at a glance. They discuss the project outcomes to be measured, how they'll obtain the raw data, and the estimated amount of RRT data they can expect. The quality department director and council representative agree that the volume of data seems relatively small, and they offer to enter the raw data into the clinical unit's quality/performance improvement database, so it can be included on the dashboard, if Rebecca and Chen forward it to them by the 15th of each month. Rebecca and Chen enthusiastically commit to this monthly time frame.

Next, Rebecca and Chen meet with the Clinical Informatics Department nurse, Karen H., to discuss creating a data documentation tool, accessible from the electronic medical record, for staff and RRT members to use. They describe the RRT project to Karen and share the protocol with her. After reviewing the documents and getting answers to her questions, Karen recommends that rather than create a whole new tool for this project, they modify the current code blue documentation tool. Karen and the team review the code sheet together and agree that modifying the current tool makes sense because

• it's more efficient than creating a new tool.

• it'll be easier for staff to learn the revised tool since it's based on one with which they're already familiar.

Karen commits to creating the documentation tool, but tells Rebecca and Chen that it'll be at least two weeks before she can begin because many other informatics projects are ahead of theirs in the queue. This two-week delay isn't a problem for Rebecca and Chen; they designed flexibility into their implementation plan, so the wait won't push back the rollout. The RRT documentation tool is delivered in two weeks as promised, and Susan B., the clinical educator, is able to include it in the in-services, which are conducted on schedule.

Days before the RRT pilot's official rollout, Rebecca, Chen, and Carlos meet to review their final preparations, check in with Pat, the nurse manager, and Susan, the clinical educator, and post the RRT rollout flyers around the unit (see RRT Rollout Flyer). Rebecca and Chen tell Carlos they want to create a “spirit of celebration” on the morning of the rollout to get people excited about it. They decide to bring breakfast and give out “RRT Launch” buttons on rollout day. Carlos agrees that it's a great idea to try to make the first day of a new process positive and memorable. He particularly likes the idea of giving out buttons that will serve as visual triggers that something new and exciting is about to happen.

RRT Rollout Flyer

RAPID RESPONSE TEAM

STARTS AUGUST 1, 2011


Key Points to Remember:

An RRT consult can be initiated by any bedside clinician.

The RRT will arrive within five minutes of the call.

The full RRT protocol is posted at the nurses' station and in the policy book.

RRT consult procedure:

1. Assess patient using the RRT protocol.

2. If any RRT criteria are identified, initiate the RRT consult by calling 5-5555. The operator will request your location, the patient's name, the patient's location, and the reason for RRT activation.

3. Provide the RRT with information about the patient using the SBAR reporting protocol.

While waiting for the RRT to arrive:

Initiate any/all of the following actions:

• Call for a colleague to help you.

• Set up oxygen apparatus.

• Set up suction apparatus.

• Call for the code cart to be brought to the area.

• Communicate with the patient's family (if present); tell them what you're doing and why and that someone will be there shortly to help them.

• Obtain proper documentation tools to be used during the RRT consult.

When the RRT arrives:

1. Provide information using SBAR.

2. Participate in the care of your patient and remain with the patient and the RRT.

3. Assist the RRT as needed.

4. Document activities, interventions performed, and patient responses to interventions.

5. Ensure that the patient's family is informed of the situation at reasonable intervals.

6. Assist in arranging for transfer of the patient to a higher level of care if indicated, and provide a detailed report to the receiving nurse, using SBAR.

If you have any questions, please contact Rebecca or Chen @ x1234.

Thank you for your support of this evidence-based initiative!

THE RRT PILOT ROLLOUT

On the first day of the rollout, Rebecca, Chen, and Carlos are on the unit before the day shift begins. They decorate the lounge, invite the staff to enjoy a compli­mentary breakfast when they take their break, and give every staff member a button to remind them to spread the word that it's RRT Launch Day.

A patient is stabilized. Although the first three days begin and end with no RRT calls, on the fourth day, while Rebecca is working, one of her nurse colleagues, Jessica T., approaches and asks her to come look at a patient she thinks is decompensating. As they proceed to the patient's room, they take a copy of the RRT protocol from the nurse's desk as a guide. Jessica, the bedside nurse, assesses her patient and determines that the patient meets the criteria for calling the RRT. She follows the RRT protocol step by step, while Rebecca stays close by to support her. The team arrives within five minutes, and there is a flurry of activity. Jessica and the RRT work together to care for the patient. As a result of their timely interventions, the patient is stabilized and remains on the unit.

After most of the RRT members leave, Jessica, Rebecca, and the ICU nurse on the RRT sit together for a few minutes to debrief the RRT call and experience. The ICU nurse tells Jessica what a great job she did assessing and caring for her patient. Jessica appreciates the compliment and feels good about the RRT intervention and outcome. Rebecca tells both nurses how well they shared their knowledge and skills to turn a potentially challenging situation into a wonderful learning experience. The nurses express to Rebecca how satisfying it was to know they were giving this patient the best care possible. Rebecca is pleased by how well the RRT process worked and how positive the experience was for everyone involved. Rebecca calls Carlos and Chen to share with them the great success of their first RRT consult. The EBP team is happy the first test of the RRT intervention is over and that it was a success!

A patient codes. The RRT pilot continues to proceed well until its third week, when Chen arrives at work and finds that a patient coded on the unit the day before and was transferred to the ICU; the RRT was never utilized. Chen contacts Carlos and shares this information and her concerns with him. Carlos offers to review the patient's chart that afternoon with the APN hospitalist to determine whether this patient had been an appropriate RRT candidate and what, if any, follow-up would be appropriate.

Carlos meets with Rebecca, Chen, and Pat, the nurse manager, the following day to discuss his findings. He informs them that the patient was indeed an appropriate candidate for an RRT consult; however, there's no clear indication in the documentation as to why the RRT wasn't called by the staff nurse who cared for the patient that day. They decide that Pat and Rebecca will talk with the staff nurse, Joanne S., to hear why, from her perspective, the RRT consult wasn't initiated. When Pat and Rebecca meet with Joanne, they first ask her whether she had attended an RRT in-service and had known the RRT was available.

“Yes, I went to the in-service,” Joanne says, “but I never thought about the new RRT thing the other day.” She continues: “I've been a nurse for 25 years and I know when a patient is going bad, how to call a code, and that our ICU is always there when needed.” Rebecca asks further questions: Was there a particular reason Joanne chose not to use the RRT? Would she be willing to use it in the future? What would help encourage her to use it? Joanne responds, “I'm not opposed to new ideas; after all, there's a new idea on this unit every day, for goodness' sake! I might use this new team someday, but I have to see how it works for other people first. I'm just not sure about it yet.”

Pat M. recognizes that this is a critical moment in the EBP project implementation process, one in which she, as nurse manager, needs to provide leadership. She recalls a list of key strategies Carlos had shared with her regarding the manager's role in the successful implementation of an EBP project (see Managers' Key Strategies to Promote Successful Implementation of an EBP Project). She utilizes several of these strategies in her discussion with Joanne, particularly those that focus on her expectations of both leadership and staff. Joanne agrees to review the RRT criteria and protocol. Rebecca reminds Joanne that the purpose of the RRT is to improve patient outcomes. Joanne says she'll try to remember to use it next time.

After the meeting with Joanne, Pat and the EBP team meet and agree that this missed opportunity wasn't related to the RRT process. Instead, it concerned a single individual who seemed to be resistant to a change in practice. They decide that there's no need to follow up with the entire staff at this time, and that Rebecca will check in with Joanne in a few days. Carlos reminds the team that resistance to change is common and that paying timely, direct attention to situations like Joanne's is an effective strategy to get and keep everyone on board with an evidence-based project. Carlos congratulates Pat and the EBP team on their handling of this situation.

While they're together, the team uses this opportunity to review how the project is proceeding overall and to update their EBP Implementation Plan. After they check off several items in checkpoints seven, eight, and nine, such as addressing stakeholder concerns, launching the project, and reviewing its progress, they turn back to Pat and ask her for any feedback on the launch. She says that she's been talking with the nursing staff and attending physicians regularly over the past three weeks and is excited to share with the team that the feedback has been overwhelmingly positive. Pat believes that the team's extensive planning has been critical to the project's success. Pat ends by saying, “In my opinion, there have been no real problems or major setbacks.”

Rebecca tells the team that she has communicated with both the Health Information Management Systems/Medical Records Department director and the Finance Department manager, and that they've been successful in collecting the data they committed to collect at the kick-off meeting. Chen has been following up on how well the electronic data documentation tool has been working for the staff and RRT members. The clinical informatics team made some minor adjustments to the tool over the three-week pilot; overall, however, it has worked very well. The EBP team agrees that the success of the experience on the pilot unit has made them confident about rolling out the program hospital-wide. They make a special note to continue to monitor the RRT processes as utilization of the RRT in the hospital increases. As the final step in the pilot, the EBP team contacts each of the key stakeholders to obtain feedback about the pilot and inform them of the hospital-wide rollout.

Managers' Key Strategies to Promote Successful Implementation of an EBP Project

1. Become an expert on the EBP project and activities implemented on the unit.

2. Communicate information about the EBP project with staff as early and often as possible.

3. Encourage staff feedback about the EBP project.

4. Speak positively about the EBP champion(s) and the EBP project.

5. Demonstrate, through actions, support of the EBP champion(s) and the EBP project.

6. Set clear expectations for staff regarding the EBP project and related activities.

7. Provide support and resources to staff as the EBP project is implemented and integrated.

8. Be present and available to staff during critical phases of EBP project implementation.

9. Hold staff accountable to the EBP project and related activities.

10. Provide timely follow-up or redirection if evidence-based activities are not carried out (whether it be by an individual or group).

11. Acknowledge staff efforts toward successful implementation of the EBP project (highlight specific staff if possible).

12. Celebrate milestones during the EBP project.

THE HOSPITAL-WIDE ROLLOUT

When the EBP team meets to plan the hospital-wide rollout, they discuss the feedback they received from stakeholders, pilot unit leadership, and staff. They confirm that each member of the RRT is prepared for the hospital-wide rollout to begin. Carlos then leads the team through a structured discussion of how they'll roll out the RRT protocol to all hospital units. They determine that to replicate their pilot unit success, they'll need the buy-in of the nurse manager and clinical educator on every unit, and they'll need to identify an RRT staff nurse champion on each unit. The EBP team decides to request time to introduce the project and present the proposed timeline at next month's nurse manager, clinical educator, and EBP council meetings in order to finalize the hospital-wide rollout plan with these key individuals.

When Rebecca and Chen attend the council meetings, they find that most of the participants are already aware of the RRT project, as it has received much attention and praise throughout the hospital over the past several months. The nurse managers are eager to adopt the program on their units, and they commit to support and promote the project. They also ask some excellent questions. The pediatric manager asks, “Will the RRT respond to pediatric patients and newborns in the nursery?” The obstetrics manager asks, “Will the RRT respond to obstetric patients who are having nonobstetrical clinical problems?” The endoscopy suite manager asks, “Can we call the RRT for outpatients?” Rebecca and Chen don't have immediate answers for all of these questions. They tell the nurse managers that they'll take the questions back to the whole EBP team for discussion and promise to have answers within a week. The clinical educators are very supportive of the project, and Susan B. has already begun to work with them to plan their staff in-services. The EBP council representatives are also quite positive: they tell Rebecca and Chen that they've discussed the RRT project and unanimously decided they'll be “the best RRT champions ever.” The EBP team is pleased with the enthusiasm and support from every group. They feel confident about proceeding with the EBP implementation process and rolling out the RRT hospital-wide.

Join the EBP team next time to learn the results of the hospital-wide rollout, how outcomes data were collected and evaluated, and about their plans to disseminate the results of their experiences so others can learn from them.

Lynn Gallagher-Ford is clinical assistant professor and assistant director of the Center for the Advancement of Evidence-Based Practice at Arizona State University in Phoenix, where Ellen Fineout-Overholt is clinical professor and director, Susan B. Stillwell is clinical professor and associate director, and Bernadette Mazurek Melnyk is dean and distinguished foundation professor of nursing at the College of Nursing and Health Innovation. Contact author: Lynn Gallagher-Ford, [email protected].

Part IV: Disseminating the Evidence and Sustaining the Change

By Ellen Fineout-Overholt, PhD, RN, FNAP, FAAN, Lynn Gallagher-Ford, MSN, RN, NE-BC, Bernadette Mazurek Melnyk, PhD, RN, CPNP/PMHNP, FNAP, FAAN, and Susan B. Stillwell, DNP, RN, CNE

Evaluating and Disseminating the Impact of an Evidence-Based Intervention: Show and Tell

After the data are gathered and analyzed, it's time to share what you've learned.

This is the 11th article in a series from the Arizona State University College of Nursing and Health Innovation's Center for the Advancement of Evidence-Based Practice. Evidence-based practice (EBP) is a problem-solving approach to the delivery of health care that integrates the best evidence from studies and patient care data with clinician expertise and patient preferences and values. When delivered in a context of caring and in a supportive organizational culture, the highest quality of care and best patient outcomes can be achieved. The purpose of this series has been to give nurses the knowledge and skills they need to implement EBP consistently, one step at a time.

In the previous article in this series, Carlos A., Rebecca R., and Chen M. completed the unit-based pilot phase of the rapid response team (RRT) rollout. They found that the RRT worked well, and they are now ready to evaluate its impact on their chosen outcomes. The hospital leadership as well as the staff had agreed upon the following outcomes: code rates outside the ICU (CRO), unplanned ICU admissions (UICUA), and hospital-wide mortality rates (excluding do-not-resuscitate situations) (HMR). Karen H., the nurse from the Clinical Informatics Department, and the pilot unit's quality council representative devised a mechanism to successfully export the RRT data from the electronic medical record (EMR) to a database that would serve as a repository until the data could be analyzed. The other departments collecting RRT outcomes data have been forwarding their information to Rebecca and Chen, who've asked Karen for help in getting this additional data onto the hospital's quality dashboard. Karen suggests that she and the EBP team meet to discuss ways to upload all of the data to one place and create a single comprehensive and regularly available summary of the RRT outcomes.

At that meeting, Karen suggests that the EBP team work out a plan with the Quality/Performance Improvement Department to analyze the data before they're posted on the dashboard, where they'll be available to everyone on the hospital intranet. The EBP team members share their excitement about taking the next step in the EBP implementation process. But when Carlos contacts the director of the department, the director informs him that it may be impossible for quality/performance improvement to take on this project at this time, as their analysts are already overloaded with work. Chen mentions that she's heard that university researchers may be interested in these kinds of projects, and that collaboration with a university might lead to further projects, which could keep the kind of excitement generated by the RRT initiative going. Carlos says that he has some connections at the local university and offers to discuss this opportunity with them.

GATHERING AND EVALUATING THE RESULTS

Carlos calls the dean of research at the hospital's academic partner to inquire about interest in collaborating on the RRT project, particularly from a research perspective. The dean says there's a researcher who is very interested in the processes of codes and may want to get on board with their project. Carlos asks about data analysis and interpretation as part of that collaboration, and the dean replies that the university has resources they can use to accomplish that part of the evaluation process. Carlos lets Rebecca and Chen know of this opportunity and sends an e-mail to Debra P., the faculty researcher, outlining the RRT project and asking if she's interested in participating. Debra responds the next day, indicating her delight to be involved. The EBP team is excited that they'll have this opportunity to partner with the local university and accomplish their goal of performing data analysis.

Carlos discusses the initial RRT data with Debra, and they analyze it together. First, they look at the mean outcomes of CRO, HMR, and UICUA that were obtained from the real-time RRT reports. When they compare these outcomes over time, they see that the mean CRO was reduced, but that the mean HMR and UICUA hadn't changed from baseline. Debra asks whether there was any variation in the occupancy rate over the period of the pilot rollout; if there was, then the proportion of patients experiencing codes before and during the rollout might not be comparable. When Carlos replies that the occupancy rate remained consistent, Debra recommends that they conduct an independent t test to see if there's a statistically significant difference between CRO before and after the pilot phase. They find that the decrease in CRO is statistically significant, which means that the RRT had a positive effect on this important outcome that most likely wasn't a chance finding. The EBP team can't wait to share this great news with the unit. The team reviews with Debra the code records and RRT comments to determine if there were any RRT processes that might have had an impact on UICUA and HMR, and thereby explain the lack of a change from baseline. The team also provides Debra with questions about how the pilot went (who called the RRT and why? what challenges did the RRT face?) that they believe would be important to ask the stakeholders during the debriefing after the pilot. Debra says that these questions will be very helpful as she looks over the RRT processes. Having them in mind, she can see if the answers exist in the current data, if more data need to be gathered, or if further questions need to be asked.
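The before-and-after comparison Debra recommends can be sketched in a few lines of code. The following is a minimal, illustrative example of an independent (Welch's) t test on monthly code rates outside the ICU; the monthly rates, group sizes, and variable names are invented for demonstration and are not the team's actual data:

```python
# Illustrative sketch: independent (two-sample, Welch's) t test comparing
# monthly code rates outside the ICU (CRO) before vs. during an RRT pilot.
# The rates below are hypothetical, for demonstration only.
import math
from statistics import mean, variance

baseline = [6.1, 5.8, 6.4, 5.9, 6.2, 6.0]  # codes per 1,000 discharges, pre-RRT
pilot = [4.9, 5.1, 4.7, 5.0, 4.8, 5.2]     # codes per 1,000 discharges, during pilot

def independent_t(a, b):
    """Welch's t statistic and degrees of freedom (no equal-variance assumption)."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / math.sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

t, df = independent_t(baseline, pilot)
print(f"t = {t:.2f}, df = {df:.1f}")
# A |t| well above ~2 with roughly 10 degrees of freedom corresponds to
# p < .05: the drop in CRO is unlikely to be a chance finding.
```

In practice a statistics package would also report the exact p value; the point here is simply that the test compares the difference in group means against the variability within each group.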

After taking time to reflect on these processes, the EBP team works with Debra to revise them. Debra explains that it's important to plan the hospital-wide rollout so that all unit managers and staff are confident they understand the protocol, processes, and desired outcomes. They ask Pat M., the manager of the pilot unit, and two of her EBP champions to relate their experiences with the RRT to the executive leadership team, the unit managers' meeting, and the unit council leadership meeting. The unit managers are especially glad to hear Pat's story and her answers to their questions.

As the EBP team continues to discuss plans for a hospital-wide RRT, Debra's suggestions for how to improve the RRT processes in the larger rollout are easily integrated into the plan. For example, she proposes a simple way to examine the outcomes of HMR and UICUA: since ICU deaths were included in the HMR data, she suggests that they ask the Health Information Management Systems/Medical Records (HIMS) Department to compare the ICU deaths that occurred despite the presence of an RRT with those that occurred without an RRT present. Debra explains to the team that these data may help them to have a better picture of the impact of the RRT on HMR. She applies the same approach to UICUA, comparing the ICU admissions of those who'd been treated by the RRT with those who hadn't. She further explains how the team can continue to observe the changes in these two outcomes over time. The EBP team is glad to hear that Debra will continue to help as they collect and analyze these data.
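The comparison Debra proposes, stratifying ICU deaths by whether the RRT was involved in the patient's care, amounts to a simple grouped rate calculation. A sketch of that idea follows; the record structure and counts are invented for illustration and do not represent real hospital data:

```python
# Hypothetical sketch of the subgroup comparison: stratify ICU deaths by
# whether the RRT was involved in the patient's care. Data are illustrative.
from collections import Counter

# Each record: (rrt_involved, died_in_icu) -- invented for demonstration
records = [
    (True, False), (True, False), (True, True), (True, False),
    (False, True), (False, True), (False, False), (False, True),
]

def mortality_by_group(records):
    """Return ICU mortality rate for RRT-involved vs. non-RRT patients."""
    deaths, totals = Counter(), Counter()
    for rrt, died in records:
        totals[rrt] += 1
        deaths[rrt] += died  # bool adds as 0 or 1
    return {group: deaths[group] / totals[group] for group in totals}

rates = mortality_by_group(records)
print(f"RRT involved: {rates[True]:.0%}, no RRT: {rates[False]:.0%}")
```

The same grouping logic applies to unplanned ICU admissions: replace the death flag with an admission flag and compare the two groups' rates over time.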

In preparation for the hospital-wide rollout, the EBP council confirms that EBP champions on each unit will be responsible for working with the educators to conduct education sessions about the RRT. Each unit participating in the rollout has already had three in-services on all shifts, posters put up in the bathroom and staff lounge, and an algorithm posted at the unit hub explaining how to call the RRT. Finally, nurses and secretaries from all units are invited to a meeting at which Debra and the EBP team answer all questions concerning the procedure for calling an RRT.

After the hospital-wide project begins, the EBP team asks HIMS if all is well with the baseline data and how the outcomes data are being collected. HIMS informs them that indeed the staff is doing a terrific job of entering the data into the EMR. The initial RRT reports indicate that the hospital-wide rollout is going well and that the RRT protocol is being used appropriately. When the EBP team informally interviews EBP council members, they find that everyone is seeing the difference the RRT is making—and not only in the outcomes. Clinicians, for example, are experiencing a difference in how they're helping patients avoid those outcomes. This pleases the EBP team and they look forward to sharing this serendipitous finding.

PREPARING TO DISSEMINATE THE RESULTS

As the EBP team discusses how to disseminate the results of their project, they reiterate their commitment to involve the EBP council members, who have made such a major contribution to the project's success. Debra suggests that they hold a special meeting with unit managers to answer their questions, and to give them an overview of the dissemination plan, including the impact it may have on each unit's budget. The meeting with the managers turns out to be a lively discussion about the value of dissemination and its related costs. The managers are concerned that presenting the results of the RRT intervention at conferences is not a budgeted item for this year; they're also concerned about the challenges these opportunities will present, such as being able to support the scholarship of those clinicians whose work is accepted.

The EBP team helps the unit managers to understand that each time a clinician presents an aspect of the RRT process or outcome, the unit and hospital get positive exposure. Eventually most managers agree that dissemination is a worthwhile investment and commit to be as creative and flexible with their budgets as possible as they plan for the next fiscal year. They discuss how important it is to support these new learning and development opportunities for their staff. One unit manager, however, says that there's no way she can support anyone from her unit presenting at a conference. The EBP team informs her that several manuscripts about the RRT will be submitted for publication, which creates the perfect opportunity for those who wish to contribute but lack the budget for conference travel this year.

Dissemination Workshop Agenda

Joint session (one hour)

Dissemination: Purposes and Passions

• What outcome do you want to achieve by disseminating your results?

• Discussion

Methods of Dissemination

• Determine which method of dissemination is the best match for your message or outcome or both.

• Determine which method capitalizes on your strengths.

• Discussion and demonstration or case study

Breakout sessions (one hour)

Publishing: Who, What, When, Where, and How of Publishing

Presentations: Effective, Fun Presentations People Will Remember

The EBP team decides to hold a continuing education workshop on dissemination. They invite the EBP council members to come and bring anyone from their units who has been involved in the RRT project and is interested in contributing to presentations or publications about it. In preparing to conduct this class, the team makes a list of the aspects of the RRT project that would be important to include in a presentation or publication or both. They work out an agenda for the workshop (see Dissemination Workshop Agenda). Rebecca, Chen, and Carlos are excited about sharing the outcomes of first the pilot and then the rollout to the whole hospital. They are thrilled that they've made such a difference in their hospital's culture, as well as in patient outcomes.

MAKING DISSEMINATION PLANS

The EBP council, the educators, the RRT, and the EBP team, along with Debra, meet to discuss how to plan for dissemination of the project and its results. They discuss first putting the results of the pilot and then of the hospital-wide RRT rollout on the hospital's intranet. Carlos invites Karen from clinical informatics to join them to discuss the possibility of having an “EBP Corner” on the intranet, where updates can be provided for the latest EBP events. Karen says this is very doable and that she'll get back to them in a couple of days on how to set this up and how they'll be able to contribute to it. Carlos agrees to take the lead for this aspect of the dissemination project.

Presentation Tips

• Keep the outcome that you want for your presentation in mind from the beginning: what do you want the audience to take away?

• Take care with the background and color schemes for your PowerPoint slides. Simple is best.

• Keep your presentation simple, innovative, and interesting. Don't overuse animation or sound.

• Use pictures to enhance, not dominate, the presentation.

• Keep your time frame in mind: usually one slide per minute works well.

• Use no smaller than a 20-point font on a slide if the presentation is for a smaller audience or room, no smaller than a 28-point font for larger rooms or audiences.

• Use text on a slide for sharing highlights and important points, not for everything.

• Revise your presentation at least three to five times before submission.

• Keep backups of the presentation on a jump drive (or two).

• Have fun as you create YOUR presentation—be unique.

The EBP council, with mentorship from Rebecca and Chen, expresses the desire to present the RRT project at a professional meeting. The group decides that one of the annual EBP conferences across the country would be the best place to share this project. Debra offers to help council members review the variety of EBP conferences and discuss which would be the best match. She asks them to consider which audience would like to hear about their project and where it could have a meaningful impact. She offers to join them when they start to write and then submit an abstract, and, if it's accepted, to help them put together the presentation. She also shares tips she's used that have served her well (see Presentation Tips).

To the EBP team's great delight, the chief nursing officer pops into the council meeting and tells everyone that she wants to submit this project to the American Organization of Nurse Executives (AONE) annual meeting. She's so excited about the synergy between leadership and staff that she believes this is just what participants at AONE need to hear. Carlos asks the members of the RRT if they'd like to discuss the possibility of presenting their experience at the annual Institute for Healthcare Improvement (IHI) meeting, which he tells the group may be a good venue for this project. They readily discuss sharing how their transdisciplinary team worked together to improve outcomes and other issues from the project that would interest IHI participants. They all agree to engage in this discussion further as the project continues.

Amid all this activity, Rebecca and Chen remind Carlos that there are clinicians who would rather publish than present. Carlos and Debra meet with those who are interested in publishing to provide an overview of the publishing process (see Publishing Tips). They assure those individuals who feel they don't write well enough to publish in a journal that they'll do fine as part of a team.

With plans in hand, the teams of clinicians begin to prepare their abstracts or manuscripts. The presenting teams submit their abstracts to their respective conferences. The writing teams take a little longer to prepare their manuscripts, while their team leaders call or write the journals they've selected to see if there's any interest in articles on various aspects of the RRT. The EBP team reflects on their initial PICOT question and on what a difference just asking the right question and answering it appropriately has made in their hospital.

Publishing Tips

• Know the purpose of your manuscript.

• Determine the audience for your manuscript.

• Determine the journal that best matches the purpose of your manuscript.

• Obtain the author guidelines for this journal.

• Review several journal articles from this journal; noting the structure of these articles can help with structuring your manuscript.

• Send a query letter to the editor.

• Develop an outline for your manuscript; be as descriptive and detailed as possible.

• Divide writing the outline among the authors; all authors should contribute to the manuscript.

• Write, read, rewrite, reread, rewrite, reread, and rewrite your manuscript. Have others read the manuscript and provide feedback; now is the time to get critical feedback to assist in the successful submission to a journal.

• Decide on a relevant title that would compel you to read the manuscript.

• Reread and revise one last time.

• SUBMIT—although rewriting has moved your manuscript toward perfection, don't wait for it to be entirely perfect. Expect journal reviewers to have suggestions and criticism.

• Believe in your message and its benefit to the reader.

Join the EBP team next time as they complete the hospital-wide rollout and make the RRT a hospital policy. In so doing, they will learn how to create system-wide sustainable change.

Ellen Fineout-Overholt is clinical professor and director of the Center for the Advancement of Evidence-Based Practice at Arizona State University in Phoenix, where Susan B. Stillwell is clinical professor and associate director, Lynn Gallagher-Ford is clinical assistant professor and assistant director, and Bernadette Mazurek Melnyk is dean and distinguished foundation professor of nursing at the College of Nursing and Health Innovation. Contact author: Ellen Fineout-Overholt, [email protected].

By Bernadette Mazurek Melnyk, PhD, RN, CPNP/PMHNP, FNAP, FAAN, Ellen Fineout-Overholt, PhD, RN, FNAP, FAAN, Lynn Gallagher-Ford, MSN, RN, NE-BC, and Susan B. Stillwell, DNP, RN, CNE, ANEF

Sustaining Evidence-Based Practice Through Organizational Policies and an Innovative Model

The team adopts the Advancing Research and Clinical Practice Through Close Collaboration model.

This is the 12th and last article in a series from the Arizona State University College of Nursing and Health Innovation's Center for the Advancement of Evidence-Based Practice. Evidence-based practice (EBP) is a problem-solving approach to the delivery of health care that integrates the best evidence from studies and patient care data with clinician expertise and patient preferences and values. When it's delivered in a context of caring and in a supportive organizational culture, the highest quality of care and best patient outcomes can be achieved. The complete EBP series is available as a collection on our Web site; go to www.ajnonline.com and click on Collections.

In July's evidence-based practice (EBP) article, Rebecca R., Carlos A., and Chen M. evaluated the outcomes of their rapid response team (RRT) implementation project. Their findings indicated that a significant decrease in one outcome, code rates outside the ICU, had occurred after implementation of the RRT. This promising finding, together with many other considerations—such as organizational readiness; clinician willingness; and a judicious weighing of all the costs, benefits, and outcomes—encouraged the EBP team to continue with plans to roll out the RRT protocol throughout the entire hospital system. They also began to work on presentations and publications about the project so that others could learn from their experience and implement similar interventions to improve patient outcomes.

USING EVIDENCE TO INFORM ORGANIZATIONAL POLICY

Because Rebecca, Carlos, and Chen are concerned about whether the implementation of an RRT can be sustained over time in their hospital, they want to take the necessary steps to create a hospital-wide RRT policy. Therefore, they make an appointment with their hospital's director of policies and procedures, Maria P., to share the outcomes data they've gathered from their project and to discuss the project's success so far. Maria is impressed by the rigor of the team's sequential EBP process and the systematic way in which they've gathered the outcomes data. She reminds them that the measurement of outcomes (internal evidence) plus rigorous research (external evidence) results in the best evidence-based organizational policies to guide the highest quality of care in health care institutions.

Maria volunteers to assist the team in writing a new evidence-based policy to support having an RRT in their hospital. She suggests that each recommendation in the policy be supported by evidence. Maria explains that once the policy is written, it needs to be approved by the hospital-wide policy committee, representing all of the health disciplines. Maria emphasizes that transdisciplinary health care professionals and administrators should routinely be involved when planning and implementing evidence-based organizational policies. She also reminds the EBP team that translating evidence and evidence-based organizational policies into sustainable routine clinical practices remains a major challenge for health care systems.

The new RRT policy written by Rebecca, Carlos, and Chen with Maria's help is approved by the hospital-wide policy committee within three months. Now the challenge for the team is to work with clinicians across the hospital system to implement it. The EBP team schedules a series of presentations throughout the hospital to introduce the new RRT policy. They rotate the days and times of this in-service to capture as many direct care clinicians as possible. To ensure that all clinicians are educated on the new policy, a database is created to track in-service attendees, and each hospital unit is asked to appoint a volunteer to deliver the presentation to any clinicians who missed it. Posters are created and buttons designed as visual triggers to remind staff to implement the new policy.

Throughout this process, the EBP team learned that dissemination of evidence alone doesn't typically lead clinicians to make a sustainable change to EBP, and they were impressed by how important it was to have unit-based champions reinforce the new policy.1 They also learned that it's critical to have an organizational culture that supports EBP (such as evidence-based decision making integrated into performance expectations, up-to-date resources and tools, ongoing EBP knowledge and skills-building workshops, and EBP mentors at the point of care) in order for clinicians to consistently deliver evidence-based care.2

Since the process they followed worked so well, the team believes that their hospital needs to adopt a model to guide and reinforce the creation of a culture to sustain the EBP approach they had initiated through this project. They review several EBP process and system integration models and decide to adopt the Advancing Research and Clinical Practice Through Close Collaboration (ARCC) model because its key strategy to sustain evidence-based care is the presence of an EBP mentor (a clinician with advanced knowledge of EBP, mentorship, and individual as well as organizational change). With Carlos's success as an expert EBP mentor, and the mentorship model working so well, they believe that developing a cadre of EBP mentors system-wide is key to the ongoing implementation and sustainability of EBP in their organization.

SUSTAINING AN EBP CULTURE WITH THE ARCC MODEL

In reviewing the ARCC model, the EBP team finds that its aim is to provide hospitals and health care systems with an organized conceptual framework to guide system-wide implementation and sustainability of EBP for the purpose of improving quality of care and patient outcomes. In addition, this model can be used to achieve a “high reliability” organization (one that delivers safe and high-quality care), decrease costs, and improve clinicians' job satisfaction. Four assumptions are basic to the ARCC model3:


Figure 1. The ARCC Model for System-Wide Implementation and Sustainability of EBP

ARCC = Advancing Research and Clinical Practice Through Close Collaboration; EBP = evidence-based practice.

a Scale developed.

b Based on the EBP paradigm and using the EBP process.

• Both barriers to and facilitators of EBP exist for individuals and within health care systems.

• Barriers to EBP must be removed or mitigated and facilitators put in place in order for individuals and health care systems to implement EBP as a standard of care.

• For clinicians to change their practices to be evidence based, both their beliefs about the value of EBP and their confidence in their ability to implement it must be strengthened.

• An EBP culture that includes EBP mentors is necessary in order to advance and sustain EBP in individuals and health care systems.

The first step in the ARCC model is to assess the organization's culture and readiness for EBP (see Figure 1). From that assessment, the strengths and limitations of implementing EBP within the organization can be identified. The key implementation strategy in the ARCC model is the development of a cadre of EBP mentors, who are typically advanced practice nurses or clinicians with in-depth knowledge of and skills in EBP and in individual behavior change and organizational culture change. These individuals, whether expert system-wide mentors, advanced practice mentors, or peer mentors, are focused on helping point-of-care clinicians to use and sustain EBP and to conduct EBP implementation, quality improvement, and outcomes management projects. When clinicians work with EBP mentors, their beliefs about the value of EBP and their ability to implement it increase, and this is followed by a greater achievement of evidence-based care.4 The ARCC model contends that greater implementation of EBP results in higher job satisfaction, lower turnover rate, and better patient outcomes. A series of studies now support the empirical relationships in the ARCC model.4-8

The ARCC model has been and continues to be implemented in hospitals and health care systems across the country with excellent results in quality of care and patient outcomes. Valid and reliable instruments, such as the EBP Beliefs and EBP Implementation scales,6 are used to measure key constructs in the model and, together with organizational culture and readiness for EBP, help to determine the model's effectiveness.6

The EBP team discusses how all the elements of the ARCC model are an excellent fit for their organization. They decide to make a recommendation to the Shared Governance Steering Committee that this model be adopted, not only for the nursing department, but for all disciplines throughout the organization.

THE EBP JOURNEY HAS JUST BEGUN

This series presented a case involving a hypothetical medical–surgical nurse and her colleagues to illustrate how EBP can be successfully implemented to improve key patient outcomes. It's important that the process start with an ongoing spirit of inquiry, and that nurses always question the evidence behind the care they provide and never settle for the status quo. Never forget that it only takes one passionate, committed person to spearhead a team vision to improve care for patients and their families. It also takes persistence through the “character builders” that are sure to appear as the vision comes to fruition.

Although the EBP team has successfully completed their RRT implementation project and its incorporation as a hospital-wide policy, their EBP journey has just begun. In fact, only days after the project's completion, Rebecca asked Carlos another great PICOT question: “In critically ill patients, how does early ambulation compared with delayed ambulation affect ventilator-associated pneumonia in the ICU?” Carlos looked at her and replied, as a great mentor does, “I will help you search for the evidence and we will find the answer to your question—because EBP, not practices steeped in tradition, is the only way we do it here!”

Bernadette Mazurek Melnyk is associate vice president for health promotion, university chief wellness officer, and dean of The Ohio State University College of Nursing in Columbus, where Lynn Gallagher-Ford is director of Transdisciplinary Evidence-Based Practice and Clinical Innovation. Ellen Fineout-Overholt is dean of Professional Studies and chair of the Department of Nursing at East Texas Baptist University in Marshall, TX. Susan B. Stillwell is clinical professor and associate director of the Center for the Advancement of Evidence-Based Practice at Arizona State University in Phoenix. At the time this article was written, Bernadette Mazurek Melnyk was dean and distinguished foundation professor of nursing in the College of Nursing and Health Innovation at Arizona State University, where Ellen Fineout-Overholt was clinical professor and director, and Lynn Gallagher-Ford was clinical assistant professor and assistant director, of the Center for the Advancement of Evidence-Based Practice. Contact author: Bernadette Mazurek Melnyk, [email protected]. The authors have disclosed no potential conflicts of interest, financial or otherwise.

REFERENCES

1. Melnyk BM, Williamson KM. Using evidence-based practice to enhance organizational policies, healthcare quality, and patient outcomes. In: Hinshaw AS, Grady PA, editors. Shaping health policy through nursing research. New York: Springer Publishing Company; 2011. p. 87-98.

2. Melnyk BM, Fineout-Overholt E. Evidence-based practice in nursing and healthcare: a guide to best practice. Philadelphia: Wolters Kluwer Health/Lippincott Williams and Wilkins; 2011.

3. Melnyk BM, Fineout-Overholt E. ARCC (Advancing Research and Clinical practice through close Collaboration): a model for system-wide implementation and sustainability of evidence-based practice. In: Rycroft-Malone J, Bucknall T, editors. Models and frameworks for implementing evidence-based practice: linking evidence to action. Oxford; Ames, IA: Wiley-Blackwell; Sigma Theta Tau; 2010. p. 169-84.

4. Melnyk BM, et al. Nurses' perceived knowledge, beliefs, skills, and needs regarding evidence-based practice: implications for accelerating the paradigm shift. Worldviews Evid Based Nurs 2004;1(3):185-93.

5. Levin RF, et al. Fostering evidence-based practice to improve nurse and cost outcomes in a community health setting: a pilot test of the advancing research and clinical practice through close collaboration model. Nurs Adm Q 2011;35(1):21-33.

6. Melnyk BM, et al. The evidence-based practice beliefs and implementation scales: psychometric properties of two new instruments. Worldviews Evid Based Nurs 2008;5(4):208-16.

7. Melnyk BM, et al. Correlates among cognitive beliefs, EBP implementation, organizational culture, cohesion and job satisfaction in evidence-based practice mentors from a community hospital system. Nurs Outlook 2010;58(6):301-8.

8. Wallen GR, et al. Implementing evidence-based practice: effectiveness of a structured multifaceted mentorship programme. J Adv Nurs 2010;66(12):2761-71.