In the early 2000s, the landmark reports To Err is Human1 and Crossing the Quality Chasm2 highlighted the need to focus on quality of care provided to patients. Since then, clinical teams have been trained to lead quality improvement (QI) and evidence-based practice (EBP) projects. Various QI and EBP frameworks and models help guide clinicians' work, including the Iowa Model of EBP,3 the Stetler Model,4 Plan-Do-Study-Act (PDSA),5 and Six Sigma/DMAIC.6 Although these frameworks and models help clinicians understand what needs to be implemented, they do not provide guidance on how to successfully implement practice changes.
In response, the field of implementation science was developed and has grown over the last decade. Implementation science is the "study of methods to promote the systematic uptake of research findings and other EBPs into routine practice…to improve the quality and effectiveness of health services."7(p1) The purpose of this article is to provide clinicians with a toolkit for implementation science, including (1) an overview of implementation science models and frameworks, (2) effective implementation science strategies and tips for using the strategies, and (3) recommendations for measuring implementation success. We will end by discussing a case study of an implementation science process.
Implementation Science Models and Frameworks
Applying implementation science models to clinical inquiry work helps break down the complexity of the challenges encountered in translating evidence into clinical practice by providing an organized, structured approach.8 The National Institutes of Health Fogarty International Center9 has published an online toolkit of the various implementation science models and frameworks, which can be a helpful resource (https://www.fic.nih.gov/About/center-global-health-studies/neuroscience-implemen); see Table for an overview of implementation science frameworks. Although many models now exist, 3 of the most commonly used models are (1) Consolidated Framework for Implementation Research (CFIR),10 (2) Promoting Action on Research Implementation in Health Services (PARIHS),11 and (3) Grol and Wensing model of implementation.12
The CFIR model10 provides a menu of constructs that have been associated with effective implementation. The constructs include (1) the characteristics of the intervention, such as the complexity or cost of the practice change, along with the strength and quality of the evidence that supports the practice; (2) the inner setting, including the local institution's readiness for change and culture; (3) the outer setting, including the needs and resources of patients and external policies; (4) the individuals involved in the practice change, including their self-efficacy, knowledge, and beliefs about the practice change; and (5) the implementation process itself, that is, how the change is planned, executed, and evaluated.10 This practical guide can be used to assess potential barriers and facilitators that may affect the success of the implementation before beginning the initiative.
Another implementation science model, PARIHS,11 proposes that successful implementation of research findings in practice is a function of the relationship among 3 constructs: (1) evidence, (2) context, and (3) facilitation. Each construct is rated from low to high; ideally, all constructs should be "high" for implementation to be successful. First, the evidence supporting the practice change should be of high quality. There are 4 types of evidence: research, clinicians' expertise and experience, patients' preferences and needs, and data from the local context, such as evaluation data. Next, the context, including the culture and leadership at the local level, should be supportive and ready for change. In addition, the change should be relevant, should fit the organization's needs, and should be supported by adequate resources. Finally, the change should be well facilitated (ie, how the change is implemented matters), a process that depends on the person, or facilitator, carrying out the practice change.11
Finally, the Grol and Wensing model of implementation12 provides step-by-step processes, walking clinicians through how to successfully implement changes (Figure 1). The first section of the model provides guidance on how implementation science projects are identified, usually stemming from either new scientific information or from problems and concerns identified in clinical practice. Once an opportunity has been identified, a plan is developed to organize the change, including identifying a motivated team with representation of end-users (ie, those who will be affected by the practice change) and leaders (eg, management). Next, the team develops a proposal for the change and analyzes the actual performance of the targeted users; an understanding of the care actually delivered before a practice change is initiated is needed to identify aspects that need to be changed. The next step is to conduct a problem analysis to better understand the barriers and facilitators (known as "determinants") of the practice change and to select and develop implementation science strategies that will help mitigate these concerns. Once implementation science strategies have been determined, the next step is to test and execute the implementation science plan. If the change is successful, the team should integrate the changes into routine care and continuously evaluate and adjust the plan to sustain the improvements.
Toolkit
In planning for an EBP or QI project, clinicians should first assemble a team, including key stakeholders, and develop a project charter. Project charters help teams stay on track by formally stating the aim(s) of the project, the rationale for the project, and the metrics that will be used to measure its success. The charter also serves as a communication tool to inform the team and leadership.13 The Institute for Healthcare Improvement provides tools for developing a charter.

Next, clinicians should understand the barriers and facilitators to the practice change, which will help identify effective implementation science strategies. Clinicians can also use this information to preemptively mitigate concerns or issues that may arise during the implementation science project. Identifying determinants can be done through surveys, interviews, observation, or informal rounding on the unit to ask clinicians about their concerns, barriers, and facilitators. Other tools that can be used to identify barriers and facilitators are cause-and-effect diagrams, driver diagrams, Pareto charts, or flowcharts of a process.14 Common factors affecting practice change include clinicians not being aware of the EBP; a lack of time, funding, resources, or motivation to change; poor self-efficacy in applying the EBP; and an EBP that is complex and challenging to integrate into clinical work.12 Conversely, facilitators to successful implementation science projects include strong leadership support; a cost-effective, easy-to-use, or time-saving EBP; flexibility in adapting the innovation; improvement of the relationship between nurses and patients/families due to the EBP; and the innovation's contribution to nurses' professional development and organizational goals.15
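For teams that want to visualize tallied barrier data, a Pareto chart can quickly show which few barriers account for most of the reported problems. The following is a minimal sketch, written in Python with the matplotlib library; the barrier labels and counts are hypothetical and intended only to illustrate the technique.

```python
import matplotlib.pyplot as plt

# Hypothetical tallies of barriers reported by clinicians during rounding or surveys.
barriers = {
    "Unaware of the EBP": 18,
    "No time during shift": 14,
    "Supplies hard to locate": 9,
    "Low confidence applying the EBP": 5,
    "Documentation unclear": 3,
}

# Sort barriers from most to least frequently reported.
labels, counts = zip(*sorted(barriers.items(), key=lambda kv: kv[1], reverse=True))
total = sum(counts)
cumulative = [sum(counts[: i + 1]) / total * 100 for i in range(len(counts))]

fig, ax = plt.subplots(figsize=(8, 4))
ax.bar(labels, counts, color="steelblue")
ax.set_ylabel("Times barrier was reported")
plt.setp(ax.get_xticklabels(), rotation=20, ha="right")

ax2 = ax.twinx()  # second y-axis for the cumulative percentage line
ax2.plot(labels, cumulative, color="darkorange", marker="o")
ax2.set_ylabel("Cumulative percentage")
ax2.set_ylim(0, 110)

ax.set_title("Pareto chart of reported barriers (hypothetical data)")
plt.tight_layout()
plt.savefig("barrier_pareto.png")
```

A chart like this can help the team focus its implementation strategies on the handful of barriers that are reported most often.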
The barriers and facilitators identified will help determine which implementation science strategies may work well. Tailoring strategies to identified barriers is not a straightforward process because there is no one-to-one match between barriers and strategies, but a "common sense" approach to selecting strategies is often most feasible. For example, if a significant barrier identified by end-users is that clinicians lack knowledge of an EBP, then some type of educational strategy is needed. However, if clinicians already know the EBP (eg, hand hygiene compliance inside and outside patient rooms), then educational strategies alone are likely to be ineffective. If compliance with the practice is poor, another strategy, such as audit and feedback, may be more appropriate. Understanding the barriers will help prevent clinicians from defaulting to the "same old" implementation science strategies they have used in the past.
Implementation Science Strategies
The literature provides a wide variety of strategies for implementing EBPs, such as printed educational materials, educational meetings, educational outreach visits, e-learning, reminders, and audit and feedback. The Cochrane Effective Practice and Organization of Care Group (EPOC, https://epoc.cochrane.org/our-reviews) has conducted systematic reviews of various implementation science strategies and can be a helpful resource in identifying potential strategies. Having a solid implementation science plan in place before beginning a project can help the project be more successful. Grol and colleagues12 recommend that strategies be active rather than passive and tailored to the identified barriers, and that clinicians consider using multiple strategies. Next, we provide a brief overview of different types of implementation science strategies and tips for success.
PRINTED EDUCATIONAL MATERIALS
Printed educational materials can encompass many different items, such as brochures, flyers, tip sheets, or emails.16 These materials can be passively disseminated (such as by posting a flyer on the unit or sending an email), or they can be engaging, interactive self-study packages. Active self-study is more likely to improve performance, whereas passive dissemination is less effective because it assumes that clinicians will read the information and change their practice. No matter the innovation, some type of educational material is generally used in a multifaceted improvement program. Giguere and colleagues16 reviewed 84 studies that used printed educational materials and found that this implementation science strategy may slightly improve clinicians' practice and patient outcomes.
Below are practical tips on how to use printed educational materials successfully. First, keep the information concise; readers' attention spans are short, so make what they read in that short amount of time count. Make sure all of the necessary information is included on the printed educational materials, covering who, what, when, where, how, and why:
* Who is the target audience for the practice change?
* What are they supposed to do and when?
* Where do they get the product, or where do they document the practice?
* How do they do it?
* Most importantly, why is the practice change needed?
In addition, use language that is clear and unambiguous, not susceptible to multiple interpretations. Keep information as simple as possible to allow rapid understanding, while highlighting the most essential elements of the activity. Next, make sure that the information is organized and easy to read; consider using no more than 2 fonts, and choose fonts that are easy to read. Limit the number of full sentences and paragraphs, and instead use bullet points when able. Finally, proofread the materials carefully and consider including your contact information on the document.12 For assistance with creating printed educational materials, templates are available in Microsoft Word and PowerPoint.
EDUCATIONAL MEETINGS
Educational meetings can be categorized as either large (usually more than 25 people) or small (fewer than 25 people).17 In a large group educational meeting, the focus is frequently on the presentation of information. These types of meetings may be referred to as lectures, courses, or conferences. The large group meeting has the potential to reach many individuals; however, it is a passive approach. Small-scale educational meetings can be similar to large-scale meetings, but they are usually more oriented toward individual needs and motivations.
Small-scale educational meetings may be more effective than large-scale meetings because of their interactive, engaging components and responsiveness to participants' learning needs and styles. If an educational meeting is used, it is helpful to incorporate it into existing structures used on the unit, such as lunch and learns or journal clubs. In addition, it may be beneficial to include leadership teams in the educational meetings to help garner further support and visibility from leadership. Forsetlund and colleagues17 found that educational meetings may slightly improve clinicians' compliance with EBPs and have a lesser impact on patient outcomes.
EDUCATIONAL OUTREACH VISITS
Educational outreach visits are a specific type of education, where a trained person visits health care professionals in their own settings in an effort to change their behavior; previously, this was known as "academic detailing" and was used heavily by the pharmaceutical industry.18 This approach allows the trained person to tailor a program to the individual needs of the care provider. Chan and colleagues19 found this strategy to be effective in improving the clinician's compliance with EBPs; however, this strategy is relatively expensive and time consuming, so the costs have to be carefully assessed.
E-LEARNING
E-learning comprises educational programs that use the internet or other information technologies.20 Such programs may have various components, such as instruction, exchange in a virtual class or chat room, self-learning exercises, or videotaped lectures or demonstrations. Educational computer games are also included in this category. This strategy is used frequently in the health care setting through online learning management platforms, as information can reach a large number of clinicians. However, Vaona and colleagues20 found that e-learning has little impact on patient outcomes or on clinicians' knowledge, skills, or behaviors.
REMINDERS
The term 'reminder' refers to information (whether verbal, on paper, or on a computer screen) designed to remind clinicians of a specific recommendation for evidence-based care and to allow them to take action at that specific moment.12,21,22 Reminders can range from simple prompts, such as stickers that say 'wash your hands' posted before entering a patient room, to computer-generated reminders within the electronic health record (EHR). For example, when a medicine is prescribed, the computer may suggest an alternative medication. Many health care institutions have best practice alerts within their EHR, which are a type of electronic reminder. Pantoja and colleagues22 and Arditi and colleagues21 found moderate improvements in clinician compliance with EBPs when reminders were used as an implementation science strategy.
AUDIT AND FEEDBACK
Using an audit and feedback implementation science strategy provides clinicians with feedback on the care they have delivered and possible departures from optimal practice. Many health care systems conduct audits; however, that information is only beneficial if it is fed back to end-users. Typically, audit data is collected directly at the unit level, using the EHR's internal data capture system or a manually devised case-report form that is either paper or digital (Figure 2). Data is then fed back to units through tools such as run charts or control charts, which provide an easy-to-read graphical display of how well clinicians are doing with a practice. For example, hand hygiene compliance may be provided to clinicians as a run chart, with the "x" axis representing time (eg, weeks or months) and the "y" axis the compliance percentage. The graph would include the compliance target (eg, 95%) alongside the clinicians' compliance with the EBP, so clinicians can easily see whether they are meeting the hospital's target. Depending on the practice being audited, clinicians can develop their own audit and feedback tool on paper or through an electronic source, such as Microsoft Excel or a REDCap database; a brief run chart sketch is included at the end of this section. Ivers and colleagues23 provided helpful guidance, showing that audit and feedback may be most effective when the following conditions are met:
* The clinicians are not performing well at baseline (eg, if clinicians are already 100% compliant with an EBP, this strategy may not be effective).
* The person responsible for the audit and feedback is a supervisor or colleague (if someone from outside the institution provides the audit and feedback data, end-users may be skeptical about the accuracy and how the data was retrieved).
* The data is provided more than once and over time.
* The data is provided both verbally and in writing, which helps target different types of learners.
* The feedback includes clear targets and an action plan.
Chan and colleagues19 noted that audit and feedback is an effective strategy to improve the clinician's compliance with EBP. Reynolds24 published a step-by-step guide to appropriately completing an audit and feedback process.
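To illustrate the run chart described above, the following is a minimal sketch using Python with the matplotlib library. The weekly hand hygiene compliance values are hypothetical; the 95% target mirrors the example given earlier in this section.

```python
import matplotlib.pyplot as plt

# Hypothetical weekly hand hygiene audit results (percent compliant) and target.
weeks = list(range(1, 13))
compliance = [72, 75, 74, 80, 83, 85, 88, 87, 90, 92, 94, 96]
target = 95

plt.figure(figsize=(8, 4))
plt.plot(weeks, compliance, marker="o", label="Observed compliance")
plt.axhline(target, color="red", linestyle="--", label=f"Target ({target}%)")
plt.xlabel("Week")
plt.ylabel("Hand hygiene compliance (%)")
plt.ylim(0, 105)
plt.title("Run chart of hand hygiene compliance (hypothetical data)")
plt.legend()
plt.tight_layout()
plt.savefig("hand_hygiene_run_chart.png")
```

A chart like this, posted on the unit or shared at staff meetings, lets clinicians see at a glance whether they are approaching the target over time.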
MEMES
As the field of implementation science grows, new, creative strategies have emerged. For example, Reynolds and Boyd25 implemented an infection prevention week meme contest in which clinicians were encouraged to develop educational memes. They conducted a descriptive analysis to determine whether memes may be a viable implementation science strategy and found that clinicians were open to using memes to improve their knowledge of and compliance with EBPs.25 Clinicians may consider using this type of creative, humorous art as part of their printed educational materials.
Recommendations for Measuring Implementation Science Success Through Process Measures
Although improving a patient outcome is usually the ultimate goal, implementation science strategies directly affect process measures, such as clinicians' compliance with a certain EBP or their knowledge, attitudes, and behaviors/practices (KAB/KAPs) related to it. Outcome and process measures, along with balancing measures (unintended consequences that may occur because of a practice change), should be identified before beginning an implementation science project and measured accordingly. An overview of all 3 types of measures is provided by the Institute for Healthcare Improvement at https://youtu.be/oCuaROuvetE.
Process Measures
Clinicians' KAB/KAPs are often measured before and/or after an implementation science project through surveys, focus groups, or interviews, among other methods. Andrade and colleagues26 published practical guidance on how to prepare survey questions to measure clinicians' KAPs. Questions may be developed by the authors based on a review of the literature. Often, questions, especially those measuring attitudes and behaviors/practices, will use a Likert-type scale, such as 1 = strongly disagree to 4 = strongly agree. After developing the survey, authors should validate the questions through face and content validation processes. Qualitative methods, including focus groups or interviews, can also be used to gauge clinicians' KAB/KAPs regarding the practice change that was implemented. Reynolds and colleagues27 recently published a qualitative study that used focus groups after an implementation science study to understand clinicians' experiences, including their perceptions of the feasibility and adoption of EBPs.
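As a simple illustration of summarizing Likert-type survey data, the sketch below computes pre- and post-implementation results for a single hypothetical attitude item scored 1 to 4; the responses are invented for illustration and are not drawn from any cited study.

```python
from statistics import mean

# Hypothetical responses to one attitude item on a 1-4 Likert-type scale
# (1 = strongly disagree, 4 = strongly agree), before and after the project.
pre_responses = [2, 2, 3, 1, 2, 3, 2, 2]
post_responses = [3, 4, 3, 3, 4, 4, 3, 4]

print(f"Pre-implementation mean:  {mean(pre_responses):.2f}")
print(f"Post-implementation mean: {mean(post_responses):.2f}")
agree_post = sum(score >= 3 for score in post_responses) / len(post_responses) * 100
print(f"Percent agreeing (score of 3 or 4) after implementation: {agree_post:.0f}%")
```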
Often, process measures are focused on the clinician's compliance with an EBP. In a recent systematic review by Radisic and colleagues,28 most process metrics were measured via retrospective review of the EHR (ie, documentation compliance with a practice). Although this is a relatively easy metric to obtain, documentation may not accurately reflect actual practice. Clinician compliance may also be measured through direct observation of the practice.29 This process can help provide unbiased data; however, the data may be challenging to obtain. Other options to collect process data include surveys/questionnaires of self-reported compliance.30,31 Although self-reported measures may introduce bias, Reynolds et al29 found no significant difference between self-reported and observed bathing compliance, indicating that self-report may be a more feasible way to capture compliance data. Before beginning an implementation science project, the team should consider which process measures are feasible to obtain and other details regarding data collection (Who will collect the data? What is the timeframe for collecting data? Will a form be developed to collect the data?).
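For teams planning to calculate documentation compliance from an EHR or audit export, the sketch below shows one possible approach using Python with the pandas library. It assumes a hypothetical CSV file named bathing_audit.csv with the columns audit_date and documented (1 = documented, 0 = not); the file name and columns are illustrative, not part of any cited tool.

```python
import pandas as pd

# Hypothetical audit export: one row per audited patient-day, with the columns
# "audit_date" (date of the audit) and "documented" (1 = bath documented, 0 = not).
audits = pd.read_csv("bathing_audit.csv", parse_dates=["audit_date"])

# Group audited patient-days by week and compute the percentage documented.
weekly = (
    audits.set_index("audit_date")
    .resample("W")["documented"]
    .agg(["count", "sum"])
)
weekly.columns = ["audited", "documented"]
weekly["compliance_pct"] = weekly["documented"] / weekly["audited"] * 100

# The resulting table (or a run chart built from it) can be fed back to the unit.
print(weekly.round(1))
```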
Case Study
Reynolds et al32 conducted an implementation science project to improve compliance with chlorhexidine gluconate (CHG) bathing practices in one neurointensive care unit (neuro-ICU). The team used the Grol and Wensing model of implementation to guide their project.12 The project was prompted by an increase in the unit's central line-associated bloodstream infections (CLABSI). Upon discussion with bedside nurses, leadership identified that compliance with CHG bathing, an effective evidence-based CLABSI prevention practice, was suboptimal. Nurses noted that they were unaware of national guidelines33 on how to complete a CHG bath and perceived bathing as a "lower priority" task. Before beginning the project, the team decided to evaluate CHG bathing documentation compliance as the process measure. Outcome measures assessed included the nurses' knowledge and perceptions of CHG bathing and CLABSI rates. A balancing measure was not included in the project but could have included a metric such as skin irritation due to CHG bathing.
Based on the identified barriers, and with support from the literature, the following implementation science strategies were used: (1) printed educational materials, (2) educational outreach visits, and (3) audit and feedback. Over the course of 1 month, tailored education was provided to the neuro-ICU nursing staff, along with weekly documentation audit data. After the implementation month, audit data continued to be fed back to the nursing staff for 6 months via run charts, and CLABSI rates were measured for 12 months. The team found improvements in all measures, with an increase in CHG bathing documentation compliance, improvements in nursing knowledge and perceptions of CHG bathing, and a reduction in CLABSI rates. Sustainability efforts were put into place, with booster sessions occurring at routine intervals.
Implications
This implementation science toolkit provides helpful information for clinicians in clinical and academic settings, or in any location where improvements are being made to nursing and/or patient care. Although implementation science is different from QI and EBP, these concepts can, and should, be integrated into QI and EBP initiatives. During any type of project, an intervention is generally being implemented, and the processes inherent to that implementation benefit from evaluation. For example, the Iowa model of EBP includes a step to "develop an implementation plan" when the practice change is being piloted.3 During this step, clinicians should incorporate implementation science methods to ensure successful translation of the EBP. Implementation science, and the tools provided in this toolkit, supply specific methods to guide evaluation and promote the uptake of evidence-based interventions, helping improve the success of an evidence-based project. Using these tools to identify barriers to the adoption of new interventions or practices as they arise allows investigators to modify implementation processes before it is too late. Thus, clinicians should not only translate EBPs but also use evidence-based implementation science methods and strategies to improve the success of their projects.
CONCLUSION
Clinicians often lead, or take part in, QI and EBP projects. However, these projects may not be successful if a solid implementation science plan is not developed. Clinicians should identify an implementation science framework that can supplement an EBP or QI framework to form a more comprehensive plan. In addition, understanding which implementation science strategies may be most effective can help clinicians conduct a more robust project with greater clinical and financial impact. Finally, selecting process measures, as well as outcome and balancing measures, at the beginning of an implementation science project can help clinicians determine whether the innovation was successful. Using an implementation science toolkit for designing, conducting, and evaluating a QI or EBP project improves the quality and generalizability of results.
References