1. Lough, Mary E. PhD, RN, CNS, CCRN, CNRN, CCNS, FCCM
2. Rice, Karen L. DNS, APRN, ACNS-BC, ANP

Article Content

One of the wonderful opportunities in the clinical nurse specialist (CNS) role is to take well-designed research study results and implement the findings as evidence-based practice (EBP). However, any experienced CNS will tell you that this is hard to accomplish. For a long time, we thought it was because we were not working hard enough at implementation strategies, or because we did not educate our staff thoroughly about the science behind a change. It did not occur to us that sometimes we were missing key facts about additional personnel hired to assist with the study, or about other resources that were not available at our institution. We accepted the research results at face value and assumed that the implementation problem was a feature of our real-world clinical settings. This article discusses the struggles that the CNS faces in reproducing research findings and explores the ethical responsibility of researchers in operationalizing reporting standards that facilitate reproducibility of findings at the point-of-care.



My (M.E.L.) initial clinical insights came about 13 years ago when we implemented "tight glucose control" in the intensive care unit (ICU) after the landmark study by Van den Berghe et al.1 After publication of this study, the target glucose for critically ill patients was set at 80 to 110 mg/dL. This required hourly blood glucose checks and potentially hourly adjustments of the insulin infusion. In addition, we were careful to prevent hypoglycemia, always a risk with an insulin infusion. The ICU nurses found it challenging to "keep up" with hourly blood glucose checks. In conversations with CNS colleagues, I learned that our ICU was not alone in trying to figure this out. Then, I had the opportunity to hear Dr Van den Berghe speak at an international critical care meeting. I learned that in her study team, a research physician was available to write all of the insulin orders, and research nurses were available to measure the hourly blood glucose and adjust the insulin infusion as needed. The additional staffing was probably wise given the high-risk study protocol, but staffing information was not included in the publications. Therefore, when clinicians tried to implement the results as EBP, they found it difficult to achieve the same outcomes. The multisite, multinational Normoglycemia in Intensive Care Evaluation and Survival Using Glucose Algorithm Regulation (NICE-SUGAR) study2 replicated the study method. They reported a significantly higher mortality in the intensive-insulin group compared with control, possibly related to the number of hypoglycemic episodes. The science behind the study was sound; sustained hyperglycemia does have deleterious effects in critically ill patients.
We did eventually figure out how to safely implement the hourly blood glucose checks by using bedside timers that sound an hourly alarm, by transitioning from a paper-based algorithm to an algorithm built into the electronic health record, and by constantly auditing the electronic health record. That implementation process took over 10 years!


The nonreporting of additional personnel or materials deemed essential to conduct the study is both an ethical violation and a threat to generalizability of the findings. This lack of transparency may not detract from the main study findings. However, it does affect whether the findings can be successfully replicated or disseminated as EBP. This problem is widespread in science and is not limited to nursing.3,4 Another example concerns the early mobilization of mechanically ventilated patients in the ICU. The research literature is extremely convincing.5,6 Everyone agrees that this is a worthy goal, and yet we struggled to mobilize our sickest patients in our ICUs. We recruited nurse champions and physical therapist champions, yet we could not get our culture to change. Then we went on-site to visit hospitals that had published data about their early mobility programs. We realized that these hospitals had physical therapists assigned to each ICU, not as a consult service, but as essential staff members. We had identified physical therapists as vital to the early mobility process, but we had not delivered the correct "dose" to achieve our aims. Two hours a day was not enough time; we needed physical therapists on the unit for longer periods. We have since added a full-time physical therapist with a full-time rehabilitation aide for each ICU, and it has made a huge difference in our ability to mobilize mechanically ventilated patients. Clinicians often believe that all working environments are similar. However, subtle differences between settings can alter adaptability to new methods. Some environments require a stronger intervention than others do. If the dose of the intervention is not explicitly defined in a research publication, knowledge translation to clinical practice may not occur.7 Note that in research studies, the dose of the intervention may be much higher than standard care in clinical practice.
This is one reason clinical studies are challenging to replicate or disseminate as EBP.



The National Institutes of Health (NIH) has developed a new set of standards to address issues related to rigor and reproducibility in research. According to NIH, scientific rigor includes "…full transparency in reporting experimental details so that others may reproduce and extend the findings."8(¶3) When details about the numbers of study personnel, their professional qualifications and roles, and the additional resources required to conduct the study are disclosed, it will be easier for a CNS to decide whether it is feasible to implement a finding as EBP in a clinical setting. The NIH guidelines also state: "The quality of resources used to conduct research is critical to the ability to reproduce the results. Each investigator will have to determine which resources used in their research fit these criteria and are therefore key to the proposed research."8 Researchers may be tempted to believe that their clinical research associates' work is focused only on study coordination, participant enrollment, and adherence to the protocol. However, the time devoted to the intervention is time that in a clinical setting would be spent by the registered nurse (RN). If this amount of time is not broken out and reported, a CNS cannot fully determine how to implement an intervention successfully. Therefore, certain reporting standards about intervention fidelity9,10 are ethically essential for the CNS to transfer knowledge to the point of care. These reporting standards include (1) evidence supporting the intervention (how so?), (2) the schedule and time needed for the intervention (how often?), (3) the intervention dose (how much: 25%, 50%, 75%, or 100%?), (4) adherence to the intervention (how adherent?), (5) staff skills and competencies required to complete the intervention (how complex?), and (6) the characteristics of the intervention recipient (how ill?).
In a busy clinical unit, it is important to know if an intervention takes 3 minutes or 30 minutes and requires specialized skills or equipment. Needless to say, the more complex the intervention, the less likely the intervention will be translated into nursing practice unless a significant benefit can be measured. Thus, reporting standards must include sufficient information about the reliability and validity of outcome and process measures, and instruments that are pragmatic for point-of-care implementation.


In general, research reports clearly describe outcome measures and instruments. However, information about intervention adherence measurement and the processes used to improve intervention fidelity is seldom reported.11 Yet this information is critical to knowledge transfer to improve quality outcomes for point-of-care implementation. In 2001, a bundle of interventions named early goal-directed therapy was reported to significantly reduce mortality for patients with severe sepsis in the emergency department and critical care areas.12 However, initial published results were not reproducible until the Surviving Sepsis Campaign developed resources to measure early goal-directed therapy bundle adherence.13 This is an example of successful intervention adherence measurement where the intervention (dose) was very explicitly spelled out.



John Ioannidis has published extensively about the lack of utility and lack of transparency in biomedical research. He estimates that up to 85% of research dollars are wasted each year. This represents a loss of billions of dollars.14 One of many definitions of waste is when the research findings cannot be replicated in other studies. There are many reasons for this, one being that "…research is not transparent, when study data, protocols, and other processes are not available for verification or for further use by others."14(p4,¶3) Study protocols are not only for use by researchers but also can be used by clinicians who want to implement a clinically useful EBP change in practice. It is critical that research reports disseminated as publications and/or presentations include sufficient detail to foster reproducibility that transfers knowledge into action at the point-of-care. The literature suggests that significant staff and institutional barriers to implementation of EBP exist in the clinical setting, although barriers undoubtedly differ between clinical environments.15 Although local clinical environmental factors may contribute to the failure of EBP project implementation, it seems plausible that the cause is more often related to the nondisclosure of study-related details. Therefore, when a research study has been designed and reported in a way that makes it relevant and timely for clinical practice, there is a greater chance of successful implementation and of sustaining the change over time.



Transferring scientific knowledge into action at the point-of-care is not unique to nursing; however, it is an essential role of the CNS. Hence, there are specific strategies that the CNS can practice to foster the reproducibility of research at the point-of-care. These strategies include, but are not limited to, the following practices: (1) contacting the researchers about the details of their published research reports, (2) asking challenging questions at podium and poster presentations about design and local clinical environmental contributors to study findings, and (3) including detailed descriptions of study-related procedures and clinical environmental contributors to findings in manuscripts, making a case for these details when editors request revision. Researchers have an ethical responsibility to describe all of the additional supports that were used to undertake clinical studies, especially if the later expectation is that the study methods will be implemented so that knowledge is transferred into action as EBP at the point-of-care.




References

1. Van den Berghe G, Wouters P, Weekers F, et al. Intensive insulin therapy in critically ill patients. N Engl J Med. 2001;345:1359-1367. doi:10.1056/NEJMoa011300.

2. NICE-SUGAR Study Investigators. Intensive versus conventional glucose control in critically ill patients. N Engl J Med. 2009;360:1283-1297. doi:10.1056/NEJMoa0810625.

3. Ioannidis JP. Why most published research findings are false. PLoS Med. 2005;2(8):e124. PMID: 16060722.

4. Ioannidis JP. How to make more published research true. PLoS Med. 2014;11(10):e1001747. doi:10.1371/journal.pmed.1001747. PMID: 25334033.

5. Schweickert WD, Pohlman MC, Pohlman AS, et al. Early physical and occupational therapy in mechanically ventilated critically ill patients: a randomized controlled trial. Lancet. 2009;373(9678):1874-1882. doi:10.1016/S0140-6736(09)60658-9. PMID: 19446324.

6. Bakhru RN, McWilliams DJ, Wiebe DJ, Spuhler VJ, Schweickert WD. ICU structure variation and implications for early mobilization practices: an international survey [published online ahead of print June 7, 2016]. Ann Am Thorac Soc. doi:10.1513/AnnalsATS.201601-078OC. PMID: 27268952.

7. Funabashi M, Warren S, Kawchuk GN. Knowledge exchange and knowledge translation in physical therapy and manual therapy fields: barriers, facilitators and issues. Phys Ther Rev. 2012;17(4):227-233. doi:10.1179/1743288X12Y.0000000016.

8. National Institutes of Health. Rigor and reproducibility. April 21, 2016. Accessed July 14, 2016.

9. Bellg AJ, Borrelli B, Resnick B, et al. Enhancing treatment fidelity in health behavior change studies: best practices and recommendations from the NIH Behavior Change Consortium. Health Psychol. 2004;23(5):443-451.

10. Dogherty EJ, Harrison MB, Graham ID, Vandyk AD, Keeping-Burke L. Turning knowledge into action at the point-of-care: the collective experience of nurses facilitating the implementation of evidence-based practice. Worldviews Evid Based Nurs. 2013;10(3):129-139.

11. Slaughter SE, Hill JN, Snelgrove-Clarke E. What is the extent and quality of documentation and reporting of fidelity to implementation strategies: a scoping review. Implement Sci. 2015;10:129. doi:10.1186/s13012-015-0320-3.

12. Rivers E, Nguyen B, Havstad S, et al. Early goal-directed therapy in the treatment of severe sepsis and septic shock. N Engl J Med. 2001;345:1368-1377. doi:10.1056/NEJMoa010307.

13. Society of Critical Care Medicine. Surviving Sepsis Campaign. Accessed July 14, 2016.

14. Ioannidis JP. Why most clinical research is not useful. PLoS Med. 2016;13(6):e1002049. doi:10.1371/journal.pmed.1002049. PMID: 27328301.

15. Melnyk BM, Fineout-Overholt E, Gallagher-Ford L, Kaplan L. The state of evidence-based practice in US nurses: critical implications for nurse leaders and educators. J Nurs Adm. 2012;42(9):410-417. doi:10.1097/NNA.0b013e3182664e0a. PMID: 22922750.