Authors

  1. Olson, DaiWai M., Editor

Article Content

Last year was no different from any before. Once more, the leading scientists driving healthcare forward examined hundreds of exciting new interventions, medications, and practice paradigms. The vast majority of the articles published concluded that newer is better, and most reached that conclusion by finding a statistically significant difference between newer and normal.

  
Figure. No caption a... - Click to enlarge in new windowFigure. No caption available.

I readily agree that statistics are useful to help us understand data. And I understand why the 2-group comparison, perhaps the most common study design used, is useful to develop the science. Certainly, most nurses who would imagine conducting research would at least consider designing a study to compare results from an intervention group with results from a control group. What I have trouble understanding is what really happened to the control group in clinical research results.

 

Although authors tend to go into great detail when describing an intervention, rarely is the reader provided with comparable detail about the control group. Short sentences such as "The control group received usual care" provide limited understanding of care that may impact the outcome. As clinicians, we should become increasingly skeptical when reading or interpreting clinical trials that bundle the impact of nursing care under the umbrella of "usual care," and a similar healthy skepticism is warranted for any study that compares an intervention against "usual care."

 

Although, at first glance, the term "usual care" may seem a reasonable descriptor, there are 2 reasons to encourage nurses to be skeptical of this practice. First and foremost, this diminishes the relative contributions that nursing care makes toward influencing patient outcomes. Second, practice varies widely within and between countries, hospitals, departments, and even individuals.

 

As nurses, we are taught to individualize care to the needs of the patient. None of us actually repositions every patient exactly, and only, every 2 hours. Nor do we assist every patient to ambulate exactly 37.5 m twice daily, precisely 6 hours apart. Rather, we continually assess the needs of the patient, and the patient's response, and we adjust accordingly. Expert nurses use specific interventions at specific times for specific reasons. The "usual care" provided by an expert nurse is anything but usual.

 

Practice variation further limits the generalizability of research that compares against usual care. This is especially true when data are abstracted from the electronic medical record given that practice variation is magnified by variation in documentation.1 Every year, there are thousands of quality improvement and clinical practice abstracts and posters presented at hundreds of nursing conferences. Many of these highlight success by hospitals that have intentionally strayed from "usual care" to vary their practice in the hopes of improving some aspect of patient care.

 

The knowledge that there is no true "usual care" should not be willfully ignored. Nor should we abandon the comparative study design. Although we strive to determine best practice, it is inevitable that we periodically adjust our course. What we usually did in 1983 is not what we will usually do throughout 2019. What we should strive for is for authors to provide a deeper understanding of what is really happening when they write "usual care."

 

The Editor declares no conflicts of interest.

  
Figure. No caption a... - Click to enlarge in new windowFigure. No caption available.

Reference

 

1. Keenan G, Yakel E, Dunn Lopez K, Tschannen D, Ford YB. Challenges to nurses' efforts of retrieving, documenting, and communicating patient care information. J Am Med Inform Assoc. 2013;20(2):245-251.