Today, I listened in on the Ovid webcast, Beyond the Search: Maximizing the Quality of Systematic Reviews. Edoardo Aromataris, PhD, Director of Synthesis Science, and Craig Lockwood, PhD, RN, BN, GDipClinNurs, MNSc, Director of Translation Science, both of the Joanna Briggs Institute in Adelaide, Australia, gave me a better understanding of systematic reviews.
Whether you are reading journal articles, completing educational requirements, or performing research yourself, it is important to be aware of the components of a comprehensive systematic review. Why? The presence of specific defining features indicates a high level of rigor in the research, which helps ensure that the review is reproducible (same results) and transparent (same conclusions).
So what are these defining features of a systematic review?
A prespecified question
Defined inclusion and exclusion criteria
An extensive literature search that includes international research
Selection of studies based on the inclusion criteria
Assessment of the quality of the included studies
Extraction of the data
Analysis of the data
Presentation of the results
Interpretation of the results
Egger, M., Smith, G., & Altman, D. (2001). Systematic Reviews in Health Care: Meta-analysis in context. London: BMJ Publishing Group.
Glasziou, P., et al. (2004). Systematic Reviews in Health Care: A Practical Guide. Cambridge: Cambridge University Press.
Posted by Lisa Morris Bonsall on 3/12/2013 1:34:52 PM
The debate over standardization of nursing uniforms is well documented; however, rigorous, well-designed studies on the topic are lacking. In the latest issue of JONA, Journal of Nursing Administration, an integrative review of the professional appearance of RNs examines the evidence. While the strength of the evidence is low, it is essential for us to recognize the importance of patients being able to identify us as nurses and to understand how our attire affects the public’s perception of our knowledge and skills.
Seven studies were included in this review, and a helpful table comparing each of the studies can be found in the supplemental digital content. One study found that among nurses, students, and patients, solid-color scrubs conveyed more skill and knowledge than print scrubs or T-shirt tops. Another study, which looked at uniform color preference among patients, found blue or white to be most preferred, while red was least preferred. Take a close look at this table to learn more about how both patients and nurses feel that uniform and general appearance affect perception. It’s pretty interesting.
Is there a standard uniform for nursing staff where you work?
Cassidy, C., Del Guidice, M., Hatfield, L., Pearce, M., Polomano, R., & Samoyan, J. (2013). The Professional Appearance of Registered Nurses: An Integrative Review of Peer-Refereed Studies. JONA, Journal of Nursing Administration, 42(2).
Posted by Lisa Morris Bonsall on 2/10/2013 7:50:36 AM
Systematic reviews, especially those with meta-analysis, are often considered scholarly works at the top of the evidence pyramid (see the Oxford Centre for Evidence-Based Medicine, for example). This is because they typically combine randomized controlled trials (RCTs) that may individually be limited by small sample sizes, enabling stronger conclusions to be drawn. Yet this seemingly golden offering of scholarly literature may have its limitations.
A systematic review by Boyd, Quigley, and Brocklehurst (2007) comparing donor breast milk with formula for preterm infants is a frequently quoted reference on the subject. The seven studies examined included five randomized controlled trials. The section of the review that receives the most ongoing attention in the literature is the meta-analysis combining three of the studies on the outcome of confirmed necrotizing enterocolitis (NEC), a complication of high concern in premature infants. This analysis combined two RCTs and one observational study. Separately, the sample sizes for these studies ranged from 39 to 162; combined, the sample size became 268. No individual study reached statistical significance at p=0.05. Combined, the evidence yielded a relative risk (RR) of 0.21 (95% CI 0.06-0.76, p=0.017). The conclusion was that donor milk reduces NEC by about 79% compared to formula. At face value, this is an enticing result. Why the worry?
The concerns with this analysis are partially acknowledged by the authors. The articles used were published no later than the early 1980s, with data from the 1970s and early 1980s. Babies included in the studies were 30 to 33 weeks gestation and 1310 to 1954 g, much larger than the premature infants surviving today in our neonatal intensive care units. These studies involved non-fortified milk and exclusive feeding of the control and treatment groups, which is also contrary to today’s practice, as fortification is now much more the standard.
One of the authors (Quigley) went on to perform an updated review, published in the Cochrane Database of Systematic Reviews, a much-revered source of scholarly literature (Quigley, Henderson, Anthony, & McGuire, 2007). Here, five studies were combined in meta-analysis, with the addition of a more recent study of sizeable impact (Schanler, 2005). Unlike several of the other comparisons in the document, heterogeneity was assessed to be low (I² of 0.00%), and the results favored donor breast milk: formula feeding carried a relative risk of NEC of 2.46 (95% CI 1.19-5.08, p=0.015). This confidence interval was much narrower than those of some of the individual studies, which reported intervals as wide as 0.11 to 60.38. Yet more limitations exist in this review. Growth-restricted preterm infants, who are already at high risk for NEC, were excluded. Again, many of the studies came from the pool described by Boyd and colleagues, and the issues of infant size and age, along with fortification, remain. Preparation of the donor milk may also have differed in the early studies compared with today. In summary, the golden scholarly product is tarnished.
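For readers curious where figures like "RR of 0.21, 95% CI 0.06-0.76" come from, here is a minimal sketch of the standard 2x2-table calculation of relative risk and its confidence interval. The counts below are made up for illustration only; they are not the actual trial data from either review.

```python
import math

def relative_risk(events_exposed, n_exposed, events_control, n_control, z=1.96):
    """Relative risk with a 95% CI computed on the log scale."""
    risk_exposed = events_exposed / n_exposed
    risk_control = events_control / n_control
    rr = risk_exposed / risk_control
    # Standard error of ln(RR) for a 2x2 table
    se = math.sqrt(
        1 / events_exposed - 1 / n_exposed + 1 / events_control - 1 / n_control
    )
    lower = math.exp(math.log(rr) - z * se)
    upper = math.exp(math.log(rr) + z * se)
    return rr, lower, upper

# Hypothetical counts (NOT the study data): 3/100 NEC cases in the
# donor-milk group versus 15/100 in the formula group
rr, lower, upper = relative_risk(3, 100, 15, 100)
print(f"RR = {rr:.2f}, 95% CI {lower:.2f}-{upper:.2f}")
```

An RR below 1 with a confidence interval that excludes 1 is what allows a claim like "donor milk reduces NEC by about 79%" (1 − 0.21 = 0.79). Note how quickly the interval widens when event counts are small, which is exactly why the singular studies discussed above reported intervals as wide as 0.11 to 60.38.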
A final note: evidence reviews cannot end with the statistical analysis. Donor milk costs $3.50 per ounce or more through standard milk banks, so cost-effectiveness needs to be evaluated before making this costly recommendation. Given the limitations of the literature and the cost involved, a local team of experts decided against widespread adoption of donor breast milk for premature infants.
Boyd, C. A., Quigley, M. A., & Brocklehurst, P. (2007). Donor breast milk versus infant formula for preterm infants: Systematic review and meta-analysis. Archives of Disease in Childhood, Fetal and Neonatal Edition, 92, F169-F175. doi: 10.1136/adc.2005.089490
Quigley, M., Henderson, G., Anthony, M. Y., & McGuire, W. (2007). Formula milk versus donor breast milk for feeding preterm or low birth weight infants. Cochrane Database of Systematic Reviews 2007, Issue 4. Art. No.: CD002971. doi: 10.1002/14651858.CD002971.pub2
Kathy Russell-Babin, MSN, RN, ACNS-BC, NEA-BC
Sr. Manager, Institute for Evidence-Based Care
Meridian Health System
Posted by Lisa Morris Bonsall on 9/12/2012 8:13:30 AM
Recently I had the pleasure of attending the Nursing2012 Symposium in Orlando, Florida. One of the sessions, titled Faculty-Guided Poster Tour: Ask the Experts, was a highlight for me. This session was exactly what the title implies: an informal tour of the posters being presented at the conference. Three experts – Frank Myers, MA, CIC; Cheryl Dumont, PhD, RN; and Anne Dabrow Woods, MSN, RN, CRNP, ANP-BC – led the session, which was held right in the exhibit hall where the posters were displayed. Frank Myers, who critiqued each presentation first, broke the ice by sharing that he’s taken about 15 research courses throughout his career and education, then asked, “What does that make me?” While I thought “an expert,” “amazing,” and “impressive,” he answered for us all: “Boring!” It certainly was a fun and interactive session!
The leaders shared their reactions and feedback on six of the posters. They pointed out key features of the posters themselves as well as the research being presented. It was helpful to get tips about what a poster should look like, what its elements should be, and a bit more about the intricacies of research and evidence. Here are some of the things that I learned; I hope you find them useful too!
The poster should…
Be visually attractive.
Be about 1/3 pictures and/or graphs.
Have about 20% white space.
Be legible from 3-4 feet away.
Be organized so that the content flows in a logical manner.
Include your references.
Regarding the research…
Be clear about what you are testing.
Make sure you have a good reason to do the research.
Get approval from the Institutional Review Board (IRB) if needed.
Understand the difference between an observation study and an intervention study.
When using graphs to show your data, note the intervention period on the graph.
When considering endpoints, pay attention to other fields or disciplines.
Know what the “popcorn effect” is – during the first weeks of an intervention, people are more likely to embrace it and perform it.
Use rate (for example, amount/1000 patient days) rather than just a number when reporting results.
Understand the difference between statistical significance and clinical significance.
Compare the mean and median to balance outliers. It’s generally okay to discard outliers that are more than 2 standard deviations from the median, as long as you disclose that you’ve done so (and ask yourself whether a patient who is an outlier matches your patient population).
With regard to sample size, it should never be smaller than 30, and more than 1,500 won’t further impact your findings. The more covariates you have, the bigger your sample size needs to be.
Anytime something “jumps” out, such as a peak or downward trend, explain it.
Spell out acronyms with first use.
Remember your audience; not everyone is an expert in statistical analysis.
Don’t cut and paste from statistical analysis programs; create new tables and graphs.
Supplement your poster with print copies and also copies of any tools you developed for the intervention.
Include information about the financial impact of your intervention to “sell” it to administration.
Be savvy with terminology – use “cost avoidance” rather than “cost savings.”
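The tip above about reporting rates rather than raw counts can be sketched in a few lines. The figures below are hypothetical, invented purely to illustrate the arithmetic; a rate per 1,000 patient days lets you compare units or time periods with different censuses, which a raw count cannot do.

```python
def rate_per_1000_patient_days(event_count, patient_days):
    """Convert a raw event count into a rate per 1,000 patient days."""
    return event_count / patient_days * 1000

# Hypothetical example: 6 infections over 4,800 patient days
rate = rate_per_1000_patient_days(6, 4800)
print(f"{rate:.2f} infections per 1,000 patient days")  # 1.25

# The same 6 infections on a smaller unit (1,200 patient days)
# represent a much higher rate, even though the count is identical
print(f"{rate_per_1000_patient_days(6, 1200):.2f}")  # 5.00
```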
Poster presentations can be used as a “stepping stone” to publication. Consider turning your research into a poster and presenting it at an appropriate conference. It’s a wonderful way to get feedback from your peers which you can then incorporate into a manuscript.
Posted by Lisa Morris Bonsall on 5/7/2012 10:35:21 PM
This blog post is reposted from NursingCenter's In the Round.
During my days of nursing school and research classes, we did literature reviews to identify relevant research surrounding a topic of interest. While we did learn about ensuring that the studies in our literature reviews were solid, with appropriate samples, designs, methods, etc., we didn’t actually compare the findings from the studies with the same intensity that we do today.
A recent webinar about evidence-based practice (EBP) really cleared up some concepts and terms for me, including the importance of using systematic reviews when examining evidence. A systematic review is an essential component for basing a change in practice on current evidence. So how does a systematic review differ from a literature review?
- Peer review is a critical part of the process. A systematic review looks at evidence reported in peer-reviewed journals and the systematic review itself is peer-reviewed.
- The evidence is rigorously reviewed, using the same rigor and standards that were used to produce the evidence itself.
We know that changing practice based on one research study is not enough. It’s not even enough to change nursing practice based on several studies. Available evidence must be investigated and interpreted using scientific review methods. A well-conducted systematic review summarizes existing research, defines the boundaries of what is known and what is not known, and helps resolve inconsistencies among diverse pieces of research evidence (Duffy, 2005).
Here’s a good example of a systematic review from the October issue of American Journal of Nursing. As you read Deactivation of ICDs at the End of Life: A Systematic Review of Clinical Practices and Provider and Patient Attitudes, pay particular attention to Table 1, where the sample, methods, and findings of each study are summarized.
Duffy, M. (2005). Using Research to Advance Nursing Practice: Systematic Reviews: Their Role and Contribution to Evidence-based Practice. Clinical Nurse Specialist: The Journal for Advanced Nursing Practice, 15-17.
Woods, A. (2011). Implementing Evidence Into Practice. Webinar. Philadelphia: Lippincott Williams & Wilkins.
Posted by Lisa Morris Bonsall on 11/16/2011 8:46:50 PM