Author

Zoe Jordan, PhD

The focus of most individuals and organizations working in the field of evidence-based healthcare (EBHC) is to operationalize evidence. Few would dispute that the primary driver and ultimate objective of EBHC is to see evidence used to inform policy and practice in ways that improve outcomes for patients. However, it is hard not to think of the infinity pool as a metaphor for the potentially limitless endeavour of methodological reflection, refinement and reconceptualization that many of us as evidence-based researchers pursue. Diving into 'methodological infinity pools' (or down rabbit holes; choose whichever imagery works best for you) can yield fresh, unexpected insights. There is no doubt that some benefit can come from considering tangential ways of thinking about the problems we seek to solve. Nevertheless, these tangents are not always productive or pragmatic, and they need to be tempered accordingly.

The value proposition of systematic reviews in this context has been their ability to provide rigorously reviewed and synthesized, trustworthy evidence. It is therefore critical that we deliver on that promise and invest in ensuring the highest quality result for key stakeholders. To that end, methodological advances in the science of evidence synthesis, and in the timely delivery and updating of syntheses, continue at a rapid rate.

A considerable number of new methodological approaches are now appearing for different types of evidence,1 and an increasing number of tools are available to help improve reporting and methodological quality in the conduct of systematic reviews (QUOROM, PRISMA, AMSTAR, COSMIN, MOOSE, OQAQ, ROBIS).2 However, despite the existence of such tools, common errors persist, including the use of incomplete guidance, incorrect use of reporting tools and checklists, and use of tools in isolation without reference to methodological guidance for the conduct of reviews.3

Significant criticism has also been levelled at the science that currently underpins the conduct of systematic reviews and, in particular, meta-analyses. Prominent, experienced synthesis science scholars such as John Ioannidis and Jos Kleijnen (among others) have been frequently cited for their observations regarding bias and subjectivity in the conduct of systematic reviews.4,5 While it is true that systematic review and meta-analysis have been criticized for as long as the methodologies have existed, and some of these criticisms have been absolutely valid, developments in the field present, in almost equal measure, exciting prospects and difficult challenges across clinical, policy and research domains.

Scientific rigour is of course paramount to our collective endeavour of achieving an evidence-informed approach to decision making, but it is important to maintain a level of pragmatism so that we do not drift too far from our primary objective of operationalizing evidence and become 'lost in synthesis science'. We need to understand the return on investment of this scientific work and be clear about who will benefit. Given the scarcity of resources and the scale of the task at hand, it is becoming ever more important to master the discipline of focus and to weigh the advantages and disadvantages of diving into methodological infinity pools. Going back to first principles, our main aim at the Joanna Briggs Institute (JBI), after all, is to provide the best available evidence at any given moment to inform the policy and practice trajectory, and we achieve this by using the best available science (and technology) at any given moment to ensure the evidence we synthesize is as reliable as possible.

The reliability and trustworthiness of evidence, and its effective delivery to decision-makers in a range of health-related contexts, have long been part of the healthcare discourse and are something that organizations like JBI and G-I-N, among others internationally, have long advocated for and championed. It is over 40 years since Archie Cochrane levelled his criticism at the profession regarding the consolidation of trial results. We have indeed made significant progress since that time across the evidence-based practice trajectory. While there remain limitations to the synthesis, transfer and implementation of 'best practice', there is much to reflect upon, celebrate and build on. As Elwyn et al.6 indicate, 'progress will depend on advances in all these areas if we are to ensure that trustworthy evidence can be used collaboratively in clinical encounters'.

The current issue of the International Journal of Evidence Based Healthcare contains abstracts for the joint JBI/G-I-N conference, Trustworthy evidence for questions that matter, which marks a unique occasion for the evidence-based community to clarify understandings, explore trends and extract lessons from across the globe. The ways in which evidence is gathered, synthesized and consumed by end users have evolved significantly. The challenges of methodological advancement and the increasing complexity of our global health context mean that engaging with the global scientific and pragmatic discourse, and harnessing the considerable expertise now available to us, have never been more important. However, critical to this endeavour is the need to be intentional, pragmatic, collaborative and focussed in our approach if we are truly to create new value for health services and to improve health outcomes.

Acknowledgements

Conflicts of interest

The author reports no conflicts of interest.

References

1. Munn Z, Stern C, Aromataris E, Lockwood C, Jordan Z. What kind of systematic review should I conduct? A proposed typology and guidance for systematic reviewers in the medical and health sciences. BMC Med Res Methodol 2018; 18:5.

2. Pussegoda K, Turner L, Garritty C, et al. Systematic review adherence to methodological or reporting quality. Syst Rev 2017; 6:131.

3. Lockwood C, Oh E. Systematic reviews: guidelines, tools and checklists for authors. Nurs Health Sci 2017; 19:273-277.

4. De Vrieze J. The metawars. Science 2018; 361:1184-1188.

5. Whiting P, Savovic J, Higgins JP, Caldwell DM, Reeves BC, Shea B; ROBIS Group. ROBIS: a new tool to assess risk of bias in systematic reviews was developed. J Clin Epidemiol 2016; 69:225-234.

6. Elwyn G, Quinlan C, Mulley A, Agoritsas T, Vandvik PO, Guyatt G. Trustworthy guidelines - excellent; customized care tools - even better. BMC Med 2015; 13:199.