Keywords

evidence-based practice, guideline development, quality of guidelines, rapid reviews, systematic reviews

 

Authors

  1. De Buck, Emmy
  2. Pauwels, Nele S.
  3. Dieltjens, Tessa
  4. Vandekerckhove, Philippe

ABSTRACT

Aim: As part of its strategy, Belgian Red Cross-Flanders underpins all its activities with evidence-based guidelines and systematic reviews. The aim of this publication is to describe in detail the methodology used to achieve this goal within an action-oriented organisation, in a timely and cost-effective way.

 

Methods: To demonstrate transparency in our methods, we wrote a methodological charter describing the way in which we develop evidence-based materials to support our activities. Criteria were drawn up for deciding on project priority and the choice of different types of projects (scoping reviews, systematic reviews and evidence-based guidelines).

 

Results: While searching for rigorous and realistically attainable methodological standards, we encountered a wide variety of terminology and methodology in the field of evidence-based practice. Terminologies currently used by different organisations and institutions include systematic reviews, systematic literature searches, evidence-based guidelines, rapid reviews, pragmatic systematic reviews, and rapid response services. It is not always clear what definition and methodology lie behind these terms, or whether they are used consistently. We therefore describe the terminology and methodology used by Belgian Red Cross-Flanders; criteria for making methodological choices and details of the methodology we use are given.

 

Conclusion: In our search for an appropriate methodology, taking into account time and resource constraints, we encountered an enormous variety of methodological approaches and terminology used for evidence-based materials. In light of this, we recommend that authors of evidence-based guidelines and reviews be transparent and clear about the methodology used. To be transparent about our approach, we developed a methodological charter. This charter may inspire other organisations that want to use evidence-based methodology to support their activities.

 


Introduction

Belgian Red Cross-Flanders (BRC-F) is active at home and abroad in many different fields: from blood supply to emergency aid. In 2005, BRC-F spearheaded an initiative (European First Aid Manual), together with a group of European experts, to update training for basic first responders according to the best available medical and scientific data. Using evidence-based methodology, we identified effective interventions and also interventions that were outdated, ineffective or even harmful. This led to the publication of validated European first aid guidelines1 and an accompanying user manual. Following this project, it became part of our strategy to support all BRC-F programmes with evidence-based practice by developing evidence-based recommendations and practice guidelines. For many of the interventions and activities conducted in all fields of Red Cross activity, for example in the field of disaster management, there are no systematic reviews or evidence-based guidelines available yet. Therefore, a Centre for Evidence-Based Practice (CEBaP) was founded, with the task of developing practice guidelines and systematic reviews that answer questions relevant to our organisation. This centre is directed by a Steering Committee, composed of the operational managers of the different Red Cross services and chaired by the Chief Executive Officer/Secretary General. The Steering Committee determines the priority of projects according to fixed criteria.

 

It is generally known that guideline development requires considerable effort and money.2 To create trustworthy guidelines in a timely and cost-effective way, we developed a methodology for an action-oriented organisation that needs to balance a quick response to a need with high-quality work. More details on the types of projects and on the terminology and methodology are given in the following sections.

 

Methods

We created a charter in which we describe our approach to developing evidence-based guidelines versus systematic reviews in a timely and cost-effective way, based on existing methodologies. To obtain an overview of existing methodologies used to develop different types of reviews and guidelines, we consulted the following sources: the Appraisal of Guidelines for Research and Evaluation (AGREE) checklist,3 the Cochrane Handbook for Systematic Reviews of Interventions,4 guideline manuals of well-known guideline developers such as the Scottish Intercollegiate Guidelines Network (SIGN; http://www.sign.ac.uk/methodology/index.html, accessed 2 September 2013) or the National Institute for Health and Clinical Excellence (NICE; http://publications.nice.org.uk/the-guidelines-manual-pmg6/reviewing-the-evidenc, accessed 2 September 2013), international conferences on evidence-based methodology, such as the Cochrane Colloquium and the Guidelines International Network (GIN) conference, and personal conversations with methodologists. In addition, we performed a MEDLINE search of the last 10 years (via the PubMed interface; search last updated on 30 June 2013), using search terms such as 'Practice Guidelines as Topic'[Mesh], 'Review Literature as Topic'[Mesh], 'Evidence-Based Practice'[Mesh], 'rapid review', 'scoping review', 'pragmatic review' and 'practice guideline'. We selected the articles that gave a clearer view of the various methodologies and terminologies being used in the development of evidence-based end products. Reference lists and related citations of relevant articles were also checked. The information we collected was synthesised in a narrative way and used as a basis for the development of our own methodology, which is described in the following sections.
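
As an illustration of such a search, the sketch below runs a combined PubMed query through the public NCBI E-utilities interface. The combined query string is an assumption assembled from the search terms and the 10-year window named above, not a reproduction of the actual search strategy; `requests` is used as an example HTTP client.

```python
# Illustrative only: a PubMed query assembled from the search terms named in the
# text, run via the public NCBI E-utilities esearch endpoint.
import requests

query = (
    '("Practice Guidelines as Topic"[Mesh] OR "Review Literature as Topic"[Mesh] '
    'OR "Evidence-Based Practice"[Mesh] OR "rapid review" OR "scoping review" '
    'OR "pragmatic review" OR "practice guideline") '
    'AND ("2003"[PDAT] : "2013"[PDAT])'  # 'last 10 years' window, per the text
)

response = requests.get(
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
    params={"db": "pubmed", "term": query, "retmode": "json", "retmax": 100},
)
result = response.json()["esearchresult"]
print(result["count"], "records found; first PMIDs:", result["idlist"][:5])
```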

 

Semantics and quality in evidence-based practice

The success of evidence-based practice has led to a rise in review studies.5 Grant and Booth identified 14 commonly published types of reviews, including literature reviews, systematic reviews, systematic searches, rapid reviews, and scoping reviews. These different types of reviews all have subtle variations in purpose (e.g. scoping, giving a rapid answer, etc.), methodology (systematic versus nonsystematic, quantitative versus qualitative or mixed, type of primary research that is being considered, etc.) and the type of question dealt with (e.g. 'What is the impact/cost of an intervention?', 'What is the effect of an approach to social policy?', etc.). Their value is therefore not always clear to the reader.6-8 Furthermore, results can be reported in a narrative way (often called a 'narrative review') or in a systematic (e.g. tabular) way. All these types of reviews can form the scientific basis of guidelines, and consequently guidelines also differ in quality.

 

While developing this methodological charter, we encountered and struggled with these linguistic and methodological problems. In the following sections, we discuss in more detail how an action-oriented organisation such as the Red Cross deals with these problems by using three categories of evidence-based materials ('rapid reviews', 'systematic reviews' and 'guidelines').

 

Rapid reviews

Decision makers sometimes need a quick answer to a particular question. As a consequence, a variety of rapid review methodologies currently exist, all originally derived from the systematic review methodology. However, it often remains unclear which part of a rapid review is carried out more rapidly than in a systematic review.9 Products developed using this kind of methodology go by different names, such as 'rapid reviews', 'pragmatic systematic reviews', 'scoping reviews', 'rapid responses', 'evidence summaries', 'evidence maps', 'scoping studies', and so on.6,9-16 The 'rapid review' terminology and methodology are widely used among health technology assessment organisations to deliver evidence to decision makers in a shortened time frame, typically 1-6 months (as opposed to 1-2 years for a systematic review).9-11,14 Additionally, BestBETs or 'Best Evidence Topics' offers a database of 'pragmatic systematic reviews' for clinical practice, because clinicians also need quick answers (http://bestbets.org/, accessed 2 September 2013). The Cochrane Collaboration has also developed 'Cochrane response rapid reviews' (http://innovations.cochrane.org/response, accessed 2 September 2013). A survey among health technology assessment organisations observed that systematic reviews were always included in their rapid reviews, and that randomised and nonrandomised trials were included in 94% and 83% of their rapid reports, respectively. In 75% of the reviews the quality of the evidence was assessed, and in 67% an expert panel was involved.11 A more recent study of 49 rapid reviews addressed some other methodological aspects: 47% of the rapid reviews did not have a clear research question; 61% were developed by two reviewers; 67% searched the three databases MEDLINE, Embase and CENTRAL; 69% reported the full search strategy; 47% reported the quality assessment method used; and 88% presented the results in summary data tables.9 It is clear from both surveys that a huge variety in methodology exists among rapid reviews, mainly because, to date, there has been no clear guidance for authors of rapid reviews. As rapid review methodology can introduce bias, it should, as a minimum, be recommended that rapid review authors report the potential limitations of this type of review.9

 

In general, BRC-F uses the rapid review methodology to explore a possible new topic after a need has been identified in the field by one of the operational Red Cross services; we therefore call it a 'scoping review'. To define the research question as accurately as possible, the input of the operational service is included. The aim of a scoping review is to get an initial idea of the content, quantity and quality of the available evidence; it is used only as an internal document, to prepare a systematic review or guideline project. The scoping review is performed using a specific search strategy in at least two databases (The Cochrane Library, MEDLINE), based on the methodological framework proposed by Arksey and O'Malley16 and Levac et al.15 After finalising the scoping review, the CEBaP Steering Committee decides whether a systematic review or guideline project will be initiated. If no new project is initiated, the result of the scoping review is used only to support internal decision making. The decision to follow up the scoping review is based on the following criteria: urgency; potential impact (i.e. impact on practice and society, opportunity for a publication, intellectual property and quality of the body of evidence); economic and financial impact on BRC-F; and relevance for BRC-F (does it fit into our core business and our strategic plan?). The same criteria are also used to prioritise projects when there are more project requests than CEBaP can handle. Figure 1 illustrates the workflow used to choose between the different types of projects.

  
Figure 1. Workflow to decide on the type of project, illustrating the differences in methodology between the different types of projects in Belgian Red Cross-Flanders (BRC-F). AGREE, Appraisal of Guidelines for Research and Evaluation; CEBaP, Centre for Evidence-Based Practice; PICO, Population-Intervention-Comparison-Outcome.
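
To make this decision logic concrete, the following minimal sketch encodes the follow-up criteria named above as a simple function. It is a simplifying assumption: in practice the CEBaP Steering Committee weighs these criteria case by case rather than applying fixed yes/no rules, and the mapping from criteria to project type shown here is illustrative only.

```python
# A schematic encoding of the criteria from the text (urgency, potential impact,
# relevance for BRC-F, quality of the body of evidence). Not the actual decision
# procedure, which is a case-by-case judgement by the Steering Committee.
def choose_follow_up(urgent, high_potential_impact, relevant_to_brcf,
                     evidence_base_adequate):
    """Decide on the follow-up of a scoping review (illustrative sketch)."""
    if not relevant_to_brcf:
        return "no project: outside core business / strategic plan"
    if urgent:
        return "practice guideline (pragmatic, multi-topic review)"
    if high_potential_impact and evidence_base_adequate:
        return "systematic review (Cochrane methodology, publication intended)"
    return "scoping review result used for internal decision making only"
```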

Systematic reviews

Systematic reviews have been developed to answer questions about the effect of interventions; they give a systematic, documented overview of the available evidence on a given topic. A systematic review literally means 'performing a literature review in a systematic way', and according to the Shorter Oxford English Dictionary 'systematic' means 'arranged or conducted according to a system, plan, or organised method; involving or observing a system'.17 The Cochrane Collaboration uses the strictest methodological criteria for the development of systematic reviews and defines a systematic review as 'a review of a clearly formulated question that uses systematic and explicit methods to identify, select and critically appraise relevant research, and to collect and analyse data from the studies that are included in the review'.4 These systematic and explicit methods are clearly described in the Cochrane Handbook.4 However, in reality there is no single systematic literature review method, and many variations and gradations are used when performing a systematic literature review. This can sometimes lead to different reviews on the same topic coming to different conclusions.18 It is therefore highly recommended to be transparent about decisions made during the development of a systematic review, for example about what evidence is included in the review as 'best evidence'.18

 

In BRC-F, a systematic review will be developed after a scoping review, using the methodological principles of Cochrane, if we want to use the systematic review for a policy change, the answer to the question is not urgent, there is a real chance that it will result in a peer-reviewed publication, or the quality of the body of evidence is moderate to high. Examples of such BRC-F projects are a systematic review of the effect of nonresuscitative first aid training,19 a systematic review of the safety and effectiveness of blood from hemochromatosis patients as donor blood20 and a systematic review investigating the scientific basis behind the blood type diet.21

 

Guidelines

The US Institute of Medicine (IOM) defines clinical practice guidelines as 'systematically developed statements to assist practitioner and patient decisions about appropriate health care for specific clinical circumstances'.22 In an updated definition from 2011, it is stated that 'clinical practice guidelines are informed by a systematic review of evidence and an assessment of the benefits and harms of alternative care options'.23 More broadly, and not limited to a clinical context, terminologies such as 'best practice guidelines', 'practice guidelines' or 'guidelines' are used. Terms such as 'guidance' or 'guide' are currently used for reports that aim to give advice, rather than statements of best practice, or to provide a model of how to deal with particular situations. Guidelines can be developed in several ways and, in the past, guidelines were often developed according to the so-called Good Old Boys Sat Around the Table method, based mainly on the knowledge, opinion and received wisdom of experts rather than on evidence collected through a systematic literature review. Guidelines developed in this way may be biased by undeclared conflicts of interest and by lacking or outdated knowledge.24 A more formal way to develop guidelines is through a meeting of experts who define 'consensus-based guidelines' using a formal consensus technique. However, this is still not based on current scientific evidence and is thus subject to different sources of bias.24 In contrast to consensus-based guidelines, 'evidence-based guidelines' are based on the best available evidence. The AGREE website states that 'Practice guidelines are evidence-based if they undertake a review of the literature and link their concluding recommendations to the evidentiary base identified through the literature search' (http://www.agreetrust.org/resource-centre/practice-guidelines/, accessed 10 October 2013). Some evidence-based guidelines use existing systematic reviews, others are based on newly developed systematic reviews, and some use both.25 Evidence-based guidelines are generally considered to produce more valid recommendations because they systematically integrate the scientific evidence.22,26 However, expert opinion remains necessary, as inevitable gaps in the research exist for many questions,27,28 and a judgement (on benefits, harms, preferences, costs) is needed to formulate recommendations.29 Expert opinion should be formulated in a way that prevents bias.30 It is therefore important that guidelines are developed by a multidisciplinary group and that panel members do not have conflicts of interest.24,31 The AGREE II checklist is a tool that provides a methodological strategy for the development of guidelines.3 Other organisations have also proposed standards for guideline developers: GIN proposed minimum standards for high-quality guidelines,32 and IOM developed standards for trustworthiness of clinical practice guidelines.23,33 Guideline development groups are increasingly striving for better quality and a more uniform methodology. For example, SIGN and NICE adhere to the AGREE principles and, since 2013 and 2009, respectively, use Grading of Recommendations Assessment, Development and Evaluation (GRADE) as a tool for determining the level of evidence and the strength of recommendations (http://www.sign.ac.uk/methodology/index.html, accessed 2 September 2013; http://publications.nice.org.uk/the-guidelines-manual-pmg6/reviewing-the-evidenc, accessed 2 September 2013).34 The National Guideline Clearinghouse recently formulated more stringent inclusion criteria for accepting guidelines into its database from June 2014 onwards (http://www.guideline.gov/about/inclusion-criteria.aspx, accessed 2 September 2013), and GIN asks guideline developers uploading guidelines to the GIN database to indicate which guideline standards have been met (http://www.g-i-n.net/library/international-guidelines-library, accessed 2 September 2013). All these measures are particularly important because many guidelines do not meet quality standards, as the following studies illustrate. Giannakakis et al.35 found that of 40 guidelines published in six influential medical journals in 1999, only 12.5% performed a systematic literature review, and pertinent randomised controlled trials were often not included. A study from 2000 reported that 67% of 461 guidelines published between 1988 and 1998 did not describe the professionals and stakeholders involved, 88% gave no information on the search strategy and 82% did not provide grades of recommendation.31 An overview of studies that assessed the quality of 627 guidelines published since 1980 demonstrated that many guidelines are of low quality and that only about half (55%, 168 of a subsample of 270 guidelines) could be recommended, or recommended with provisos, after evaluation with the AGREE instrument.36 GIN also recognises that many guidelines do not meet basic quality criteria,32 and poor adherence to the IOM trustworthiness standards has been demonstrated.33

 

BRC-F uses AGREE II for the development of practice guidelines. This checklist recommends a systematic search of the literature. However, because we must balance the number of topics covered against a reasonable time span for developing the practice guideline, the result is a review that is systematic but less rigorous than a Cochrane systematic review. The main differences are a specific rather than a sensitive search strategy, one reviewer instead of two, and no search for grey literature. For guideline development, however, additional expert opinion from a multidisciplinary expert panel and the preferences of the target group are taken into account, and practical recommendations are formulated. The methodological principles for guideline development used by BRC-F are described in detail in the following sections.

 

Examples of BRC-F projects developed for the First Aid Service are European first aid guidelines,1 African first aid guidelines,37 evidence-based recommendations on automated external defibrillator training for children,38 and guidelines for first aid and prevention of sports injuries. Examples of projects developed for the Social Service are evidence-based recommendations about effective interventions to support vulnerable children at school and evidence-based recommendations about effective interventions to decrease loneliness in the elderly.

 

An overview of the different criteria used either for guideline development or the development of a systematic review is given in Table 1.

  
Table 1. Methodological principles of guideline development versus systematic review development as used by Belgian Red Cross-Flanders

Methodology used by an action-oriented organisation

Development of evidence-based practice guidelines

The methodology used to develop an evidence-based practice guideline by BRC-F is based on AGREE II, a framework in which the potential biases of guideline development have been adequately addressed.11 In the following sections we comment on how we address several topics of the AGREE tool.

 

First of all, the scope and purpose of the guideline are described, including a clear description of the target population, which in the case of the Red Cross often consists of laypeople. To work in a pragmatic way, we decided not to start a systematic literature search when the Population-Intervention-Comparison-Outcome (PICO) question concerns a 'good practice point' or common sense, the responsibility of professionals (such as a medical doctor or pharmacist, when our guidelines are intended to be used by laypeople), the practical organisation of activities, medicolegal aspects, or anatomy or physiology.

 

All relevant stakeholders are represented in the guideline development group. It consists of members of the Steering Committee; methodological experts, who are responsible for collecting and critically appraising the evidence; representatives of the operational Red Cross service for whom the guideline is being developed, who are responsible for formulating the draft recommendations; and the expert panel, which makes a trade-off between the quality of the evidence and the potential benefits and harms, and which validates the final recommendations. The expert panel consists of a chairman, with expertise in both evidence-based methodology and the project content, and additional panel members who at the very least have expertise in the content of the project. The target population is represented in the guideline development group, for example by involving Red Cross volunteers. Additionally, the guideline development group receives information about the views and preferences of the target population from the Red Cross service involved (which has expertise in the content or collects the necessary information, e.g. by composing a reading group or by interviewing the target population), and/or from a literature search concerning the values, preferences and experiences of the target population, and/or from a feedback session or pilot test.

 

AGREE II gives no detailed description of the methodology for the literature search. We therefore based our methodology on that used by other guideline developers such as SIGN (http://www.sign.ac.uk/methodology/index.html, accessed 2 September 2013) and NICE (http://publications.nice.org.uk/the-guidelines-manual-pmg6/reviewing-the-evidenc, accessed 2 September 2013). The search process takes into account the fact that a BRC-F practice guideline covers many different topics (>40 topics), and we therefore have to make methodological trade-offs that preserve the validity and trustworthiness of guidelines while improving efficiency. For each project, a search is performed for evidence from the date of inception of the databases until the date of the current search. The different sources searched and information on the methodological search filters (used only if necessary) are given in Table 2.39-41 For the choice of search terms, we focus on possible synonyms and, if present in the database, we consult the thesaurus of index terms to build an adequate search strategy.

  
Table 2 - Click to enlarge in new windowTable 2 Overview of sources of literature, methodological filters and methodological inclusion criteria for each study type used by Belgian Red Cross-Flanders to search for evidence when developing practice guidelines

In the search process, evidence is selected in a stepwise approach: we first search for guidelines and systematic reviews (as a source of individual studies), then for intervention studies and finally for observational studies. We only move to the next step of the search process if no evidence is found or if the evidence cannot be included based on the inclusion and exclusion criteria. For guidelines and systematic reviews, we run a supplemental search for individual studies from the date when the search was stopped in the selected guideline or systematic review. During the search for evidence, additional references can be selected by checking the 20 related citations in PubMed and/or by manual searching (i.e. by checking the reference lists of included references).
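
This stepwise logic can be summarised in the following minimal sketch. The three steps and the early-stopping rule come from the text; `run_search` and `meets_inclusion_criteria` are placeholder functions, since in practice a reviewer screens each reference against the PICO-specific inclusion and exclusion criteria.

```python
# A minimal sketch of the stepwise evidence selection described above.
SEARCH_STEPS = [
    "guidelines and systematic reviews",  # step 1 (also a source of individual studies)
    "intervention studies",               # step 2
    "observational studies",              # step 3
]

def stepwise_evidence_search(run_search, meets_inclusion_criteria):
    """Move to the next step only if the current step yields no includable evidence."""
    for step in SEARCH_STEPS:
        included = [ref for ref in run_search(step) if meets_inclusion_criteria(ref)]
        if included:
            return step, included
    return None, []  # no includable evidence found at any step
```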

 

The selection of evidence is based on language (in general, English, Dutch, French and German literature is selected), content criteria (general criteria, which are also used to decide not to start a search for evidence, and specific criteria based on the PICO question) and methodological criteria depending on the type of study design (described in Table 2). To determine the study design, we use a flowchart that we developed based on a tool from the Cochrane Non-Randomised Studies Methods Group42 (Fig. 2). Only studies that are relevant for our projects are included, and a clear distinction is made between experimental and observational studies, which is important for assessing the strengths and limitations of the body of evidence with the GRADE methodology.34 For each topic, one reviewer selects and evaluates the evidence and then describes the search strategy, inclusion and exclusion criteria, data and levels of evidence in an 'evidence summary'. As an internal control, the search for evidence for a random selection of questions is periodically repeated by another methodological expert.

  
Figure 2. Flowchart for the determination of study types that are generally included in the practice guidelines of Belgian Red Cross-Flanders. Exp, experimental study; Obs, observational study.
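
As a rough illustration of how such a flowchart separates experimental from observational designs, consider the sketch below. The text does not reproduce the individual questions of Figure 2, so the questions used here are assumptions in the spirit of the Cochrane-based classification tool it was derived from.42

```python
# Assumed design-classification questions (illustrative, not the actual Figure 2):
# is there a comparison group, did the researchers allocate the intervention,
# and was allocation randomised?
def classify_study_design(has_comparison_group, allocated_by_researcher, randomised):
    """Return a coarse Exp/Obs label for GRADE purposes (illustrative only)."""
    if not has_comparison_group:
        return "Obs: descriptive study (e.g. case series)"
    if allocated_by_researcher:
        if randomised:
            return "Exp: randomised controlled trial"
        return "Exp: non-randomised controlled trial"
    return "Obs: observational study with comparison (e.g. cohort, case-control)"
```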

A multidisciplinary expert panel is involved in formulating the final recommendations by consensus, making use of a table in which the corresponding evidence is presented for every draft recommendation. If consensus cannot be reached, the decision is made by majority vote. The expert panel is responsible for reading through the whole guideline and for assigning the grades of recommendation. During the assignment of the grades of recommendation, the expert panel also makes a trade-off between benefits and harms, side-effects or risks, using the GRADE approach.34 As our target population consists largely of laypeople, it was decided not to use the grades of recommendation in the didactic materials containing the final recommendations, or to translate the grade of recommendation into the specific wording of the recommendations.43

 

Each guideline is also reviewed by external experts or peer reviewers who were not involved in the guideline development group. Reviewers include experts in the content of the guideline and some methodological experts.

 

Depending on the type of project, context and target group, we can decide to complement the practice guideline with an implementation guide. This implementation guide can contain the following information: the facilitators and barriers to the application of the guideline, advice on how to put the recommendations into practice, the potential resource implications of applying the recommendations and monitoring and/or auditing criteria.

 

When publishing the practice guideline, we preferably describe the topics mentioned above in detail. In every case, the methodology is described, or reference is made to a document containing the detailed methodology.

 

BRC-F guidelines will be updated every 5 years,44 unless stated otherwise. To achieve this, the literature search will be repeated from the end of the previous literature search until the start of the update.

 

Development of systematic reviews

A systematic review provides an overview of the best available evidence, collected by a literature search on a very specific topic described by a clearly formulated question. Making a trade-off between the estimated benefits, harms and costs, and thus making specific recommendations for action, goes beyond the scope of a systematic review and is typically the task of (clinical practice) guideline developers. If a systematic review deals with questions that are relevant for our evidence-based guidelines, its results are included in the guideline when the guideline is updated. For the development of a systematic review, we follow the methodology described in the Cochrane Handbook.4 In the following paragraphs, some differences from the search process described for practice guideline development are highlighted.

 

The types of studies to be included as the source of evidence are clearly specified. In making this choice we consider a priori which study designs are likely to provide reliable data with which to address the objectives of the review.

 

We use a very sensitive search strategy and try to avoid search filters. Where methodological filters are needed, the sensitive Cochrane filters are used. Study selection and data extraction are performed by at least two independent reviewers. A clear procedure is described for cases of disagreement between the two reviewers: a third reviewer is consulted. Wherever possible, the authors of studies are contacted when information in a study is missing. To assess the quality and the risk of bias of each individual study, we use the Cochrane Collaboration's tool for assessing risk of bias. For the body of evidence, a quality rating is compiled for each outcome according to the GRADE method.34
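
A minimal sketch of this dual-review procedure is given below. The reviewer functions are placeholders standing in for the independent judgements of the two screening reviewers and the arbitrating third reviewer; the agreement-then-arbitration rule is the one described above.

```python
# Dual independent screening with third-reviewer arbitration (illustrative sketch).
def screen_record(reviewer_a, reviewer_b, reviewer_c, record):
    """Two reviewers decide independently; a third reviewer settles disagreements."""
    decision_a, decision_b = reviewer_a(record), reviewer_b(record)
    if decision_a == decision_b:
        return decision_a           # agreement is final
    return reviewer_c(record)       # disagreement: consult the third reviewer

def screen_all(records, reviewer_a, reviewer_b, reviewer_c):
    """Return the records included after dual screening and arbitration."""
    return [r for r in records if screen_record(reviewer_a, reviewer_b, reviewer_c, r)]
```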

 

For transparent reporting of the development of a systematic review, we use the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement 2009.45 This 27-item checklist aims to safeguard the quality of systematic reviews through clear and transparent reporting in a publication.

 

Conclusion

Like any Red Cross organisation, we have to meet the needs of the most vulnerable in our society. To achieve our goals in a quality-oriented manner, we adopted an evidence-based approach to ensure that all our activities are supported by solid scientific data. As the Red Cross is a humanitarian organisation, we often have to compromise between working rigorously, on the one hand, and meeting needs within a reasonable time span, on the other. There is therefore a need for a specific methodology to create practice guidelines and systematic reviews. In our search for an adequate methodology, we encountered an enormous variety of methodological approaches and terminology used for evidence-based guidelines and reviews. To be transparent about our methodology, we developed a methodological charter, to be published on our website. This charter may inspire other organisations that want to use evidence-based methodology to support their activities and that struggle with similar issues. For users of evidence-based guidelines and systematic reviews, it is important to be aware of the variety in methodology and quality, and we recommend that, as a minimum, the rigour of development be verified.

 

Acknowledgements

The information in this article has not been published or submitted for publication elsewhere. All authors have contributed significantly to this work (P.V.D.K. contributed to the conception and design of the manuscript; P.V.D.K., E.D.B., N.S.P. and T.D. developed the methodology described in the article; E.D.B. and N.S.P. wrote the methodological charter, and T.D. formulated feedback and contributed to the development of appendices to the charter; E.D.B. prepared the draft of the article, which all other authors revised critically), and all authors are in agreement with the content of the manuscript. All authors are employees of BRC-F and receive no other funding.

 

References

 

1. Van de Velde S, Broos P, Van BM, et al. European first aid guidelines. Resuscitation 2007; 72:240-251.

2. Burgers JS, Grol R, Klazinga NS, et al. Towards evidence-based clinical practice: an international survey of 18 clinical guideline programs. Int J Qual Health Care 2003; 15:31-45.

3. Brouwers MC, Kho ME, Browman GP, et al. AGREE II: advancing guideline development, reporting and evaluation in health care. CMAJ 2010; 182:E839-E842.

4. Higgins JPT, Green S, editors. Cochrane handbook for systematic reviews of interventions. Version 5.1.0 [updated March 2011]. The Cochrane Collaboration; 2011. http://www.cochrane-handbook.org.

5. Bastian H, Glasziou P, Chalmers I. Seventy-five trials and eleven systematic reviews a day: how will we ever keep up? PLoS Med 2010; 7:e1000326.

6. Grant MJ, Booth A. A typology of reviews: an analysis of 14 review types and associated methodologies. Health Info Libr J 2009; 26:91-108.

7. Gough D, Thomas J, Oliver S. Clarifying differences between review designs and methods. Syst Rev 2012; 1:28.

8. Kastner M, Tricco AC, Soobiah C, et al. What is the most appropriate knowledge synthesis method to conduct a review? Protocol for a scoping review. BMC Med Res Methodol 2012; 12:114.

9. Harker J, Kleijnen J. What is a rapid review? A methodological exploration of rapid reviews in health technology assessments. Int J Evid Based Healthc 2012; 10:397-410.

10. Watt A, Cameron A, Sturm L, et al. Rapid versus full systematic reviews: validity in clinical practice? ANZ J Surg 2008; 78:1037-1040.

11. Watt A, Cameron A, Sturm L, et al. Rapid reviews versus full systematic reviews: an inventory of current methods and practice in health technology assessment. Int J Technol Assess Health Care 2008; 24:133-139.

12. Khangura S, Konnyu K, Cushman R, et al. Evidence summaries: the evolution of a rapid review approach. Syst Rev 2012; 1:10.

13. Parkhill AF, Clavisi O, Pattuwage L, et al. Searches for evidence mapping: effective, shorter, cheaper. J Med Libr Assoc 2011; 99:157-160.

14. Ganann R, Ciliska D, Thomas H. Expediting systematic reviews: methods and implications of rapid reviews. Implement Sci 2010; 5:56.

15. Levac D, Colquhoun H, O'Brien KK. Scoping studies: advancing the methodology. Implement Sci 2010; 5:69.

16. Arksey H, O'Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol 2005; 8:19-32.

17. The Shorter Oxford English Dictionary. Volume II. Oxford: Clarendon Press; 1985.

18. Treadwell JR, Singh S, Talati R, et al. A framework for best evidence approaches can improve the transparency of systematic reviews. J Clin Epidemiol 2012; 65:1159-1162.

19. Van de Velde S, Heselmans A, Roex A, et al. Effectiveness of nonresuscitative first aid training in laypersons: a systematic review. Ann Emerg Med 2009; 54:447-457.

20. De Buck E, Pauwels NS, Dieltjens T, et al. Is blood of uncomplicated hemochromatosis patients safe and effective for blood transfusion? A systematic review. J Hepatol 2012; 57:1126-1134.

21. Cusack L, De Buck E, Compernolle V, Vandekerckhove P. Blood type diets lack supporting evidence: a systematic review. Am J Clin Nutr 2013; 98:99-104.

22. Woolf SH, Grol R, Hutchinson A, et al. Clinical guidelines: potential benefits, limitations, and harms of clinical guidelines. BMJ 1999; 318:527-530.

23. Graham R, Mancher M, Miller Wolman D, Greenfield S, Steinberg E, editors. Clinical practice guidelines we can trust. Washington, DC: National Academies Press; 2011.

24. Miller J, Petrie J. Development of practice guidelines. Lancet 2000; 355:82-83.

25. van der Wees P, Qaseem A, Kaila M, et al. Prospective systematic review registration: perspective from the Guidelines International Network (G-I-N). Syst Rev 2012; 1:3.

26. Cruse H, Winiarek M, Marshburn J, et al. Quality and methods of developing practice guidelines. BMC Health Serv Res 2002; 2:1.

27. Raine R, Sanderson C, Black N. Developing clinical guidelines: a challenge to current methods. BMJ 2005; 331:631-633.

28. Sniderman AD, Furberg CD. Why guideline-making requires reform. JAMA 2009; 301:429-431.

29. Oxman AD, Glasziou P, Williams JW Jr. What should clinicians do when faced with conflicting recommendations? BMJ 2008; 337:a2530.

30. Shaneyfelt TM, Centor RM. Reassessment of clinical practice guidelines: go gently into that good night. JAMA 2009; 301:868-869.

31. Grilli R, Magrini N, Penna A, et al. Practice guidelines developed by specialty societies: the need for a critical appraisal. Lancet 2000; 355:103-106.

32. Qaseem A, Forland F, Macbeth F, et al. Guidelines International Network: toward international standards for clinical practice guidelines. Ann Intern Med 2012; 156:525-531.

33. Ransohoff DF, Pignone M, Sox HC. How to decide whether a clinical practice guideline is trustworthy. JAMA 2013; 309:139-140.

34. Atkins D, Best D, Briss PA, et al. Grading quality of evidence and strength of recommendations. BMJ 2004; 328:1490.

35. Giannakakis IA, Haidich AB, Contopoulos-Ioannidis DG, et al. Citation of randomized evidence in support of guidelines of therapeutic and preventive interventions. J Clin Epidemiol 2002; 55:545-555.

36. Alonso-Coello P, Irfan A, Sola I, et al. The quality of clinical practice guidelines over the last two decades: a systematic review of guideline appraisal studies. Qual Saf Health Care 2010; 19:e58.

37. Van de Velde S, De Buck E, Vandekerckhove P, Volmink J. Evidence-based African first aid guidelines and training materials. PLoS Med 2011; 8:e1001059.

38. Dieltjens T, De Buck E, Verstraeten H, et al. Evidence-based recommendations on automated external defibrillator training for children and young people in Flanders-Belgium. Resuscitation 2013; 84:1304-1309.

39. Deville WL, Buntinx F, Bouter LM, et al. Conducting systematic reviews of diagnostic studies: didactic guidelines. BMC Med Res Methodol 2002; 2:9.

40. Wilczynski NL, Haynes RB. Optimal search strategies for detecting clinically sound prognostic studies in EMBASE: an analytic survey. J Am Med Inform Assoc 2005; 12:481-485.

41. Wilczynski NL, Haynes RB. Developing optimal search strategies for detecting clinically sound prognostic studies in MEDLINE: an analytic survey. BMC Med 2004; 2:23.

42. Hartling L, Bond K, Harvey K, et al. Developing and testing a tool for the classification of study designs in systematic reviews of interventions and exposures [Internet]. Rockville, MD: Agency for Healthcare Research and Quality (US); 2010 Dec. Report No.: 11-EHC007-EF. AHRQ Methods for Effective Health Care.

43. Andrews J, Guyatt G, Oxman AD, et al. GRADE guidelines: 14. Going from evidence to recommendations: the significance and presentation of recommendations. J Clin Epidemiol 2013; 66:719-725.

44. Shekelle P, Eccles MP, Grimshaw JM, Woolf SH. When should clinical guidelines be updated? BMJ 2001; 323:155-157.

45. Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med 2009; 6:e1000097.