Authors

  1. Harker, Julie MRes

Aim: Commissioners of Health Technology Assessments require timely reviews to support effective decisions on healthcare and treatments. In recent years, 'rapid reviews' have emerged within Health Technology Assessments; however, there is no known published guidance or agreed methodology for them within recognised systematic review or Health Technology Assessment guidelines. To answer the research question 'What is a rapid review and is methodology consistent in rapid reviews of Health Technology Assessments?', a study was undertaken of a sample of rapid review Health Technology Assessments, drawn from the Health Technology Assessment database within the Cochrane Library and other specialised Health Technology Assessment databases, to investigate similarities and/or differences in the rapid review methodology utilised.

 

Method: In a targeted search to obtain a manageable sample of rapid reviews, the Health Technology Assessment database of The Cochrane Library and six international Health Technology Assessment databases were searched to locate rapid review Health Technology Assessments from 2000 onwards. Each rapid review was examined to investigate the individual methodology used for searching, inclusion screening, quality assessment, data extraction and synthesis. Methods of each rapid review were compared to investigate differences and/or similarities in methodologies used, in comparison with recognised methods for systematic reviews.

 

Results: Forty-six full rapid reviews and three extractable summaries of rapid reviews were included. There was wide diversity of methodology, with some reviews utilising well-established systematic review methods but many others diverging in one or more areas, that is, searching, inclusion screening, quality assessment, data extraction, synthesis methods, report structure and number of reviewers. There was a significant positive correlation between the number of recommended review methodologies utilised and the length of time taken in months.

 

Conclusions: Despite the number of rapid reviews published within Health Technology Assessments over recent years, there is no agreed and tested methodology and it is unclear how rapid reviews differ from systematic reviews. In a sample of Health Technology Assessment rapid reviews from 2000 to 2011, there was a wide diversity of methodology utilised in all aspects of rapid reviews. There is scope for wider research in this area to investigate the diversity of methods in more depth during each stage of the rapid review process, so that eventually recommendations could be made for clear and systematic methods for rapid reviews, thus facilitating equity and credibility of this type of important review methodology.

 

Article Content

Background

Over recent years, there has been demand from commissioners of Health Technology Assessments (HTAs), healthcare guidance and guidelines for reviews that can answer the stipulated research question rapidly, efficiently, competently and satisfactorily. While systematic reviews (SRs) remain the methodology of choice for summarising evidence by identifying, selecting, appraising and synthesising research findings in health and medical research,1 they can be time-consuming, drawing on substantial human and financial resources. Several HTA reports purporting to be a 'rapid review' (RR) have been published by various international agencies in the last decade. However, a literature search by the authors did not identify any clear or definitive definition of what a RR is or of how exactly the methodology of a RR differs from that of a full SR. The term 'RR' does not appear to have a single definition but is framed in the literature as utilising various stipulated time frames of between 1 and 6 months.2-5 The word 'rapid' indicates that the review will be carried out quickly, although this labelling does not tell us exactly which part of the review is intended to be carried out at a faster pace than a full SR. The name could imply the modification of agreed SR processes: quicker searching and searching fewer databases, faster inclusion screening and/or a narrower remit for inclusion of studies, limited data extraction, or analysis of the data using only selected methods of quantitative or qualitative analysis in order to draw rapid conclusions about a specific research question. Indeed, it seems that any or all of these adaptations could be applied to a RR in order to draw fast conclusions about a specific health intervention.

 

In a recent Canadian SR of RRs,2 the methods used were examined in detail and the implications of streamlining recognised SR methodology in RRs were discussed; RRs are defined there as 'literature reviews that use methods to accelerate or streamline traditional SR processes'. While this is a concise definition, it does not include the caveat that RRs may contain diversity in methodology that traditional SRs do not, or at least should not, have. For example, RRs may start off with a broad search remit like a SR (e.g. Van Brabandt and Neyt6) or a narrower search methodology (e.g. Legrand et al.7) but go on to synthesise data in a much less defined way and/or use methodology that differs from a SR in some parameters, for example not utilising quality assessment (QA) or being unclear on inclusion criteria. A further study, coordinated in Australia,3 assessed RRs specifically prepared for international HTA organisations. A survey tool was developed and distributed to various HTA agencies, and data on a broad range of themes related to RRs were collated and analysed either statistically or narratively. A broad definition of a RR was stated as 'any HTA report or SR that has taken between 1 and 6 months to produce which contains the elements of a comprehensive literature search'. However, this definition lacks specificity and again does not paint a full picture of the range of methodology or realistic timelines that a RR may or may not employ.

 

There are no clear published guidelines on conducting RRs in the Cochrane Handbook8 or the UK Centre for Reviews and Dissemination's (CRD's)9 guidance. CRD9 briefly describes rapid assessment processes taking 'typically 3 months or less', acknowledging that these differ from SRs due to time constraints and associated compromises in processes such as searching for studies. CRD asserts that these studies should be viewed as provisional appraisals rather than full SRs.

 

There is a lack of published studies that have specifically scrutinised and reported methodology utilised in RRs of HTAs in any detail in order to assess how rigorously published recommended SR methodologies (e.g. Higgins and Green8 and Centre for Reviews and Dissemination9) have been adhered to and to explore the diversity of time frames that appear to be evident. These elements may potentially affect the quality of reporting and the recommendations of the RRs, having serious implications for policy and practice.5

 

Aim

In order to gain understanding and insight into the theoretical and operational processes of RRs within HTAs, we formulated the research question 'What is a RR and is methodology consistent in RRs of HTAs?' The specific objective of this study was to systematically investigate methodology by appraising in detail the methods utilised in the RRs for consistency, and in comparison with already established processes used in SRs such as searching, inclusion screening, QA, data extraction and data synthesis,8,9 and to investigate whether any differences in methodologies were related to estimated length of time taken to carry out RRs.

 

Methods

Our aim was a methodological exploration of a sample of RRs of international HTAs retrieved through a scoping search of potential studies over the last decade. All work (searching, defining inclusion criteria, inclusion screening, data extraction, data synthesis and report writing) was carried out by one reviewer/author (JH), with assistance from a second author (JK). Four other colleagues were consulted (and are separately acknowledged) for advice on searching, language translation, statistical analysis and proofreading of the final report. QA of the included studies did not take place as this was not in keeping with the nature of the study, which was designed to obtain all available evidence from the included reviews, without subjective judgements regarding inclusion on the basis of quality of the reviews.

 

Search strategy

The HTA database was searched via the Cochrane Library (Wiley) by the principal author, after discussion with a scientific information specialist, on 18 April 2011; this search retrieved 45 references. A top-up search of the HTA database via the UK CRD interface was also conducted on 18 April 2011 to ensure currency and retrieved 25 additional records, 18 of which were duplicates.

 

Search terms were 'rapid review', 'rapid AND review', 'rapid appraisal' or 'rapid assessment' in title and abstract. Search dates were January 2000 to April 2011 inclusive. A small supplementary search was carried out in late April 2011 to find more RRs using the same terms and dates on the international websites/databases stipulated in Figure 1.

  
Figure 1 - Selected international databases searched (HTA, Health Technology Assessment; NHS QIS, National Health Service, Quality Improvement Scotland).

Inclusion criteria

Health Technology Assessment articles with 'rapid review' in the title and/or abstracts published in the English language between 2000 and 2011 inclusive were included. Foreign language HTAs and articles that were not clearly presented as RRs were excluded, as were articles where a full text or extractable summary could not be located.

 

Data extraction

In order to obtain a thorough insight into the detailed methodological processes used in the RRs, data were extracted into a Microsoft Office Excel 2007 spreadsheet by the researcher (JH). This was custom made for the project and included the following outcomes for each included HTA RR: country of origin, area of research, reported research questions or clinical questions, number of reviewers carrying out each review, number and type of databases searched, reporting of search strategies and search terms utilised, method of inclusion screening and criteria, QA and/or study bias reporting, method of data extraction (if described), method of data synthesis/meta-analysis, reporting or acknowledgement of RR limitations and estimated length of time taken to complete the RR from searches to publication.

 

Statistical analysis and data synthesis

Data were synthesised both narratively and by calculating quantitative descriptive statistics (frequencies and percentages) for each research question. Means and SDs were calculated for both continuous and dichotomous variables using Microsoft Office Excel 2007 or an Internet calculator: http://easycalculation.com/statistics/standard-deviation.php. Data from each question were extracted into comparison tables so that methods could be aggregated and compared; narrative conclusions were then made about the equity and/or diversity of the various methodologies employed. Finally, regression and significance statistics were calculated on the full dataset using Microsoft Office Excel 2007.
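The kinds of calculation described above (means, sample SDs, simple linear regression and an adjusted R2) can be reproduced outside Excel. The sketch below, in Python, uses purely illustrative data invented for the example, not the study's actual dataset.

```python
import statistics

# Illustrative data only (NOT the study's dataset): hypothetical counts of
# recommended SR methodologies used per review, and months taken per review.
methods_used = [5, 7, 8, 6, 9, 4, 10, 7]
months_taken = [6, 9, 12, 8, 14, 5, 18, 11]

# Descriptive statistics (statistics.stdev is the sample SD, like Excel's STDEV)
mean_methods = statistics.mean(methods_used)
sd_methods = statistics.stdev(methods_used)

# Simple linear regression of months taken on methods used (one predictor)
n = len(methods_used)
mean_x = statistics.mean(methods_used)
mean_y = statistics.mean(months_taken)
sxx = sum((x - mean_x) ** 2 for x in methods_used)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(methods_used, months_taken))
slope = sxy / sxx
intercept = mean_y - slope * mean_x

# R^2 and adjusted R^2 (adjusting for sample size and the single predictor)
ss_tot = sum((y - mean_y) ** 2 for y in months_taken)
ss_res = sum((y - (intercept + slope * x)) ** 2
             for x, y in zip(methods_used, months_taken))
r_squared = 1 - ss_res / ss_tot
adj_r_squared = 1 - (1 - r_squared) * (n - 1) / (n - 2)
```

A positive `slope` corresponds to the pattern reported in the Results: more recommended methodologies used, more months taken.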

 

Results

A search of the Cochrane Library, CRD HTA database and selected international websites of HTA databases was carried out to provide a manageable sample of RRs to analyse in detail. Forty-two relevant titles and abstracts were located in the Cochrane Library (HTA) database; a supplementary search in the CRD HTA database identified 18 duplicates and seven additional relevant titles and abstracts to screen. When the search was widened to include other websites/databases from discrete countries (Fig. 1), 12 more relevant titles/abstracts were identified. Following screening of titles and abstracts, 58 full papers, two CRD Database of Abstracts of Reviews of Effects (DARE) summaries and one discrete RR project summary were downloaded. Subsequently, 12 papers were excluded, leaving 46 full papers and three project summaries (a total of 49) for data extraction; all were published between 2000 and 2010 and written in the English language (see Fig. 2).

  
Figure 2 - Search flow diagram (ASERNIP-S, Australian Safety and Efficacy Register of New Interventional Procedures - Surgical; CRD, Centre for Reviews and Dissemination; HTA, Health Technology Assessment; INAHTA, International Network of Agencies for Health Technology Assessment; KCE, Knowledge Centre; QIS, Quality Improvement Scotland; SMC, Scottish Medicines Consortium).

In order to answer the research question 'What is a RR and is methodology consistent in RRs of HTAs?', a detailed analysis of 11 discrete theoretical and operational processes of the targeted sample was carried out, comparing methods within and between the reviews, including the time taken and any differences as compared with standard SRs.

 

Country of origin of the RRs and area of research

The RRs originated from four countries across three continents: the country with the most HTA RRs was the UK with 55.1% (n = 27), followed by Belgium with 18.4% (n = 9), Australia with 14.3% (n = 7) and Canada with 12.2% (n = 6).

 

All 49 RRs represented different areas of medical and health research. Ninety per cent (n = 44) of the RRs were reviews of health interventions, only 6% (n = 3) were RRs of diagnostic procedures and 4% (n = 2) were from other types of health and social research. The full range of research areas is outlined in Appendix I.

 

Reported research questions or clinical questions

Each review was analysed to see if there was a clear research question or questions - these were sometimes reported as 'clinical questions', and if this was the case then for this analysis they were recorded as research questions. In total, 47% (n = 23) of the RRs did not have any clear research or clinical questions. At the other end of the spectrum, 8% (n = 4) of the included reviews had four or more research questions with the maximum number being six research questions (see Table 1, row 1 for an overview). The distribution of this variable is shown in Figure 3. The mean number of research questions utilised was 1.45 (SD 1.50) with a median of 1 (range 0-6, not normally distributed). When studies with no research question were omitted, of the 26 studies reporting one or more research questions, the mean was 2.85 (SD 1.08) and median was 3 (range 1-6, not normally distributed).

  
Table 1 - Assessment of methodological processes in the 49 rapid reviews broken down by variables - see legend for key to how each variable was assessed
 
Figure 3 - Number (%) of rapid reviews (RRs) with clear research question(s) reported.

Twenty-three RRs did not report a clear research question. These RRs were further analysed to assess whether a question(s) could be derived from the review aims and objectives. In 43% of the 23 papers (n = 10), only broad aims were stated, while 30% (n = 7) stated clear aims and objectives from which a research question could be inferred. Thirteen per cent (n = 3) had no clear aim or objective, and for the other 14% (n = 3) of reviews this was not estimable due to the lack of specific information (Appendix II). Only one paper (i.e. Stordeur et al.32) explicitly stated using the Patients, Interventions, Comparators, Outcomes (PICO) criteria to formulate their research questions.
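A PICO-framed question is essentially a four-field structure; the hypothetical sketch below shows how one could be represented and turned into a review question. The class name, fields and example question are all invented for illustration and are not taken from any included review.

```python
from dataclasses import dataclass


@dataclass
class PICOQuestion:
    """Hypothetical container for a PICO-framed review question."""
    population: str    # P: patients/participants of interest
    intervention: str  # I: intervention under review
    comparator: str    # C: comparator (e.g. placebo, usual care)
    outcome: str       # O: outcome of interest

    def as_question(self) -> str:
        return (f"In {self.population}, is {self.intervention} more effective "
                f"than {self.comparator} at improving {self.outcome}?")


# Illustrative example only (not from the included reviews)
q = PICOQuestion("adults with chronic low back pain",
                 "supervised exercise therapy",
                 "usual care",
                 "pain and function scores")
```

Framing a question this way makes the inclusion criteria largely self-documenting, which is one reason the SR guidance cited in the text recommends it.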

 

Number of reviewers carrying out each RR

Information was extracted on how many reviewers were reported to have carried out each review. Some RRs gave detail for number of reviewers carrying out specific parts of the review and so where possible, this information was also extracted.

 

The majority (61%, n = 30) of the RRs reported that two reviewers were used to carry out the various processes while also reporting checking of data extraction by at least one other researcher, although the methodology was not always consistent. For example, of these 30 reviews, only 40% (n = 12) clearly stated that two reviewers independently appraised and extracted the data at all stages. A further 33% of reviews (n = 16) reported either using just one reviewer for the data extraction process or had unclear reporting on this outcome. Others reported different processes, with varied numbers of reviewers and checking processes (see Fig. 4). Some reviews were reported to be carried out in two sections (efficacy and cost-effectiveness), with different numbers of reviewers working on the discrete parts of the review (e.g. Meads et al.,53 Shepherd et al.,10 Stordeur et al.,32 Vlayen et al.11 and Vlayen et al.12). In total, 8% (n = 4) of the RRs reported mixed numbers of reviewers working on different sections of the report, with a further 14% (n = 7) of RRs giving either no information or an unclear description of how many reviewers worked on the review. Only two RRs (4%) reported using only one reviewer for all sections of the review, and one review used three reviewers. Five papers (10%) reported four or more (and up to six) reviewers working on various parts of the review; sometimes it was stated that 'external experts' independently assessed review data either during or after completion (e.g. Vlayen et al.12 and Ospina et al.13). The mean number of reviewers per review (based on the 44 reviews where this could be calculated) was 1.84 (SD 1.03). Table 1 (row 2) gives an overview of the number of reviewers in each RR.

  
Figure 4 - Number and percentage (%) of reviewers carrying out each rapid review (RR).

Number and type of databases searched

Each RR was analysed to assess (i) the number and (ii) the type of databases searched. The number of databases searched varied widely between the reviews, with the minimum being two databases. It was not possible to calculate a specific maximum number because some reviews did not define terms such as 'handsearching' and 'general Internet searches'; however, 43% (n = 21) searched 10 or more sources. Information on the full range of 93 different searched databases can be obtained from the authors. Table 1 (row 3) details this outcome by review, and the distribution of the number of databases/websites can be viewed in Figure 5.

  
Figure 5 - Distribution spread of number of databases and/or websites searched in the 49 rapid reviews (RRs).

Only 67% of reviews (n = 33) reported specifically searching the recommended combination (Cochrane,8 Chapter 6 and CRD,9 p. 17) of Medline/Embase/Central and/or the Cochrane Library (Table 1, row 4). A further 13 reviews (27%) reported searching at least two of the specified databases in combination, and the other 6% (n = 3) were unclear as to which databases they searched. The most popular sources searched were the following: Embase (90%), Medline (86%), the Cochrane Library (53%), CRD National Health Service (NHS) Economic Evaluation Database (53%), CRD HTA (49%) and CRD DARE (43%). With regard to CRD searches, 31% of RRs (n = 15) reported searching 'CRD databases'- it is not known whether this alludes to all three of the databases or just one specific CRD database. Likewise with the Cochrane Library, although 53% of RRs (n = 26) reported searching the 'Cochrane Library' or 'Cochrane Database', other reviews reported searching different sections of the Cochrane Library. The mean number of databases searched in the 49 reviews was 9.24 (SD 4.39); HTA databases/websites were not included in this calculation for reasons of practicality (i.e. it was not always clear how many of these were searched/how they were searched).

 

Reporting of search strategies and search terms utilised

Each included RR was assessed (i) to elicit whether a full search strategy was reported and (ii) to monitor the number of search terms used and whether search terms were specifically tailored to different databases/resources (Table 1, row 5). Of the 49 RRs, 69% (n = 34) reported a full 'search strategy', that is, one in which the search dates, databases and search terms utilised were published. However, the remaining 31% (n = 15) of reviews either omitted a search strategy or published incomplete search methods, with a lack of detail and/or clarification on the search processes utilised.

 

Search terms were often, but not always, presented as sets of terms used for either separate or multiple databases (for example, some papers used one set of terms for several databases, while others used a different set of terms for each database searched; see Table 1, row 6). Fifty-three per cent (n = 26) reported multiple and varying search terms according to database; however, 14% (n = 7) reported using the same sets of multiple terms for each database or other source used, and 33% (n = 16) had unclear search terms or no information reported for this variable. Figure 6 shows a pie chart distribution of how search terms were utilised.

  
Figure 6 - Percentage of search terms used in the sample of rapid reviews (RRs).

Method of inclusion screening and criteria

Inclusion/exclusion criteria and initial screening were described in the RRs in a number of different ways. For simplicity, three categories were used:

 

1. Clear inclusion and exclusion criteria were described and easily locatable in the paper. Flow diagrams were often used in papers where this information was transparent and easy to detect - 47% (n = 23) of reviews were in this category.

 

2. Broad inclusion and exclusion criteria were described, but specifics such as types of study/trial included were lacking and inclusion/exclusion was discussed in terms of subject area. Occasionally, inclusion criteria were described, but not exclusion criteria, or vice versa; a further 47% (n = 23) of reviews were in this category.

 

3. Description of inclusion/exclusion criteria and screening process for papers was not included; only 6% (n = 3) of RRs were in this category. In addition, 12% (n = 6) of papers clearly stated that separate searches were carried out for efficacy and cost-effectiveness screening, but one-third of these papers had different standards of inclusion screening (i.e. clear/unclear) for the separate sections of the RR.

 

 

Details are reported in Table 1, row 7.

 

QA and/or study bias reporting of included papers

The RRs were analysed to see whether study quality had been assessed and by which methods (Table 1, row 8). The number of different quality/risk of bias checklists used was also assessed. In total, 47% (n = 23) of the RRs clearly reported that they carried out a QA, including the specific methodology (i.e. checklist/source) used. A further 29% (n = 14) of the RRs stated that some form of quality or validity assessment was carried out, but the methodology was unclear: descriptions of checklists, where available, were often vague and unreferenced, or not all the studies included in the review were quality assessed. In addition, 24% (n = 12) of the RRs reported that study quality was not assessed, or simply did not report it. A proportion of the included reviews (37%, n = 18) reported using more than one checklist, including the Jadad et al.57 checklist; however, all the reviews reporting use of this checklist57 were published in or before 2001. Figure 7 is a horizontal bar distribution chart of the frequency of QA methods utilised.

  
Figure 7 - Checklists reported for quality assessment methods used in the rapid reviews (AGREE, Appraisal of Guidelines for Research and Evaluation; BMJ, British Medical Journal; CASP, Critical Appraisal Skills Programme; CEBM, Centre for Evidence Based Medicine; CGR, Cancer Guidance Reports; CRD, Centre for Reviews and Dissemination; INAHTA, International Network of Agencies for Health Technology Assessment; NHMRC, National Health and Medical Research Council; NHS, National Health Service; QA, quality assessment; RR, rapid review. *on or before 2008).

Method of data extraction

As well as recording the number of reviewers carrying out data extraction and checking (as detailed above), the RRs were analysed to see whether authors had described specific methods of data extraction (Table 1, row 9). In total, 6% (n = 3) of the reviews reported separate sections for efficacy and cost-effectiveness and gave different (and inconsistent) numbers of data extractors for each separate section. In addition, 39% (n = 19) of reviews reported using either pre-developed/tested data extraction sheets or a relevant database (e.g. Microsoft Access) for data extraction. A further 16% (n = 8) of reviews used tables for data extraction, but it was unclear how these had been developed. Only 22% of reviews (n = 11) reported a clear data extraction strategy and 31% (n = 15) did not clearly report their methods. Figure 8 shows a horizontal bar chart representation of the various data extraction methods used in the RRs.

  
Figure 8 - Data extraction methods used in the rapid reviews (DE, data extraction).

Method of data synthesis/meta-analysis

All of the reviews presented some kind of narrative summary or synthesis of the reported results, and 88% (n = 43) used summary data tables either in the main body of the text or as appendices. The majority of papers (84%, n = 41) presented a wholly quantitative analysis, while only one review used a fully qualitative analysis. Four reviews (8%) reported both qualitative and quantitative analyses of data, while three reviews (6%) did not clearly describe their analysis methods. Only 20% (n = 8) of the quantitative reviews (n = 41) reported a meta-analysis, 10% (n = 4) reported pooling data 'where possible' and a further 15% (n = 6) reported a reason for not attempting to pool or meta-analyse the data (Table 1, row 10). In addition to this, 12% of all the reviews (n = 6) reported separate analyses of efficacy and cost-effectiveness data, clearly presenting discrete data for each section.
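Where reviews did pool data, the simplest approach is fixed-effect inverse-variance weighting; the sketch below shows that calculation. The source does not say which pooling model the reviews used, so this is an illustrative assumption rather than their method, and the effect sizes and standard errors are invented.

```python
# Illustrative fixed-effect (inverse-variance) meta-analytic pooling.
# Effect sizes and standard errors are invented, not from any included review.
effects = [0.30, 0.45, 0.25]   # per-study effect estimates
ses = [0.10, 0.15, 0.12]       # per-study standard errors

# Each study is weighted by the inverse of its variance (1 / SE^2),
# so more precise studies contribute more to the pooled estimate.
weights = [1.0 / se ** 2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = (1.0 / sum(weights)) ** 0.5  # SE of the pooled estimate
```

By construction, the pooled estimate always lies within the range of the individual study effects, and its standard error is smaller than that of any single study.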

 

Reporting or acknowledgement of RR limitations

An analysis was made to check whether the RRs contained any acknowledgement of the limitations, given that none of the reviews were full SRs (Table 1, row 11). Of the papers that reported limitations, 45% (n = 22) stated the limitations of studies analysed within the reviews rather than the RR itself, 14% (n = 7) identified limitations in the review methodology and 6% (n = 3) acknowledged limitations in both the review methodology and the methodologies of the included studies. In total, 12% of the reviews (n = 6), all from the same Australian commissioner, contained a broad disclaimer describing the methodological limitations of RRs in general, although these were not specific to the topics in question. However, 18% (n = 9) of the reviews failed to mention any potential limitations in either the review methodology or the methodologies of the included studies. Figure 9 gives a pie chart representation of the spread of how limitations were reported.

  
Figure 9 - Representation of how limitations of rapid reviews (RRs) were acknowledged in the sample.

Estimated length of time taken to complete the RR

The term 'RR' implies that a review will be carried out at a faster rate than a full SR, but none of the RRs specifically reported this information. In order to estimate the time frame used in the RRs, the time between the last search date and the approximate (or specific) review publication date was used (Table 1, row 12). The majority of the reviews (51%, n = 25) were estimated to have taken 7-12 months from the completion of searching to publication. Only 10% (n = 5) of the reviews were published within 3 months, and 12% of reviews (n = 6) were published within 4-6 months. All of the RRs estimated to have taken over 13 months (18%) were published in the UK, with 6% (n = 3) taking between 13 and 18 months, 10% (n = 5) taking 19-24 months and 2% (n = 1), a RR summary, taking an estimated 3.5 years from start date to publication. It was not possible to estimate a time frame in four reviews (8%), two of which were summaries, or to determine the time from completion of any of the reviews to publication, as this information was not given in any paper. From a total of 45 reviews where data were analysable for this outcome, the mean time taken to complete the review was 10.42 months (SD 7.1). There was one 'outlier' summary14 reporting an estimate of 3.5 years (42 months) from start to finish; when this was removed, the mean was 9.7 months (SD 5.3). Figure 10 shows a bar chart representing these data. In addition, we calculated correlation and regression estimates to determine whether there was any correlation between the number of reviews reporting recommended SR processes (see the legend of Table 1 stating how these were assessed)8,9 and time taken to complete the reviews, versus those reporting alternative methodologies (as outlined in Table 1). With the above outlier removed,14 there was a positive correlation between the number of recommended review methodologies utilised and length of time taken: adjusted R2 0.193 (SE 1.81), P = 0.001 (see Fig. 11), suggesting that the more recommended methodologies used, the longer the review took to perform. The opposite trend can be seen in Figure 12, showing the number of papers reporting sub-standard methodologies against time taken: adjusted R2 0.065 (SE 1.71), P = 0.04, still significant but with a slightly smaller effect.
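As a rough consistency check, an adjusted R2 of 0.193 can be converted back to an unadjusted R2 with the standard formula. The sketch below assumes n = 44 (the 45 analysable reviews minus the removed outlier) and a single predictor; both are inferences from the text, not stated parameters.

```python
# Back-conversion of adjusted R^2 to R^2 for a single-predictor regression:
#   adj R^2 = 1 - (1 - R^2) * (n - 1) / (n - p - 1)
n, p = 44, 1          # assumed: 45 analysable reviews minus one outlier
adj_r2 = 0.193        # as reported for the positive correlation
r2 = 1 - (1 - adj_r2) * (n - 1) / (n - p - 1)
# Solving the formula above for R^2 instead:
r2 = 1 - (1 - adj_r2) * (n - p - 1) / (n - 1)
# r2 comes out at roughly 0.21, i.e. about a fifth of the variation in time
# taken is associated with the number of recommended methodologies used.
```

Under these assumptions the adjustment is small, because the sample is moderately large relative to the single predictor.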

  
Figure 10 - Representation of estimated time frame (search date to publication) of rapid reviews (RRs).
 
Figure 11 - Correlation and prediction regression line fit plot showing correlations between length of time taken and number of standardised systematic review (SR) reporting methodologies, that is better reporting methodologies (green in
 
Figure 12 - Correlation and prediction line fit plot showing correlations between length of time taken and number of sub-standard reporting methodologies, that is worse reporting of sub-standard/adapted methodologies (red in

Discussion

This study was a methodological exploration of a selection of RRs carried out as HTAs. A targeted search of the Cochrane Library, CRD HTA database and other country-specific databases (Fig. 1) resulted in the inclusion of a total of 46 full papers plus three summaries of HTA RRs (Fig. 2). The primary aim of this study was to address the research question 'What is a RR and is methodology consistent in RRs of HTAs?' Following on from the work of authors such as Ganann et al.2 and Watt et al.,3 who have already summarised the lack of consistency and the threat to the validity of evidence in these types of review, we sought to explore in more depth and detail the methodological processes of a sample of RRs in HTAs. In a previous SR,3 these types of paper were singled out as having policy and practice implications while generally being poorly defined and highly variable in their methodology. The results of our study confirmed that viewpoint: there were multiple inconsistencies in reporting, and many of the recommended practices from established authorities on SRs were not followed.

 

Nearly half of the RRs did not present any clear research or clinical questions. The Cochrane Handbook8 suggests that a focussed review begins with a well-framed question and should ideally specify PICO criteria, that is, participants, interventions, comparisons and outcomes; reviews can be based on broad or more narrowly defined questions, each of which has advantages and disadvantages. In a typology of reviews58 in which the authors analysed 14 RRs, it was noted that inadequate attention was often paid to research questions in RRs and that caution should be used, as this could result in the RR producing either a precise answer to the wrong question or an inconclusive answer to a question that was not properly conceived. Watt et al.3 stress that RRs should be written in answer to specific questions rather than as a quick alternative to a comprehensive SR.

 

The Cochrane Handbook8 advocates that for best practice, a second author should assess studies for inclusion, determine the 'risk of bias' of included studies, extract study data and check data entry and analyses. Specifically, it is recommended that at least two reviewers should carry out the process of data extraction, which should always be checked for inconsistencies to reduce potential biases and minimise errors. Ganann et al.2 also concluded that accelerating the data extraction process in RRs may lead to missing important relevant information. Many of the analysed RRs reported inconsistent numbers of reviewers, or described unclear methods, for this important review process.

 

There were also many inconsistencies between the reviews with respect to literature searching and search strategies; only two-thirds of the reviews reported searching the recommended combination of databases for SRs together (i.e. Medline, Embase and CENTRAL and/or the Cochrane Library; see Cochrane8 and CRD9), often alongside other databases and/or websites. Ganann et al.2 suggest that bias may be introduced due to shortened time frames for literature searching and article retrieval. Van de Velde et al.59 also document the differences in search strategy and execution when a RR was compared with a SR. Fewer than half of the reviews reported clear and transparent inclusion criteria, with an equal number reporting only broad or incompletely explained inclusion criteria.

 

In SRs, the study quality, validity and risk of bias of included papers should be rigorously assessed (see Cochrane8 and CRD9), but fewer than half of the reviews in our study clearly reported a detailed assessment of study quality. The latest tool for assessing the risk of bias within Cochrane reviews uses a domain-based evaluation, where critical assessments are made separately for different domains of a review,8 and CRD9 describes various domain-based checklists for QA/risk of bias, according to study type; however, it would seem from our sample of HTA RRs that such resources were not always used.

 

Despite the majority of reviews reporting quantitative methodology, only 20% of quantitative reviews reported carrying out meta-analyses with an even smaller proportion reporting pooling 'where possible'. Few reviews reported the reasons for not pooling data. Often the distinction between pooling the data and meta-analysis was unclear. A clear description of these differences is reported by Bravata and Olkin.60

 

It is well documented that shortening the process of SR methodology by changing one or more processes (as in a RR) not only introduces risk of bias but also presents a challenge to the scope of RRs and introduces potential limitations which should be made transparent.2,3,58 Only a small percentage of RRs explicitly acknowledged the limitations of RR methodology processes, but one Australian provider (Australian Safety and Efficacy Register of New Interventional Procedures - Surgical) used a broad generic disclaimer about the specific limitations and possible methodology gaps of RRs. Published sources have reported various timescales and definitions for reporting of RRs from searching to publication, that is 2-6 months,4 3 weeks to 6 months,2 1-6 months3 and up to 6 months,5 indicating a shorter time frame than this study would suggest. In a study analysing 156 SRs,61 the fastest performance was a final search within 10 weeks of submission to a journal, acceptance within 11 weeks of submission and journal publication within 12 weeks of acceptance. In our study, it was not possible to determine whether there was a delay from completion to publication of any analysed RR due to a paucity of information, and the majority of studies were assessed as taking between 7 and 12 months from final searches to publication.

 

Interestingly, a positive correlation was found between the time taken to produce the reviews and the assessed quality of the included methodology, with a tendency for reviews with more robust methodology to take longer to produce. When considering the research question that this study was designed to answer, there is not enough evidence to establish any consistency between recommended processes and the methods actually utilised in RRs, as there was so much diversity in the sample. Furthermore, from the methodology examined in these reviews, it is not possible to make any firm conclusions about what constitutes a HTA RR, as many different methodology combinations were utilised, and very little consistency was found in this sample. This has various implications for policy and practice, given the importance of these reviews in enabling commissioners and providers of healthcare to make decisions on patient care and treatment, both on efficacy and economics.62
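A correlation of this kind can be illustrated with a short sketch. The data below are invented for illustration only (they are not the study's data), pairing a hypothetical count of recommended review methodologies used with months taken from final search to publication, and computing a Pearson product-moment correlation coefficient from first principles:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data (NOT the study's data): number of recommended
# review methodologies used vs. months from final search to publication.
methods_used = [2, 3, 4, 5, 6, 7, 8]
months_taken = [4, 6, 7, 9, 10, 13, 15]

r = pearson_r(methods_used, months_taken)
print(round(r, 3))  # close to 1, i.e. a strong positive correlation
```

A value of r near +1 corresponds to the pattern reported above: reviews using more of the recommended methodologies tended to take longer.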

 

A limitation of this study is that it is not possible to glean from the information obtained why there were so many inconsistencies, but it has been suggested that factors such as the need for time efficiency or the number of staff and resources available may have a bearing on the way that RR processes are executed.2,5,63,64 In addition, our searching was limited by the size and scope of the resources available for our data exploration in order to ensure that a manageable sample of studies could be analysed. Therefore, the results are a snapshot of the evidence from international RRs of HTAs and should not be generalised to RRs from all sources.

 

However, we are confident that our results show an obvious need for further research into RR methodology with the aim of creating methodological guidelines for authors of RRs (which could supplement already published guidance on SRs), or a tool or model offering flexible guidance on the methodological processes of a RR. One German study5 proposed a model for processing rapid assessments consisting of a modular system tailored to the demands of the decision maker; modules are obligatory (such as searching) or optional (such as meta-analysis). The model is flexible according to the needs of researchers and decision makers.

 

Other researchers who have carried out various investigative reviews into RRs2,3,58,63 have acknowledged the difficulty in trying to create a 'one size fits all' strategy but concluded that for best practice, RRs should contain clear and transparent description and discussion of methodology utilised and acknowledge any limitations.

 

Conclusion

From the results of our study, it is clear that there were large differences in methodological processes across all domains of the reviews, from the initial processes of formulating research questions and searching for studies through to the data extraction and synthesis of included studies. Methodology was not always clearly reported and transparent, and the time taken to complete a RR from searching to publication could potentially run into years rather than months in a minority of cases. However, it was not possible to assess if there had been a time delay from completion of each RR to publication. A literature search of relevant evidence produced limited specific information on the development of potential tools or guidelines for carrying out HTA RRs. In light of the number of HTAs carried out internationally on new medical technologies, often with strict timelines, there is vast potential and some urgency for greater clarity in this method of research. This would enable commissioners of healthcare and treatments to be confident that the relevant research processes in RRs have been carried out systematically, clearly and effectively within pragmatic time constraints.

 

Acknowledgements

The authors would like to thank the team at Kleijnen Systematic Reviews Ltd, York, UK, and convey special thanks to the following colleagues for specific assistance in the preparation of this study: Kate Misso (searching advice), Nigel Armstrong (statistical assistance), Robert Wolff (German translation) and Carol Forbes (proofreading and editing).

 

The study was fully funded by Kleijnen Systematic Reviews Ltd - where JH is employed as a systematic reviewer and JK is a director. No external funding was received from any outside agency. There are no conflicts of interest.

 

References

 

1. Khan KS, Kunz R, Kleijnen J, Antes G. Systematic Reviews to Support Evidence-Based Medicine: How to Review and Apply Findings of Healthcare Research, 2nd edn. London: Hodder Arnold, 2011.

 

2. Ganann R, Ciliska D, Thomas H. Expediting systematic reviews: methods and implications of rapid reviews. Implement Sci, 2010; 5: 56.

 

3. Watt A, Cameron A, Sturm L. et al. Rapid reviews versus full systematic reviews: an inventory of current methods and practice in health technology assessment. Int J Technol Assess Health Care, 2008; 24: 133-9.

 

4. Rapid Evidence-Assessments (REAs). [Internet]. Vancouver, BC: University of British Columbia Health Library wiki, updated 15 June 2011. Accessed July 2011. Available from: http://hlwiki.slais.ubc.ca/index.php/Rapid_evidence-assessments_(REAs).

 

5. Perleth M, Luhmann D, Gibis B, Droste S. Rapid assessments - quick evaluation of medical technology. Gesundheitswesen, 2001; 63(Suppl. 1): S79-84.

 

6. Van Brabandt H, Neyt M. Percutaneous heart valve implantation in congenital and degenerative valve disease. A rapid Health Technology Assessment. KCE reports 95 [Internet]. Brussels: Belgian Health Care Knowledge Centre (KCE), 2008. Accessed April 2011. 96p. Available from: https://kce.fgov.be/publication/report/percutaneous-heart-valve-implantation-in-.

 

7. Legrand M, Coudron V, Tailleu I. et al. Videoregistratie van endoscopische chirurgische interventies: rapid assessment [Video registration of endoscopic surgical interventions: rapid assessment]. KCE reports 101A [Internet]. Brussels: Belgian Health Care Knowledge Centre (KCE), 2008. Accessed April 2011. 174p. Available from: https://kce.fgov.be/publication/report/video-registration-of-endoscopic-surgery-.

 

8. Higgins JPT, Green S, eds. Cochrane Handbook for Systematic Reviews of Interventions [Internet]. Version 5.1.0 [updated March 2011]. Accessed August 2011. Available from: http://www.cochrane-handbook.org/.

 

9. Centre for Reviews and Dissemination. Systematic Reviews: CRD's Guidance for Undertaking Reviews in Health Care. [Internet]. York: University of York, 2009. Accessed August 2011. Available from: http://www.york.ac.uk/inst/crd/SysRev/!SSL!/WebHelp/SysRev3.htm.

 

10. Shepherd J, Waugh N, Hewitson P. Combination therapy (interferon alfa and ribavirin) in the treatment of chronic hepatitis C: a rapid and systematic review. Health Technol Assess, 2000; 4: 1-67.

 

11. Vlayen J, Camberlin C, Paulus D, Ramaekers D. Rapid assessment van nieuwe wervelzuil technologieen: totale discusprothese en vertebro/ballon kyfoplastie [Rapid assessment of emerging spine technologies: total disc prosthesis and vertebro-/balloon kyphoplasty]. KCE reports 39A [Internet]. Brussels: Belgian Health Care Knowledge Centre (KCE), 2006. Accessed April 2011. 64p. Available from: https://kce.fgov.be/publication/report/rapid-assessment-of-emerging-spine-techno.

 

12. Vlayen J, Camberlin C, Ramaekers D. Vacuumgeassisteerde wondbehandeling: een rapid assessment [Vacuum-assisted wound treatment: a rapid assessment]. KCE reports 61A [Internet]. Brussels: Belgian Health Care Knowledge Centre (KCE), 2007. Accessed April 2011. 86p. Available from: https://kce.fgov.be/publication/report/negative-pressure-wound-therapy-a-rapid-a.

 

13. Ospina M, Harstall C, Dennett L. Sexual Exploitation of Children and Youth over the Internet: Information Paper. [Internet]. Alberta: Institute of Health Economics (IHE), 2010. Accessed April 2011. 84p. Available from: http://www.ihe.ca/documents/Online%20Sexual%20Exploitation.pdf.

 

14. Clarke M. Rapid systematic review of the impact of participation in research. Ongoing project. [Internet], 2008. Accessed April 2011. Available from: http://www.hta.ac.uk/1787.

 

15. Branas P, Jordan R, Fry-Smith A, Burls A, Hyde C. Treatments for fatigue in multiple sclerosis: a rapid and systematic review. Health Technol Assess, 2000; 4: 1-61.

 

16. Bridle C, Palmer S, Bagnall AM. et al. A rapid and systematic review and economic evaluation of the clinical and cost-effectiveness of newer drugs for treatment of mania associated with bipolar affective disorder. Health Technol Assess, 2004; 8: iii-iv, 1-187.

 

17. Chilcott J, Wight J, Lloyd Jones M, Tappenden P. The clinical effectiveness and cost-effectiveness of pioglitazone for type 2 diabetes mellitus: a rapid and systematic review. [Internet]. Health Technol Assess, 2001; 5: 1-61.

 

18. Clegg A, Bryant J, Nicholson T. et al. Clinical and cost-effectiveness of donepezil, rivastigmine and galantamine for Alzheimer's disease: a rapid and systematic review. Health Technol Assess, 2001; 5: 1-137.

 

19. Clegg A, Scott DA, Sidhu M, Hewitson P, Waugh N. A rapid and systematic review of the clinical effectiveness and cost-effectiveness of paclitaxel, docetaxel, gemcitabine and vinorelbine in non-small-cell lung cancer. Health Technol Assess, 2001; 5: 1-195.

 

20. De Laet C, Obyn C, Ramaekers D, Van De Sande S, Neyt M. Hyperbaric oxygen therapy: a rapid assessment. KCE reports 74C [Internet]. Brussels: Belgian Health Care Knowledge Centre (KCE), 2008. Accessed April 2011. 130p. Available from: https://kce.fgov.be/publication/report/hyperbaric-oxygen-therapy-a-rapid-assessm.

 

21. Dinnes J, Cave C, Huang S, Major K, Milne R. The effectiveness and cost-effectiveness of temozolomide for the treatment of recurrent malignant glioma: a rapid and systematic review. Health Technol Assess, 2001; 5: 1-73.

 

22. Forbes C, Shirran L, Bagnall AM, Duffy S, ter Riet G. A rapid and systematic review of the clinical effectiveness and cost-effectiveness of topotecan for ovarian cancer. Health Technol Assess, 2001; 5: 1-110.

 

23. Centre for Reviews and Dissemination (CRD). NHS Economic Evaluation Database (NHS EED) critical abstract of Foxcroft DR, Milne R. Orlistat for the treatment of obesity: rapid review and cost-effectiveness model. Obes Rev, 2000; 1: 121-6.

 

24. Hill R, Bagust A, Bakhai A. et al. Coronary artery stents: a rapid systematic review and economic evaluation. Health Technol Assess, 2004; 8: iii-iv, 1-242.

 

25. Jobanputra P, Parry D, Fry-Smith A, Burls A. Effectiveness of autologous chondrocyte transplantation for hyaline cartilage defects in knees: a rapid and systematic review. Health Technol Assess, 2001; 5: 1-57.

 

26. Lewis R, Whiting P, ter Riet G, O'Meara S, Glanville J. A rapid and systematic review of the clinical effectiveness and cost-effectiveness of debriding agents in treating surgical wounds healing by secondary intention. Health Technol Assess, 2001; 5: 1-131.

 

27. Lloyd Jones M, Hummel S, Bansback N, Orr B, Seymour M. A rapid and systematic review of the evidence for the clinical effectiveness and cost-effectiveness of irinotecan, oxaliplatin and raltitrexed for the treatment of advanced colorectal cancer. Health Technol Assess, 2001; 5: 1-128.

 

28. Centre for Reviews and Dissemination (CRD). Database of Abstracts of Reviews of Effects (DARE) critical abstract of Nicholson T, Milne R, Stein K. Dalteparin and enoxaparin for unstable angina and non-Q-wave myocardial infarction: update. Southampton: Wessex Institute for Health Research and Development. Development and Evaluation Committee Report 108, 2000.

 

29. O'Meara S, Riemsma R, Shirran L, Mather L, ter Riet G. A rapid and systematic review of the clinical effectiveness and cost-effectiveness of orlistat in the management of obesity. Health Technol Assess, 2001; 5: 1-81.

 

30. Song F, O'Meara S, Wilson P, Golder S, Kleijnen J. The effectiveness and cost-effectiveness of prophylactic removal of wisdom teeth. Health Technol Assess, 2000; 4: 1-55.

 

31. Ward S, Morris E, Bansback N. et al. A rapid and systematic review of the clinical effectiveness and cost-effectiveness of gemcitabine for the treatment of pancreatic cancer. Health Technol Assess, 2001; 5: 1-70.

 

32. Stordeur S, Gerkens S, Roberfroid D. Interspinous implants and pedicle screws for dynamic stabilization of lumbar spine: rapid assessment. KCE reports 116C [Internet]. Brussels: Belgian Health Care Knowledge Centre (KCE), 2009. Accessed April 2011. 150p. Available from: https://kce.fgov.be/publication/report/interspinous-implants-and-pedicle-screws-.

 

33. Australian Safety and Efficacy Register of New Interventional Procedures - Surgical. Rapid review: clinical treatments for wrist ganglia. ASERNIP-S Report No. 63 [Internet]. East Melbourne, Australia: ASERNIPS; Australian Government Department of Health and Ageing; Royal Australasian College of Surgeons, 2008. Accessed April 2011. 41p. Available from: http://www.surgeons.org/media/6596/Clinical_treatment_for_wrist_ganglia.pdf.

 

34. Australian Safety and Efficacy Register of New Interventional Procedures - Surgical. Rapid review: diagnostic arthroscopy for conditions of the knee. ASERNIP-S Report No. 64 [Internet]. East Melbourne, Australia: ASERNIPS; Australian Government Department of Health and Ageing; Royal Australasian College of Surgeons, 2008. Accessed April 2011. 35p. Available from: http://www.surgeons.org/racs/research-and-audit/asernip-s/asernip-s-publications.

 

35. Australian Safety and Efficacy Register of New Interventional Procedures - Surgical. Rapid review: male non-therapeutic circumcision. ASERNIP-S Report No. 65 [Internet]. East Melbourne, Australia: ASERNIPS; Australian Government Department of Health and Ageing; Royal Australasian College of Surgeons, 2008. Accessed April 2011. 73p. Available from: http://www.surgeons.org/racs/research-and-audit/asernip-s/asernip-s-publications.

 

36. Australian Safety and Efficacy Register of New Interventional Procedures - Surgical. Rapid review: treatments for varicose veins. ASERNIP-S Report No. 66 [Internet]. East Melbourne, Australia: ASERNIPS; Australian Government Department of Health and Ageing; Royal Australasian College of Surgeons, 2008. Accessed April 2011. 51p. Available from: http://www.surgeons.org/racs/research-and-audit/asernip-s/asernip-s-publications.

 

37. Australian Safety and Efficacy Register of New Interventional Procedures - Surgical. Rapid review: upper airway surgery for the treatment of adult obstructive sleep apnoea. ASERNIP-S Report No. 67 [Internet]. East Melbourne, Australia: ASERNIPS; Australian Government Department of Health and Ageing; Royal Australasian College of Surgeons, 2008. Accessed April 2011. 57p. Available from: http://www.surgeons.org/racs/research-and-audit/asernip-s/asernip-s-publications.

 

38. Tsakonas E, Moulton K, Spry C. FDG-PET to assess infections: a review of the evidence. Health technology assessment rapid review. [Internet]. Ottawa, Canada: Canadian Agency for Drugs and Technologies in Health, 2008. Accessed May 2011. 23p. Available from: http://www.cadth.ca/media/pdf/I3016_FDG-PET_Assess_Infections_htis-3_e.pdf.

 

39. Brown A, Coyle D, Cimon K, Farrah K. Hip protectors in long-term care: a clinical and cost-effectiveness review and primary economic evaluation. Health technology assessment rapid review. [Internet]. Ottawa, Canada: Canadian Agency for Drugs and Technologies in Health, 2008. Accessed May 2011. 28p. Available from: http://www.cadth.ca/media/pdf/I3015_Hip_Protectors_Long_Term_Care_tr_e.pdf.

 

40. Murphy G, Prichett-Pejic W, Severn M. Non-emergency telecardiology consultation services: rapid review of clinical and cost outcomes. Technology Report No. 134. [Internet]. Ottawa, Canada: Canadian Agency for Drugs and Technologies in Health, 2010. Accessed April 2011. 33p. Available from: http://www.cadth.ca/media/pdf/H0501_Telecardiology_Report_e.pdf.

 

41. Ndegwa S, Prichett-Pejic W, McGill S, Murphy G, Severn M. Teledermatology services: rapid review of diagnostic, clinical management, and economic outcomes. Technology Report No. 135. [Internet]. Ottawa, Canada: Canadian Agency for Drugs and Technologies in Health, 2010. Accessed April 2011. 41p. Available from: http://www.cadth.ca/media/pdf/H0502_Teledermatology_Report_e.pdf.

 

42. Clegg A, Bryant J, Milne R. Disease-modifying drugs for multiple sclerosis: a rapid and systematic review. [Internet]. Health Technol Assess, 2000; 4: i-iv, 1-101.

 

43. De Broe S, Christopher F, Waugh N. The role of specialist nurses in multiple sclerosis: a rapid and systematic review. Health Technol Assess, 2001; 5: 1-47.

 

44. Hulstaert F, Thiry N, Eyssen M, Vrijens F. Pharmaceutical and non-pharmaceutical interventions for Alzheimer's disease: a rapid assessment. KCE reports 111C [Internet]. Brussels: Belgian Health Care Knowledge Centre (KCE), 2009. Accessed April 2011. 112p. Available from: https://kce.fgov.be/publication/report/pharmaceutical-and-non-pharmaceutical-int.

 

45. Karnon J, Peters J, Platt J, Chilcott J, McGoogan E, Brewer N. Liquid-based cytology in cervical screening: an updated rapid and systematic review and economic analysis. Health Technol Assess, 2004; 8: iii, 1-78.

 

46. McDonagh MS, Bachmann LM, Golder S, Kleijnen J, ter Riet G. A rapid and systematic review of the clinical effectiveness and cost-effectiveness of glycoprotein IIb/IIIa antagonists in the medical management of unstable angina. Health Technol Assess, 2000; 4: 1-95.

 

47. Obyn C, Mambourg F. Rapid assessment van enkele nieuwe behandelingen voor prostaatkanker en goedaardige prostaathypertrofie [Rapid assessment of a selection of new treatments for prostate cancer and benign prostatic hypertrophy]. KCE reports 89A [Internet]. Brussels: Belgian Health Care Knowledge Centre (KCE), 2008. Accessed April 2011. 88p. Available from: https://kce.fgov.be/publication/report/rapid-assessment-of-a-selection-of-new-tr.

 

48. Endovascular Repair of Abdominal Aortic Aneurysms in Low Surgical Risk Patients: rapid review. Toronto, ON: Medical Advisory Secretariat, January 2010. 14p.

 

49. Parkes J, Bryant J, Milne R. Implantable cardioverter defibrillators: arrhythmias. A rapid and systematic review. Health Technol Assess, 2000; 4: 1-69.

 

50. Payne N, Chilcott J, McGoogan E. Liquid-based cytology in cervical screening: a rapid and systematic review. Health Technol Assess, 2000; 4: 1-73.

 

51. Stewart A, Sandercock J, Bryan S. et al. The clinical effectiveness and cost-effectiveness of riluzole for motor neurone disease: a rapid and systematic review. Health Technol Assess, 2001; 5: 1-97.

 

52. Van Brabandt H, Neyt M. Endobronchial valves in the treatment of severe pulmonary emphysema: a rapid health technology assessment. KCE reports 114C [Internet]. Brussels: Belgian Health Care Knowledge Centre (KCE), 2009. Accessed April 2011. 62p. Available from: https://kce.fgov.be/publication/report/endobronchial-valves-in-the-treatment-of-.

 

53. Meads C, Cummins C, Jolly K, Stevens A, Burls A, Hyde C. Coronary artery stents in the treatment of ischaemic heart disease: a rapid and systematic review. Health Technol Assess, 2000; 4: 1-153.

 

54. Australian Safety and Efficacy Register of New Interventional Procedures - Surgical. Brief review: fast-track surgery and enhanced recovery after surgery (ERAS) programs. ASERNIP-S Report No. 74 [Internet]. East Melbourne, Australia: ASERNIPS; Royal Australasian College of Surgeons, 2009. Accessed April 2011. 57p. Available from: http://www.surgeons.org/racs/research-and-audit/asernip-s/asernip-s-publications.

 

55. Australian Safety and Efficacy Register of New Interventional Procedures - Surgical. Rapid review: robotic-assisted surgery for urological, cardiac and gynaecological procedures. ASERNIP-S Report No. 75 [Internet]. East Melbourne, Australia: ASERNIPS; Royal Australasian College of Surgeons, 2009. Accessed April 2011. 131p. Available from: http://www.surgeons.org/racs/research-and-audit/asernip-s/asernip-s-publications.

 

56. Lister-Sharp D, McDonagh MS, Khan KS, Kleijnen J. A rapid and systematic review of the effectiveness and cost-effectiveness of the taxanes used in the treatment of advanced breast and ovarian cancer. Health Technol Assess, 2000; 4: 1-113.

 

57. Jadad AR, Moore RA, Carroll D. et al. Assessing the quality of reports of randomized clinical trials: is blinding necessary? Control Clin Trials, 1996; 17: 1-12.

 

58. Grant MJ, Booth A. A typology of reviews: an analysis of 14 review types and associated methodologies. Health Info Libr J, 2009; 26: 91-108.

 

59. Van de Velde S, De Buck E, Dieltjens T, Aertgeerts B. Medicinal use of potato-derived products: conclusions of a rapid versus full systematic review. Phytother Res, 2011; 25: 787-8.

 

60. Bravata DM, Olkin I. Simple pooling versus combining in meta-analysis. Eval Health Prof, 2001; 24: 218-30.

 

61. Sampson M, Shojania KG, Garritty C, Horsley T, Ocampo M, Moher D. Systematic reviews can be produced and published faster. J Clin Epidemiol, 2008; 61: 531-6.

 

62. Hailey D, Corabian P, Harstall C, Schneider W. The use and impact of rapid health technology assessments. Int J Technol Assess Health Care, 2000; 16: 651-6.

 

63. Watt A, Cameron A, Sturm L. et al. Rapid versus full systematic reviews: validity in clinical practice? ANZ J Surg, 2008; 78: 1037-40.

 

64. Watt A. Executive summary: rapid versus full systematic reviews: an inventory of current methods and practice in health technology assessment. ASERNIP-S Report No. 60 [Internet]. East Melbourne, Australia: ASERNIPS; Australian Government Department of Health and Ageing; Royal Australasian College of Surgeons, 2007. Accessed April 2011. 3p. Available from: http://www.surgeons.org/media/16323/Rapidvsfull2007_executivesummary.pdf.

Appendix I

 

Referenced table of included rapid reviews - country of origin and areas of research (in alphabetical order of author)

Appendix II

 

Studies with no research questions: were clear aims and objectives stated and/or a research question inferred?

 

Key words: Cochrane Library; health technology assessment; methodology; rapid review; timeline plot