Introduction
Qualitative evidence is of increasing importance in health services policy, planning and delivery. It can play a significant role in understanding how individuals and communities perceive health, manage health and make decisions related to health service usage.1 As with quantitative research, the results of a single study alone should not be used to guide practice.2 To develop recommendations for evidence-based healthcare practice, pooled data, rather than the findings of single studies, are necessary, and thus the findings of qualitative research should be synthesized.3 As such, there is increasing interest in the potential of qualitative evidence synthesis to inform complex decision-making processes in policy and practice.3,4
A number of different methods have been proposed for the pooling of qualitative findings.5-7 These include thematic synthesis, narrative synthesis, realist synthesis, content analysis, meta-ethnography and meta-aggregation.5-7 Meta-ethnography and meta-aggregation are two common approaches to synthesis.
The meta-aggregative approach (as advocated by the Joanna Briggs Institute [JBI]) was developed in the early 2000s by an expert group of international qualitative researchers.1-3 The group concluded that an approach congruent with systematic review methods could be developed that would respect and incorporate the philosophical traditions of the critical and interpretive paradigms, while also promoting qualitative concepts related to dependability, credibility and transferability; in other words, a valid and reliable approach to systematic reviews that would result in an auditable decision trail. As such, a comprehensive systematic search is required in all meta-aggregative reviews. There now exists clear guidance regarding the exact processes a qualitative systematic reviewer should follow when using the meta-aggregative approach.3
Meta-aggregation, similar to most methods of qualitative research synthesis,7 is a robust approach able to deal with heterogeneity or differences across studies. This is largely because meta-aggregation focuses on synthesizing study findings (the author's analytical interpretation of study data), not the study data itself (such as the empirical data collected). Therefore, as long as two or more studies focus on the same phenomenon of interest, their findings can be pooled, regardless of the study methodology (e.g. phenomenology, ethnography or grounded theory) or method used. This is a critical assumption of meta-aggregation. Not all approaches agree with synthesizing across different traditions and methodologies.8,9 The original conception of meta-ethnography by Noblit and Hare advised only synthesizing across studies that have used the same method.9 However, reviewers using approaches that do synthesize across traditions, such as meta-aggregation, consider the combining of data from multiple theoretical and methodological traditions a strength.8
Although critical appraisal is not a necessary stage in some approaches to qualitative synthesis (in meta-ethnography, for example, the practice remains contentious),10 it is required in all meta-aggregative reviews. Garratt and Hodkinson argue that it is both illogical and pointless to attempt to predetermine a definitive set of criteria against which all qualitative research should be judged.11 Nonetheless, in recent years, the number of critical appraisal and quality assessment tools has increased rapidly. There is now general acceptance of the need for high quality qualitative research, and for some form of appraisal of studies to assess methodological limitations during the review process. However, there is still much debate regarding which criteria or checklist to use to evaluate qualitative research, whether studies should be excluded following appraisal, and whether cut-off points or sum scales should be used.3,4,12
Meta-aggregation aggregates the findings of included studies. It requires reviewers to identify and extract the findings from studies included in the review, to categorize these study findings and to aggregate these categories to develop synthesized findings.3 This approach essentially mirrors the processes of a quantitative review whilst holding to the traditions and requirements of qualitative research. The essential characteristic of a meta-aggregative review is that the reviewer avoids re-interpretation of included studies, and instead presents the findings of the included studies as intended by the original authors. Meta-aggregation is based on a clear protocol that defines the question and the methods for answering it through the data retrieved. A comprehensive search strategy is required, as is critical appraisal using standardized critical appraisal instrument(s). Data extraction involves extracting findings, in addition to the data that give rise to those findings, using a data extraction instrument. Synthesis involves the aggregation of findings into categories, and of categories into synthesized findings that inform practice or policy. A standardized visual representation is used in meta-aggregation to present the findings, categories and synthesized findings.
Extracting findings is both the second phase of data extraction and the first step in data synthesis. For qualitative evidence, the units of extraction in this process are specific findings (and the illustrations from the text that demonstrate the origins of those findings). In meta-aggregation, a finding is defined as "a verbatim extract of the author's analytic interpretation of the results or data".3(P.183) The "data" may be in the form of a theme, metaphor or rich description.
Meta-aggregation also requires reviewers to extract an illustrative excerpt that the researcher presents in support of each particular finding. Findings that cannot be linked back to the research participants may not be considered as demonstrably credible as findings where the author's analytic interpretation can be verified by the research participants. From the JBI perspective, each finding that is extracted will, where possible, be supported by an illustration that is a verbatim extraction of the words of a participant from that published piece of research. Where this is not possible, the illustration may be either a fieldwork observation or "other supporting data" (e.g. a photograph, an opinion, a newspaper article, a painting, a mask, an object or another artifact). It is only necessary to extract one such supporting illustration. The supporting illustration must always be a direct extraction of the words used by the researcher to illustrate the finding.
The JBI meta-aggregative approach is an important and impactful approach to qualitative research synthesis because it moves beyond theory to produce statements in the form of "lines of action" which then lead to recommendations for policy and practice. The final stage in the meta-aggregative process is to develop a meta-synthesis, a set of synthesized findings that draw some conclusions of use to practice. By contrast, many other qualitative synthesis methods only suggest implications for action that can be drawn or inferred from the synthesis exercise. A synthesized finding, as defined by the JBI, is an overarching description of a group of categorized findings that allow for the generation of recommendations for practice.3 Theoretically, this pragmatic approach allows for the delivery of readily usable synthesized findings, based on the voices of relevant stakeholders, to inform decision-making at the clinical or policy level.3 This is often produced in the form of an "if-then" statement or in a more indicatory form. The synthesized findings produced using meta-aggregative methods are practice theory statements grounded in the data. The result is a summary of the evidence in terms of its implications for practice.
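To make the structure just described concrete, the following minimal sketch (in Python, purely illustrative and not part of any JBI tooling; all class and field names are our own) shows how extracted findings, their supporting illustrations and credibility levels, categories and synthesized findings relate to one another in a meta-aggregative flowchart.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Finding:
    text: str          # verbatim extract of the author's analytic interpretation
    illustration: str  # verbatim participant quotation or other supporting data
    credibility: str   # assigned level of credibility, e.g. "unequivocal"

@dataclass
class Category:
    label: str         # shared meaning that groups findings of similar meaning
    findings: List[Finding] = field(default_factory=list)

@dataclass
class SynthesizedFinding:
    statement: str     # overarching, indicatory ("if-then") statement
    categories: List[Category] = field(default_factory=list)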
Since the early 2000s, formal methodological guidance for conducting meta-aggregative reviews has existed. In 2012, a framework for reporting the synthesis of qualitative research was developed, the Enhancing Transparency in Reporting the Synthesis of Qualitative Research (ENTREQ) statement.13 The ENTREQ statement is an internationally accepted reporting standard for qualitative research synthesis. Despite the guidance available, to our knowledge, no research has yet addressed the extent to which published meta-aggregative reviews conform to the methodological guidance and to the ENTREQ statement. A recent review of meta-ethnographic studies found that, despite the availability of clear guidance, many reviewers had applied this method inappropriately.14 If meta-aggregative reviews fail to adhere to the available guidance, this could affect the transferability and usefulness of any recommendations for policy and practice generated by these reviews. If this guidance is not being followed, further research could help to determine why this is the case and how compliance with the guidance may be facilitated. Therefore, the purpose of this methodological systematic review is to determine the extent to which published meta-aggregative reviews conform to both the available methodological guidance and the ENTREQ statement.
Inclusion criteria
Types of authors
Systematic reviews conducted by any author team will be considered for inclusion.
Types of studies
This methodological review will consider reviews that state they have used a meta-aggregative approach, or that they are a JBI qualitative review, and have been published in the JBI Database of Systematic Reviews and Implementation Reports (JBISRIR).
Systematic reviews that include more than one evidence type (e.g. an effectiveness review with a qualitative or meta-aggregative component) will not be eligible for inclusion. This is warranted as the JBI methodological guidance for meta-aggregative reviews and the ENTREQ statement are not designed to address the conduct and reporting, respectively, of comprehensive or mixed methods systematic reviews that include a meta-aggregative component.
Systematic reviews published since 2015 will be included in this review, as the most recent JBI Reviewer's Manual and guidance for qualitative meta-aggregative reviews were published in 2014.15
Types of data
The specific data of interest for this review are the steps and/or processes related to the design, conduct and reporting of JBI qualitative systematic reviews using meta-aggregation, particularly in terms of compliance with JBI and ENTREQ reporting criteria and guidance. The specific type of data is described in further detail in the data extraction section.
Search strategy
This review will search the JBISRIR since 2015 to identify published reviews following the meta-aggregative approach. Key terms for searching will include, but not be limited to, "qualitative", "meta-synthesis", "meta-aggregation" and "Qualitative Assessment and Review Instrument (QARI)". We are aware that researchers have published systematic reviews claiming to have followed the JBI meta-aggregative approach in other journals; however, we are only interested in formal JBI reviews in this project.
Methods
Study selection
Following the search of the JBISRIR, the titles and abstracts of the retrieved citations will be imported into EndNote (Clarivate Analytics, PA, USA). Two independent reviewers will screen all citations by title and abstract to determine potential eligibility. If the reviewers are unable to determine eligibility from the title and abstract alone, the full text will be retrieved and reviewed. Discrepant decisions will be discussed between the two reviewers and, if needed, a third reviewer will arbitrate the decision.
All citations that pass title and abstract screening will be retrieved in full text, and a second round of screening will be performed by two reviewers to confirm final inclusion in the review. If studies are excluded at the full-text stage, a reason will be provided for their exclusion. Discrepant decisions will be discussed between the two reviewers and, if needed, a third reviewer will arbitrate the decision.
Data extraction
Data extraction will be conducted independently by two reviewers using a standardized data extraction tool. The tool will be designed in an Excel spreadsheet, and each reviewer will extract data directly into the spreadsheet.
A pilot of the data extraction tool will be conducted prior to formal data extraction. A subset of three studies will be selected for the pilot extraction and data will be extracted independently by the two reviewers. Upon completion, the two reviewers will discuss the tool and make any required changes. Discrepant decisions will be discussed between the two reviewers and, if needed, a third reviewer will arbitrate the decision. The fields of data that will be extracted are shown in Appendix I.
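As an illustration only, the extraction spreadsheet could be laid out as in the sketch below (Python; the column names are hypothetical and simply mirror the fields listed in Appendix I, with one row per appraisal item per included review).

import pandas as pd

# Hypothetical layout: one row for each JBI or ENTREQ item assessed for each review.
RESPONSE_OPTIONS = ["yes", "no", "partially", "unclear"]

columns = [
    "review_id",         # citation identifier of the included meta-aggregative review
    "guidance",          # "JBI" or "ENTREQ"
    "item_number",       # item number from Appendix I
    "item_description",  # wording of the item
    "response",          # one of RESPONSE_OPTIONS
    "reviewer_notes",    # free text (e.g. databases searched, missing criteria)
]

template = pd.DataFrame(columns=columns)
template.to_excel("data_extraction_template.xlsx", index=False)  # requires openpyxl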
Data synthesis
Each study will be assessed for compliance with the ENTREQ statement and the JBI guidance for meta-aggregation.
A "yes" response will indicate compliance with the guidance. A "no" response will indicate the guidance has not been met or followed. Full compliance will be determined when a single study receives all "yes" responses. Partial compliance will be determined when a single study receives a mixture of both yes and no responses. Zero compliance to the guidance will be determined when a single study receives all "no" responses.
Descriptive statistics will be used to analyze the data. Where appropriate, frequency distributions and percentages will be presented. Data will be presented in tables and graphs, and a narrative will be provided to describe the findings.
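The compliance rules and descriptive statistics described above could be computed as in the following illustrative sketch, which assumes the hypothetical spreadsheet layout shown earlier and is not itself part of the protocol.

import pandas as pd

extracted = pd.read_excel("data_extraction_template.xlsx")  # completed extraction sheet

def classify(responses):
    # Full compliance: all "yes"; zero compliance: all "no"; otherwise partial.
    if (responses == "yes").all():
        return "full"
    if (responses == "no").all():
        return "zero"
    return "partial"

# Compliance of each review, assessed separately against the JBI and ENTREQ items
compliance = extracted.groupby(["review_id", "guidance"])["response"].apply(classify)

# Frequency distribution of responses per item, plus the percentage of "yes" responses
per_item = (
    extracted.groupby(["guidance", "item_number"])["response"]
    .value_counts()
    .unstack(fill_value=0)
)
per_item["percent_yes"] = 100 * per_item.get("yes", 0) / per_item.sum(axis=1)

print(compliance)
print(per_item)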
Appendix I: Data extraction tool
JBI methods
To measure compliance with JBI guidance, the items below will be assessed as either met (yes), not met (no), partially met (partially), or unclear (unclear).
1. Was there reference to a protocol? Determined by an in-text reference in the background or a citation in the reference list.
2. (a) Were the population, phenomena of interest and context presented for the review inclusion criteria? (b) If any are missing, list what was missed.
3. Were multiple types of primary qualitative research designs included (or considered for inclusion)?
4. (a) Were two or more databases searched? (b) List the databases searched.
5. (a) Were gray literature resources searched? (b) List which gray literature sources were searched.
6. Was study screening/selection performed by two reviewers?
7. Was there accurate reporting of exclusion following full-text screening? Check to see if reasons were provided for exclusion at full-text screening.
8. (a) Was critical appraisal conducted? (b) Identify if a different tool than the JBI Critical Appraisal Checklist for Qualitative Research or QARI checklist was used.
9. Did two or more reviewers conduct critical appraisal?
10. Was it clear how the results of critical appraisal were considered in the review? Make a judgement based on the answer to the following four sub-questions: (a) Were studies excluded after critical appraisal? (b) If studies were excluded, was there justification for this exclusion? (c) If studies were excluded, describe how this decision was made (ad hoc or specified in the protocol). (d) Did the results of critical appraisal impact the analysis or interpretation of the results?
11. Did two or more reviewers perform data extraction?
12. Were findings extracted along with illustrations?
13. Were findings assigned a level of credibility?
14. Were findings extracted verbatim? To determine this, check the findings from one paper included in the review (the paper will be the first in terms of alphabetical order by surname in the included studies; if all findings from that one paper were extracted verbatim this can be judged as "yes").
15. Were illustrations for findings extracted verbatim? To determine this, check the illustrations from one paper included in the review (the paper will be the first in terms of alphabetical order by surname in the included studies; if all illustrations from that one paper were extracted verbatim this can be judged as "yes").
16. Were categories formed by grouping together findings based on their similarity in meaning? Check the first category in the first presented meta-aggregative flowchart and make a judgement based on the presented findings.
17. Were synthesized findings: (a) Worded as indicatory statements? Check whether they are worded as "if-then" statements or whether terms such as "may" or "will" were used. (b) Worded as recommendations (this should not be the case)?
18. Was a schematic of the synthesis process provided? This should be in the form of a meta-aggregative flowchart or a table with three columns.
19. Was JBI SUMARI or the JBI Critical Appraisal Checklist for Qualitative Research or QARI used for the systematic review process? Check that the authors explicitly make mention of using JBI software for their review, or that it is clear from the report that SUMARI or the JBI Critical Appraisal Checklist for Qualitative Research or QARI was used (i.e. the meta-aggregative flowchart is a direct export from JBI SUMARI or the JBI Critical Appraisal Checklist for Qualitative Research or QARI).
20. (a) Were recommendations for practice provided? (b) Any extra comments regarding the recommendations (does not require an answer; this is just an opportunity to note any interesting practices seen in this section).
21. Were recommendations: (a) Graded with a grade of recommendation? (b) Provided with a level of evidence (this should not be the case)?
22. Was a ConQual Summary of Findings table provided?
ENTREQ
To measure compliance with the ENTREQ statement, the 21 items of the ENTREQ statement will be assessed as either met (yes), not met (no), partially met (partially), or unclear (unclear).
1. Aim: Has the research question been stated?
2. Synthesis methodology: Are the following described? (a) Synthesis methodology or theoretical framework. (b) Rationale for this choice.
3. Approach to searching: (a) Was the search pre-planned? (b) Was the search described as either comprehensive or iterative? (c) Describe whether the search was comprehensive or iterative.
4. Inclusion criteria: Were the inclusion/exclusion criteria specified (e.g. in terms of population, language, year limits, type of publication, study type)?
5. Data sources: (a) Were the information sources used described? (b) Was information on when the searches were conducted given? (c) Was the rationale for using the data sources provided?
6. Electronic search strategy: Were the literature search and the key terms used described?
7. Study screening methods: Were the methods for screening studies described?
8. Study characteristics: Were the characteristics of the included studies presented?
9. Study selection results: Were the study selection results described (i.e. including number of studies screened and reasons for study exclusion)?
10. Rationale for appraisal: Was the rationale for appraisal included?
11. Appraisal items: Was the tool(s) used for appraisal presented?
12. Appraisal process: Was there information regarding whether the appraisal was conducted independently by more than one reviewer and if consensus was required?
13. Appraisal results: (a) Were the results of the quality assessment presented? (b) Was the rationale for excluding studies presented?
14. Data extraction: Was the data extraction process described including an indication of what data was extracted from what section of the included studies?
15. Software: Was the software used to assist analysis (if any) mentioned?
16. Number and identity of reviewers: Were the following identified? (a) The number of reviewers. (b) The identity of the reviewers involved in extraction and synthesis.
17. Coding: Was the process for coding of data described?
18. Study comparison: Was the process of how comparisons were made within and across studies described?
19. Derivation of themes: Was the process for derivation of themes described?
20. Quotations: (a) Were quotations from the primary studies provided to illustrate themes/constructs? (b) Was it identified whether the quotations were participant quotations or the author's interpretation (check the first three quotations included in the results of the review to determine whether they have been identified as being from the author or participants of the primary study; if all three have been identified, answer "yes")?
21. Synthesis output: Were rich, compelling and useful results that go beyond a summary of the primary studies presented?
References