Authors

Suhail A. R. Doi, FRCP, PhD


This issue of the journal is devoted to methodology, and the vast majority of the articles deal with the methodology of research synthesis and how such methods have evolved in recent years. Even so, conventional methods remain at the forefront of research synthesis output today. Specifically, the methodology of meta-analysis remains dominated by application of the random-effects model.1 Two key issues need to be addressed when this model is applied. The first is its use in situations where meta-analysis itself is not appropriate. The second is whether the model is fit for purpose in situations where meta-analysis is appropriate.

Regarding the first problem, meta-analysis is only appropriate when we are trying to make an inference about an unknown true effect. In other words, regardless of the extent of heterogeneity across studies, we still believe that all of these studies are attempting to measure the same effect, albeit with varying success. The varying success in estimating this effect is then a consequence of systematic and random error. When it is not possible to accept that the studies are measuring the same effect, meta-analysis is inappropriate.2 Unfortunately, this has not stopped researchers from using meta-analysis for such studies, and a key example is studies that aim to estimate the burden of disease. Indeed, studies of regional burden of disease have generated point estimates from diverse populations using fixed- or random-effects models.3,4 This, however, is clearly wrong because the aim is population standardization rather than estimation of an unknown true underlying effect, and inverse variance weights based on study precision therefore have no real justification. For example, a researcher might want to estimate the burden of rotavirus infection in sub-Saharan Africa. If the estimates derived from the various countries in this region are subjected to meta-analysis to obtain a pooled estimate, the result would be incorrect because this is no longer a meta-analytic problem but one of population standardization. In other words, we need a population-weighted average, not a study inverse variance-weighted average.5
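
To make the distinction concrete, the following minimal sketch (in Python, with purely hypothetical prevalence figures and populations) contrasts the two weighting schemes. All numbers are illustrative assumptions, not data from the cited reviews.

```python
# Hypothetical country-level estimates: (prevalence, standard error,
# population in millions). Illustrative values only.
country_estimates = [
    (0.30, 0.01, 5.0),   # small country, large and precise study
    (0.10, 0.05, 90.0),  # populous country, small and imprecise study
]

# Inverse variance weighting (the meta-analytic approach): weights reflect
# study precision, irrespective of how many people each estimate represents.
iv_w = [1 / se ** 2 for _, se, _ in country_estimates]
iv_pooled = sum(w * p for w, (p, _, _) in zip(iv_w, country_estimates)) / sum(iv_w)

# Population weighting (standardization): weights reflect population size.
pop_w = [pop for _, _, pop in country_estimates]
pop_pooled = sum(w * p for w, (p, _, _) in zip(pop_w, country_estimates)) / sum(pop_w)

print(f"inverse variance weighted: {iv_pooled:.3f}")  # ~0.292, driven by the precise study
print(f"population weighted:       {pop_pooled:.3f}")  # ~0.111, driven by the populous country
```

The two figures differ substantially because they answer different questions; only the population-weighted figure speaks to regional burden.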


The second problem is the controversy surrounding random-effects models.6 A major point of contention has been that the assumption of normally distributed random effects violates the basic principle of randomization in statistical inference.7 The hypothetical common variance of these so-called random effects serves only as a nuisance parameter if there are no real random effects. The end result of applying this nuisance parameter to the meta-analytic weights is to markedly increase estimator variance8 and to equalize the weights9 by penalizing the larger studies.8,9 In this situation, the confidence interval widening produced by the common variance is of little use because it cannot keep up with the true estimator variance, a situation termed overdispersion in statistics. For a long time, statisticians have tried to correct this overdispersion by modifying the computation of the confidence interval, but they have failed to produce a satisfactory solution.6,10,11 They have not succeeded because they focused solely on the confidence interval; the only way to correct the problem is not just to widen the confidence interval but also to avoid meddling with the weights.8
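
The weight-equalizing effect described above is easy to demonstrate. The following sketch, using hypothetical effect sizes and variances, computes the DerSimonian-Laird estimate of the common between-study variance (tau^2) and shows how adding it to every study's variance pulls the normalized weights toward equality at the expense of the largest study.

```python
# Hypothetical effect sizes and within-study variances; the smallest
# variance corresponds to the largest study. Illustrative values only.
effects = [0.10, 0.50, 0.90]
variances = [0.01, 0.04, 0.25]

# Fixed-effect (inverse variance) weights and pooled estimate
w_fe = [1 / v for v in variances]
sw = sum(w_fe)
pooled = sum(w * y for w, y in zip(w_fe, effects)) / sw

# DerSimonian-Laird estimate of the between-study variance tau^2
k = len(effects)
q = sum(w * (y - pooled) ** 2 for w, y in zip(w_fe, effects))
tau2 = max(0.0, (q - (k - 1)) / (sw - sum(w ** 2 for w in w_fe) / sw))

# Random-effects weights: tau^2 is added to every study's variance
w_re = [1 / (v + tau2) for v in variances]

for label, w in (("fixed-effect  ", w_fe), ("random-effects", w_re)):
    total = sum(w)
    print(label, [round(wi / total, 2) for wi in w])
# fixed-effect   [0.78, 0.19, 0.03]
# random-effects [0.51, 0.37, 0.13]  <- pulled toward 1/k each
```

The larger tau^2 becomes relative to the within-study variances, the closer the random-effects weights get to 1/k each, regardless of study size.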


This year, two groups working independently have attempted to address this issue from a different perspective.8,12 The new approach is based on the fact that the fixed-effect estimator has a lower true variance than the random-effects estimator, even under heterogeneity.8 This is because, without additional information from the studies beyond the effect size and its standard error, the smallest estimator variance is obtained with inverse variance weights.13 Even though the inverse variance weighted estimator is biased under heterogeneity, the variance reduction due to inverse variance weighting (compared with the natural weighting) more than offsets this bias. Therefore, the optimal approach to meta-analysis of heterogeneous studies is to use the fixed-effect model and adjust its confidence interval to allow for heterogeneity.8 In other words, widening of the confidence interval should be undertaken independently of the weighting in the meta-analysis. The random-effects approach aims to create a more fully specified probability model but fails because that probability model is incorrectly specified.8 Moreover, the assumption of truly varying real effects across studies would itself make meta-analysis inappropriate, as argued above.
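
The optimality result invoked here (reference 13) is the classical one for a weighted mean of independent estimates: given effect estimates y_i with known within-study variances v_i, the variance of the weighted mean is minimized by inverse variance weights. A brief statement of the result in standard notation:

```latex
\hat{\theta} = \frac{\sum_i w_i y_i}{\sum_i w_i},
\qquad
\operatorname{Var}(\hat{\theta}) = \frac{\sum_i w_i^2 v_i}{\left(\sum_i w_i\right)^2}.
% By the Cauchy-Schwarz inequality,
% (\sum_i w_i)^2 \le (\sum_i w_i^2 v_i)(\sum_i 1/v_i), and therefore
\operatorname{Var}(\hat{\theta}) \ge \frac{1}{\sum_i 1/v_i},
% with equality exactly when w_i \propto 1/v_i, the inverse variance weights.
```

Under heterogeneity, each y_i varies about the overall mean with variance v_i + tau^2 rather than v_i alone, which is where the bias-variance trade-off described above arises.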


The correct approach, therefore, is a fixed-effect meta-analysis with correction of overdispersion through a quasi-likelihood-like approach.8 Two proposals for a scale parameter that would enable such an approach have recently been published. One has been called the weighted least squares meta-analysis model12 and the other the inverse variance heterogeneity meta-analysis model.8 Only the latter has been implemented in software readily available to researchers (http://www.epigear.com). Simulation studies demonstrate quite clearly the superiority of this approach over random-effects modeling. Extensive simulations by my group confirm that the inverse variance heterogeneity model has a smaller variance and mean squared error than the random-effects model.8 It also retains nominal (95%) coverage, unlike the random-effects model, whose coverage drops off steeply.8 On both counts the random-effects model therefore fails and, because it underestimates the statistical error, up to one in five truly negative results may be spuriously positive under this model (a type I error of up to 20%).8,10 It is therefore probably a good time for organizations such as the Joanna Briggs Institute, the Cochrane Collaboration, and the Campbell Collaboration to review their recommendations around use of the random-effects model in meta-analysis. This requires an urgent evaluation of these newer models, and if they are indeed fit for purpose then the random-effects model must be abandoned.
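
For readers who want to see the shape of such an estimator, the following is a minimal Python sketch of an IVhet-style analysis based on my reading of the description in reference 8: the point estimate retains the fixed-effect (inverse variance) weights, while the variance is corrected for overdispersion by adding the DerSimonian-Laird tau^2 to each study's variance inside the quadratic form. It is an illustration under those assumptions, not the epigear.com implementation.

```python
import math

def ivhet(effects, variances, z=1.96):
    """IVhet-style pooled estimate and confidence interval (sketch)."""
    k = len(effects)
    w = [1 / v for v in variances]  # fixed-effect weights are retained
    sw = sum(w)
    est = sum(wi * yi for wi, yi in zip(w, effects)) / sw

    # DerSimonian-Laird between-study variance tau^2
    q = sum(wi * (yi - est) ** 2 for wi, yi in zip(w, effects))
    tau2 = max(0.0, (q - (k - 1)) / (sw - sum(wi ** 2 for wi in w) / sw))

    # Overdispersion-corrected variance: the weights stay fixed-effect,
    # but each study's variance is inflated by tau^2.
    var = sum((wi / sw) ** 2 * (vi + tau2) for wi, vi in zip(w, variances))
    half = z * math.sqrt(var)
    return est, (est - half, est + half)

est, ci = ivhet([0.10, 0.50, 0.90], [0.01, 0.04, 0.25])
print(f"estimate {est:.3f}, 95% CI ({ci[0]:.3f}, {ci[1]:.3f})")
# estimate 0.202, 95% CI (-0.244, 0.649): the fixed-effect point estimate
# with an interval widened to allow for heterogeneity.
```

Note how the widening is achieved without touching the weights, which is precisely the separation of weighting and interval estimation argued for above.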


Acknowledgements

There are no conflicts of interest.


References

1. DerSimonian R, Laird N. Meta-analysis in clinical trials. Control Clin Trials 1986; 7:177-188.
2. Higgins JP, Thompson SG, Spiegelhalter DJ. A re-evaluation of random-effects meta-analysis. J R Stat Soc Ser A Stat Soc 2009; 172:137-159.
3. Sanchez-Padilla E, Grais RF, Guerin PJ, et al. Burden of disease and circulating serotypes of rotavirus infection in sub-Saharan Africa: systematic review and meta-analysis. Lancet Infect Dis 2009; 9:567-576.
4. Feigin VL, Lawes CM, Bennett DA, et al. Worldwide stroke incidence and early case fatality reported in 56 population-based studies: a systematic review. Lancet Neurol 2009; 8:355-369.
5. Doi SA, Barendregt JJ, Rao C. An updated method for risk adjustment in outcomes research. Value Health 2014; 17:629-633.
6. Cornell JE, Mulrow CD, Localio R, et al. Random-effects meta-analysis of inconsistent effects: a time for change. Ann Intern Med 2014; 160:267-270.
7. Overton RC. Comparison of fixed-effects and mixed (random-effects) models for meta-analysis tests of moderator variable effects. Psychol Methods 1998; 3:354-379.
8. Doi SA, Barendregt JJ, Khan S, et al. Advances in the meta-analysis of heterogeneous clinical trials I: the inverse variance heterogeneity model. Contemp Clin Trials 2015; doi:10.1016/j.cct.2015.05.009 [Online early].
9. Al Khalaf MM, Thalib L, Doi SA. Combining heterogenous studies using the random-effects model is a mistake and leads to inconclusive meta-analyses. J Clin Epidemiol 2011; 64:119-123.
10. Noma H. Confidence intervals for a random-effects meta-analysis based on Bartlett-type corrections. Stat Med 2011; 30:3304-3312.
11. Brockwell SE, Gordon IR. A simple method for inference on an overall effect in meta-analysis. Stat Med 2007; 26:4531-4543.
12. Stanley TD, Doucouliagos H. Neither fixed nor random: weighted least squares meta-analysis. Stat Med 2015; 34:2116-2127.
13. Cochran WG, Carroll SP. A sampling investigation of the efficiency of weighting inversely as the estimated variance. Biometrics 1953; 9:447-459.