In statistics, a meta-analysis is a type of systematic review that combines the results of several studies addressing a set of related research hypotheses. The first meta-analysis was performed by Karl Pearson in 1904, in an attempt to overcome the problem of reduced statistical power in studies with small sample sizes; analyzing the results from a group of studies together allows more accurate estimation of the underlying effect.
Although meta-analysis is widely used in epidemiology and evidence-based medicine today, a meta-analysis of a medical treatment was not published until 1955. In the 1970s, more sophisticated analytical techniques were introduced in educational research, starting with the work of Gene V. Glass, Frank L. Schmidt, and John E. Hunter.
The online Oxford English Dictionary lists the first usage of the term in the statistical sense as 1976 by Glass. The statistical theory surrounding meta-analysis was greatly advanced by the work of Nambury S. Raju, Larry V. Hedges, Ingram Olkin, John E. Hunter, and Frank L. Schmidt.
Uses in modern science
Because the results from different studies investigating different independent variables are measured on different scales, the dependent variable in a meta-analysis is some standardized measure of effect size. To describe the results of comparative experiments, the usual effect-size indicator is the standardized mean difference (d), the standard-score equivalent of the difference between means, or an odds ratio if the outcome of the experiments is a dichotomous variable (success versus failure). A meta-analysis can also be performed on studies that report their findings as correlation coefficients, for example studies of the correlation between familial relationships and intelligence. In these cases, the correlation itself is the indicator of effect size.
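As a minimal sketch of the two effect-size indicators mentioned above, the following computes a standardized mean difference (Cohen's d, with a pooled standard deviation) and an odds ratio. All trial numbers are hypothetical.

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference (d) using a pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

def odds_ratio(events_t, n_t, events_c, n_c):
    """Odds ratio for a dichotomous (success versus failure) outcome."""
    a, b = events_t, n_t - events_t  # treatment: successes, failures
    c, d = events_c, n_c - events_c  # control: successes, failures
    return (a * d) / (b * c)

# Hypothetical comparative trial: treatment mean 12 (SD 4, n=50) versus
# control mean 10 (SD 4, n=50); 30/50 versus 20/50 successes.
print(cohens_d(12, 4, 50, 10, 4, 50))  # 0.5
print(odds_ratio(30, 50, 20, 50))      # 2.25
```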
The method is not restricted to situations in which one or more variables is defined as "dependent." For example, a meta-analysis could be performed on a collection of studies each of which attempts to estimate the incidence of left-handedness in various groups of people.
Researchers should be aware that variations in sampling schemes can introduce heterogeneity into the results, meaning that more than one underlying effect is present in the data. For instance, if some studies used 30 mg of a drug and others used 50 mg, we would plausibly expect two clusters in the data, each varying around the mean effect of one dosage or the other. This can be modelled using a "random effects model."
Results from studies are combined using different approaches. One approach frequently used in meta-analysis in health care research is termed the 'inverse variance method'. The average effect size across all studies is computed as a weighted mean, whereby the weights are equal to the inverse variance of each study's effect estimate. Larger studies and studies with less random variation are given greater weight than smaller studies. Other common approaches include the Mantel-Haenszel method and the Peto method.
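The inverse variance method can be sketched in a few lines: each study is weighted by the reciprocal of its variance, so precise (usually large) studies dominate the pooled estimate. The effect sizes and standard errors below are hypothetical log odds ratios.

```python
# Fixed-effect inverse-variance pooling: weight each study by 1/variance.
effects = [0.30, 0.45, 0.10]   # per-study effect estimates (hypothetical)
ses     = [0.20, 0.10, 0.25]   # per-study standard errors (hypothetical)

weights = [1 / se**2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5   # SE of the pooled estimate

print(round(pooled, 3), round(pooled_se, 3))
```

Note how the second study, with the smallest standard error, pulls the pooled value toward its own estimate.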
A free Excel-based calculator for Mantel-Haenszel analysis is available at: http://www.pitt.edu/~super1/lecture/lec1171/014.htm.
The same site provides a free Excel-based Peto method calculator at: http://www.pitt.edu/~super1/lecture/lec1171/015.htm
Cochrane and other sources provide a useful discussion of the differences between these two approaches.
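For illustration, here is a minimal sketch of the Mantel-Haenszel pooled odds ratio across stratified 2x2 tables. The cell counts are hypothetical; each table is (a, b, c, d) = treated events, treated non-events, control events, control non-events.

```python
# Mantel-Haenszel pooled odds ratio:
#   OR_MH = sum(a*d / n) / sum(b*c / n), summed over the 2x2 tables.
tables = [
    (10, 90, 20, 80),    # hypothetical study 1 (n = 200)
    (15, 135, 30, 120),  # hypothetical study 2 (n = 300)
]

num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
or_mh = num / den
print(round(or_mh, 3))
```

In this contrived example both studies have the same odds ratio (about 0.444), so the pooled value simply reproduces it.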
Question: Why not just add up all the results across studies?
Answer: There is concern about Simpson's paradox.
Note, however, that Mantel-Haenszel analysis and Peto analysis introduce their own biases and distortions of the data.
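The Simpson's paradox concern can be made concrete with the classic kidney-stone data (Charig et al., 1986): treatment A outperforms B within every stone-size stratum, yet naively summing the raw counts across strata makes B look better overall.

```python
# Simpson's paradox: per-stratum (successes, n) for two treatments.
A = {"small": (81, 87), "large": (192, 263)}
B = {"small": (234, 270), "large": (55, 80)}

for stratum in ("small", "large"):
    rate_a = A[stratum][0] / A[stratum][1]
    rate_b = B[stratum][0] / B[stratum][1]
    assert rate_a > rate_b   # A wins in every stratum

overall_a = sum(s for s, n in A.values()) / sum(n for s, n in A.values())
overall_b = sum(s for s, n in B.values()) / sum(n for s, n in B.values())
print(round(overall_a, 2), round(overall_b, 2))  # 0.78 vs 0.83
assert overall_b > overall_a   # ...yet B "wins" the naive pooled comparison
```

The reversal happens because the harder (large-stone) cases were disproportionately assigned to treatment A; stratified methods such as Mantel-Haenszel avoid this trap.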
Modern meta-analysis does more than just combine the effect sizes of a set of studies. It can test if the studies' outcomes show more variation than the variation that is expected because of sampling different research participants. If that is the case, study characteristics such as measurement instrument used, population sampled, or aspects of the studies' design are coded. These characteristics are then used as predictor variables to analyze the excess variation in the effect sizes. Some methodological weaknesses in studies can be corrected statistically. For example, it is possible to correct effect sizes or correlations for the downward bias due to measurement error or restriction on score ranges.
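The correction for measurement error mentioned above can be sketched with Spearman's attenuation formula, as used in Hunter-Schmidt psychometric meta-analysis: the observed correlation is divided by the square root of the product of the two measures' reliabilities. The numbers below are hypothetical.

```python
import math

def disattenuate(r_obs, rel_x, rel_y):
    """Correct an observed correlation for measurement error in both
    variables (Spearman's attenuation formula)."""
    return r_obs / math.sqrt(rel_x * rel_y)

# Hypothetical: observed r = 0.30, reliabilities 0.80 and 0.72.
# Unreliable measures bias r downward, so the corrected value is larger.
print(round(disattenuate(0.30, 0.80, 0.72), 3))
```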
Meta-analysis leads to a shift of emphasis from single studies to multiple studies. It emphasises the practical importance of the effect size instead of the statistical significance of individual studies. This shift in thinking has been termed meta-analytic thinking.
The results of a meta-analysis are often shown in a forest plot.
Studies of automated literature searching show that the most important search concepts are the exposure (E) and the outcome (O) as stated in the study abstract.
Risk of bias in studies
Frameworks exist for assessing the quality of individual studies and groups of studies:
- Randomized controlled trials: Cochrane Risk of Bias tool
Assessing the quality of a trial from the published report alone may lead to inaccurate conclusions.
A weakness of the method is that sources of bias are not controlled by it: a good meta-analysis of badly designed studies will still result in bad statistics. Robert Slavin has argued that only methodologically sound studies should be included in a meta-analysis, a practice he calls 'best evidence meta-analysis'. Other meta-analysts would include weaker studies and add a study-level predictor variable reflecting the methodological quality of the studies, to examine the effect of study quality on the effect size. Another weakness of the method is its heavy reliance on published studies, which may inflate the pooled effect, as it is very hard to publish studies that show no significant results. This publication bias or "file-drawer effect" (non-significant studies end up in the desk drawer instead of in the public domain) should be seriously considered when interpreting the outcomes of a meta-analysis. Because of the risk of publication bias, many meta-analyses now include a "failsafe N" statistic: the number of null-result studies that would need to be added to the meta-analysis for the effect to no longer be statistically reliable.
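One common version of the failsafe N is Rosenthal's: given the per-study z statistics, it estimates how many unpublished null studies would be needed to push the combined one-tailed p-value above .05. The z-scores below are hypothetical.

```python
# Rosenthal's fail-safe N: N_fs = (sum of z)^2 / z_alpha^2 - k,
# where k is the number of studies and z_alpha is the one-tailed
# 5% critical value (1.645).
z_scores = [2.1, 1.8, 2.5, 1.6]   # hypothetical per-study z statistics
k = len(z_scores)
z_alpha = 1.645

n_fs = (sum(z_scores) ** 2) / z_alpha ** 2 - k
print(round(n_fs, 1))
```

A small failsafe N relative to the number of included studies suggests the pooled effect could easily be an artifact of the file drawer.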
Small study effect and publication bias
The small-study effect is the observation that small studies tend to report more positive results. This is especially a threat when the original studies in a meta-analysis enroll fewer than 50 patients each.
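A standard way to probe for small-study effects is Egger's regression test: regress each study's standardized effect (effect/SE) on its precision (1/SE); an intercept well away from zero indicates funnel-plot asymmetry. The sketch below uses hypothetical data in which the small (high-SE) studies report larger effects.

```python
# Egger's regression test sketch (hypothetical data).
effects = [0.8, 0.6, 0.5, 0.3, 0.25]
ses     = [0.40, 0.30, 0.25, 0.12, 0.10]

x = [1 / se for se in ses]                    # precision
y = [e / se for e, se in zip(effects, ses)]   # standardized effect
n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n

# Ordinary least squares of y on x.
slope = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / \
        sum((xi - mean_x) ** 2 for xi in x)
intercept = mean_y - slope * mean_x   # Egger's bias statistic

print(round(intercept, 3), round(slope, 3))
```

Here the intercept comes out clearly positive, consistent with the small studies reporting inflated effects; a full implementation would also compute a significance test for the intercept.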
Measuring consistency of study results
Consistency can be statistically tested using Cochran's Q or the I² statistic. I² is the "percentage of total variation across studies that is due to heterogeneity rather than chance." These numbers are usually displayed for each group of studies on a forest plot.
The following benchmarks have been proposed for interpreting I²:
- I² = 25%: low heterogeneity
- I² = 50%: moderate heterogeneity
- I² = 75%: high heterogeneity
The Cochrane Handbook instead suggests deliberately overlapping ranges:
- 0%-40%: might not be important
- 30%-60%: may represent moderate heterogeneity
- 50%-90%: may represent substantial heterogeneity
- 75%-100%: may represent considerable heterogeneity
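Both statistics can be computed directly from the study effects and standard errors, as in this sketch (all numbers hypothetical): Q is the weighted sum of squared deviations from the fixed-effect pooled estimate, and I² is the share of Q in excess of its degrees of freedom.

```python
# Cochran's Q and I² for a set of hypothetical study effects.
effects = [0.8, 0.2, -0.1]
ses     = [0.20, 0.15, 0.25]

weights = [1 / se**2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
df = len(effects) - 1
i2 = max(0.0, (q - df) / q) * 100   # % of variation beyond chance

print(round(q, 3), round(i2, 1))
```

For these deliberately scattered effects, I² lands near 78%, in the "high"/"substantial" band of the interpretation guides above.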
Statistical methods exist for assessing the importance of subgroups.
Types of meta-analyses
A conventional meta-analysis is also called a pairwise meta-analysis.
Network meta-analyses, often implemented with Bayesian hierarchical models, pool studies in order to compare treatments that have not been directly compared against each other. Network meta-analyses are commonly not well performed, and network meta-analyses of both randomized controlled trials and diagnostic test assessments can have misleading results. Network meta-analyses have been conducted by the Cochrane Collaboration.
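The simplest building block of such indirect comparisons is Bucher's method: if trials compared B against A and C against A, the B-versus-C contrast is estimated by differencing the two, with the variances adding. The log odds ratios and standard errors below are hypothetical.

```python
import math

# Bucher adjusted indirect comparison via a common comparator A.
d_ab, se_ab = -0.40, 0.15   # hypothetical log OR, B vs A
d_ac, se_ac = -0.10, 0.20   # hypothetical log OR, C vs A

d_bc = d_ab - d_ac                          # indirect B-vs-C estimate
se_bc = math.sqrt(se_ab**2 + se_ac**2)      # variances add

print(round(d_bc, 3), round(se_bc, 3))
```

Note that the indirect estimate is necessarily less precise than either direct comparison, which is one reason poorly conducted network meta-analyses can mislead.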
Network meta-analyses may be performed within either a frequentist or a Bayesian statistical framework.
Individual patient data meta-analysis
Individual patient data meta-analysis can be done with a one-stage or a two-stage approach. The two-stage approach, which does not require true pooling of individual patient data, can yield very similar results when confounders or effect modifiers are statistically controlled for.
Narrative or qualitative data
- Classic Glaserian grounded theory. This method may have difficulty when the goal is explication.
- Straussian grounded theory. This approach may be easiest for new projects.
- Constructivist grounded theory
Problems with meta-analyses
Obsolescence and duplications
The conclusions of meta-analyses may be undermined by research published after the search date of the meta-analysis; this can happen even before the meta-analysis itself appears in print. Strategies have been developed for updating meta-analyses.
Meta-analyses may also be redundant.
Related topics
- Epidemiologic methods
- Educational psychology
- Fisher's method for combining independent tests of significance
- Galbraith plot
- Selection bias
- Simpson's paradox
- Study heterogeneity
- Systematic review
- Metaanalytic thinking
- Meta, the word or prefix
External links
- Effect Size and Meta-Analysis
- Effect Size and Meta-Analysis Software
- Introduction to meta-analysis for systematic reviewers
- Meta-Analysis: Methods of Accumulating Results Across Research Domains
- Meta-analysis blog
- Meta-Analysis in Educational Research
- Meta-Analysis Research on Science Instruction
- The Cochrane Library
- What is meta-analysis? (Hayward Medical Communications)
- MIX: Free Software for Meta-analysis of Causal Research Data
- Psychwiki.com article on meta-analysis