Publication bias

Publication bias arises from the tendency of researchers and editors to handle positive experimental results (they found something) differently from negative results (they found that something did not happen) or inconclusive ones.

Publication bias has been documented to occur in studies of medical interventions.[1] Publication bias, or the related outcome reporting bias (see below), may occur in 25%[2] to 60% of some types of articles.[3][4][5]

Publication bias may also occur in studies of diagnostic tests.[6] Publication bias may be more of a problem in diagnostic test research than in randomized controlled trials because studies of diagnostic tests can be secondary analyses of databases and do not have to be registered prior to publication.[7]

There may be publication bias in studies of publication bias.[8]

Definition

"Publication bias occurs when the publication of research results depends on their nature and direction."[9]

Positive results bias, a type of publication bias, occurs when authors are more likely to submit, or editors to accept, positive rather than null (negative or inconclusive) results.[10] A related term, "the file drawer problem", refers to the tendency for those negative or inconclusive results to remain hidden and unpublished.[11] Even a small number of studies lost in the file drawer can produce a significant bias.[12]

Selective reporting bias

Selective reporting bias, or outcome reporting bias, occurs when several outcomes within a trial are measured but these are reported selectively depending on the strength and direction of those results.[13]

Selective reporting also occurs when "publications ignored the results of intention to treat analyses and reported the more favourable per protocol analyses only."[14]

Related terms that have been coined are p-hacking[15] and HARKing (Hypothesizing After the Results are Known).[16]

Omitted-variable bias occurs when an adjusting variable that has its own effect on the dependent variable and is correlated with the variable of interest is excluded from the regression.[17]

Industry publications may be more likely to interpret their results as favorable to their product even when analyses of the primary outcome are less supportive.[18]

For example, skeptics often argue that there is (or at least was) a strong publication bias in the field of parapsychology, leading to a file drawer problem.

Small study effect

Closely related is the small study effect: the observation that small studies tend to report more positive results.[19][20] This is a particular threat when the original studies in a meta-analysis enroll fewer than 50 patients each.[21]

Examples

Suppose that several studies of the influence of power lines on cancer are performed, and that they are accepted for publication only if they show a correlation at the 95% confidence level. If only the positive results reach publication because negative results are shelved, we cannot know how many studies were performed, so it is possible that every published result is a type I error.
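Under the stated assumptions (a true null effect everywhere, publication only at p < .05), a short simulation makes the point concrete; the sample size, number of studies, and threshold below are illustrative choices, not values from any cited study:

```python
import random
import statistics

random.seed(0)  # reproducible illustration

def run_study(n=50):
    """Simulate one study of a true-null effect and return its two-sided
    p-value from a z-test on the sample mean (known sd = 1, true mean = 0)."""
    sample = [random.gauss(0, 1) for _ in range(n)]
    z = statistics.mean(sample) / (1 / n ** 0.5)
    return 2 * (1 - statistics.NormalDist().cdf(abs(z)))

all_p = [run_study() for _ in range(1000)]
published = [p for p in all_p if p < 0.05]  # journals accept only p < .05

# Roughly 5% of the null studies clear the bar, and every one of those
# published results is a type I error.
print(f"{len(published)} of {len(all_p)} studies published; all false positives")
```

Reading only the published subset, one would see a shelf of uniformly "significant" findings with no hint of the shelved negative studies behind them.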

Detection

P-curves may[15] or may not[17] be able to detect p-hacking.
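P-curve analysis inspects the distribution of the significant p-values in a literature: a true effect produces a right-skewed curve (many very small p-values), while selective reporting toward p = .05 flattens it. A deliberately crude sketch of that intuition (not Simonsohn et al.'s full procedure; the p-values are invented for illustration):

```python
import math

def p_curve_right_skew(p_values):
    """Crude p-curve check: among significant results, a true effect should
    put more p-values below .025 than between .025 and .05.  Returns the
    one-sided binomial probability of seeing at least this many 'low'
    p-values if low and high were equally likely (p = 0.5)."""
    sig = [p for p in p_values if p < 0.05]
    low = sum(1 for p in sig if p < 0.025)
    n = len(sig)
    # P(X >= low) for X ~ Binomial(n, 0.5)
    return sum(math.comb(n, k) for k in range(low, n + 1)) / 2 ** n

# Evidential value: mostly tiny p-values -> right-skewed curve, small probability
print(p_curve_right_skew([0.001, 0.003, 0.004, 0.01, 0.02, 0.04]))
# Flat curve clustered just under .05 (consistent with p-hacking) -> large probability
print(p_curve_right_skew([0.031, 0.038, 0.042, 0.045, 0.048, 0.049]))
```

With only six studies the evidence is weak either way, which mirrors the cited disagreement over whether p-curves can reliably detect p-hacking in practice.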

The caliper test may be able to detect publication bias.[22]
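The caliper test compares how many reported test statistics fall just above versus just below the conventional significance threshold; absent publication bias the two counts should be similar. A minimal sketch with invented z-scores (the caliper width of 0.15 is an illustrative choice):

```python
def caliper_test(z_scores, threshold=1.96, caliper=0.15):
    """Caliper test sketch: count z-statistics in a narrow band just above
    and just below the significance threshold.  A large surplus just above
    the threshold suggests selective publication of significant results."""
    over = sum(1 for z in z_scores if threshold <= z < threshold + caliper)
    under = sum(1 for z in z_scores if threshold - caliper <= z < threshold)
    return over, under

# Hypothetical literature in which z-scores cluster just past 1.96
zs = [1.97, 1.98, 2.0, 2.05, 2.08, 1.90, 1.99, 2.01]
over, under = caliper_test(zs)
print(over, under)  # prints: 7 1
```

A binomial test on the two counts then gives the probability of an imbalance this extreme arising by chance.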

Effect on meta-analysis

The effect of this is that published studies may not be truly representative of all valid studies undertaken, and this bias may distort meta-analyses and systematic reviews of large numbers of studies, on which evidence-based medicine, for example, increasingly relies. The problem may be particularly significant when the research is sponsored by entities that have a financial interest in obtaining favourable results.

Those undertaking meta-analyses and systematic reviews need to account for publication bias in the methods they use to identify the studies included in the review. Among other techniques to minimise its effects, they may need to perform a thorough search for unpublished studies and to use analytical tools such as a funnel plot to assess the likely extent of bias.
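Funnel-plot asymmetry can be quantified with Egger's regression test, which regresses each study's standardized effect on its precision; an intercept far from zero indicates the small-study asymmetry that publication bias produces. A minimal sketch with invented effect sizes and standard errors:

```python
def egger_intercept(effects, std_errors):
    """Sketch of Egger's regression test: regress the standardized effect
    (effect / SE) on precision (1 / SE) by ordinary least squares and
    return the intercept.  A symmetric funnel yields an intercept near 0."""
    y = [e / se for e, se in zip(effects, std_errors)]
    x = [1 / se for se in std_errors]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx  # the regression intercept

# Symmetric funnel: the same effect at every precision -> intercept near 0
print(egger_intercept([0.5, 0.5, 0.5, 0.5], [0.1, 0.2, 0.4, 0.8]))
# Asymmetric funnel: small (high-SE) studies report inflated effects -> intercept well above 0
print(egger_intercept([0.5, 0.6, 0.9, 1.5], [0.1, 0.2, 0.4, 0.8]))
```

In practice the intercept is reported with a standard error and significance test, and a funnel plot is inspected visually alongside it.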

Possible examples

An example of probable publication bias is in the studies of glucosamine and chondroitin for treatment of osteoarthritis. In an initial meta-analysis, the authors noted evidence of publication bias during examination of the results.[23] A subsequent large randomized controlled trial[24] and meta-analyses including the large trial were negative.[25][26]

Another example is the selective publication of randomized controlled trials of antidepressants[27] or of positive trials in general.[28]

One study[29] compared Chinese and non-Chinese studies of gene-disease associations and found that "Chinese studies in general reported a stronger gene-disease association and more frequently a statistically significant result".[30] One possible interpretation of this result is selective publication (publication bias).

Ioannidis has inventoried factors that should alert readers to the risk of publication bias.[31]

Prevention

Study registration

In September 2004, editors of several prominent medical journals (including the New England Journal of Medicine, The Lancet, Annals of Internal Medicine, and JAMA) announced that they would no longer publish results of research unless that research was registered in a public database from the start.[32] In this way, negative results should no longer be able to disappear.

Independent data analysis

Constraints placed on authors by industry sponsors are common.[33]

In 2002, the Journal of the American Medical Association (JAMA) added to its instructions for authors: "For industry-sponsored research studies, an investigator who is not an employee of the sponsoring company, and who ideally is the principal investigator, should provide a statement that he or she 'had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analyses.'"[34]

In 2010, JAMA added "Moreover, in industry-sponsored studies, data collection and data management should be conducted primarily or solely by the academic investigators independently of the study sponsor or other for-profit research organization, and with additional monitoring and oversight, such as under the auspices of an academic independent data and safety monitoring committee."[35]

In 2012, JAMA stated that for "all reports containing original data, regardless of funding source, analysis of the data must be conducted by a statistician at an academic institution, rather than by statisticians employed by the sponsor or by a commercial contract research organization. The biostatistician should have full access to the entire raw data set and must be a faculty member at a medical school or academic center (such as a university) or an employee of a government research institute, such that the academic organization has oversight over the person conducting the analysis".[36]

This policy resulted in JAMA publishing fewer trials overall and fewer industry-funded trials.[37]

Accordingly, the SPIRIT guidelines for trial protocols include[38]:

  • Item 29: Statement of who will have access to the final trial dataset, and disclosure of contractual agreements that limit such access for investigators

Accordingly, the International Committee of Medical Journal Editors (ICMJE) has[39]:

  • Recommended that "Authors should avoid entering into agreements with study sponsors, both for-profit and nonprofit, that interfere with authors’ access to all of the study’s data or that interfere with their ability to analyze and interpret the data and to prepare and publish manuscripts independently when and where they choose".[40]
  • Suggested that journals add requests of their authors to state “I had full access to all of the data in this study and I take complete responsibility for the integrity of the data and the accuracy of the data analysis.”

However, most medical schools do not examine contracts between academics and industry[41].

Academics report conflicts with industry sponsors and lack of access to trial data.[42][43]

References

  1. Dickersin K, Min YI, Meinert CL (1992). "Factors influencing publication of research results. Follow-up of applications submitted to two institutional review boards". JAMA. 267 (3): 374–8. PMID 1727960.
  2. Turner, Erick H.; et al. (2012-03-20). "Publication Bias in Antipsychotic Trials: An Analysis of Efficacy Comparing the Published Literature to the US Food and Drug Administration Database". PLoS Med. 9 (3): e1001189. doi:10.1371/journal.pmed.1001189. Retrieved 2012-03-21.
  3. Eyding D, Lelgemann M, Grouven U, Härter M, Kromp M, Kaiser T; et al. (2010). "Reboxetine for acute treatment of major depression: systematic review and meta-analysis of published and unpublished placebo and selective serotonin reuptake inhibitor controlled trials". BMJ. 341: c4737. doi:10.1136/bmj.c4737. PMID 20940209.
  4. Decullier E, Lhéritier V, Chapuis F (2005). "Fate of biomedical research protocols and publication bias in France: retrospective cohort study". BMJ. 331 (7507): 19. doi:10.1136/bmj.38488.385995.8F. PMC 558532. PMID 15967761.
  5. Chan AW, Krleza-Jerić K, Schmid I, Altman DG (2004). "Outcome reporting bias in randomized trials funded by the Canadian Institutes of Health Research". CMAJ. 171 (7): 735–40. doi:10.1503/cmaj.1041086. PMC 517858. PMID 15451835.
  6. Owens DK, Holodniy M, Garber AM; et al. (1996). "Polymerase chain reaction for the diagnosis of HIV infection in adults. A meta-analysis with recommendations for clinical practice and study design". Ann. Intern. Med. 124 (9): 803–15. PMID 8610949.
  7. Irwig L, Macaskill P, Glasziou P, Fahey M (1995). "Meta-analytic methods for diagnostic test accuracy". J Clin Epidemiol. 48 (1): 119–30, discussion 131–2. PMID 7853038.
  8. Dubben HH, Beck-Bornholdt HP (2005). "Systematic review of publication bias in studies on publication bias". BMJ. 331 (7514): 433–4. doi:10.1136/bmj.38478.497164.F7. PMC 1188109. PMID 15937056.
  9. K. Dickersin (1990). "The existence of publication bias and risk factors for its occurrence". JAMA. 263 (10): 1385–1389. PMID 2406472.
  10. D. L. Sackett (1979). "Bias in analytic research". J Chronic Dis. 32 (1–2): 51–63. PMID 447779.
  11. Robert Rosenthal (1979). "The file drawer problem and tolerance for null results". Psychological Bulletin. 86 (3): 638–641.
  12. Jeffrey D. Scargle (2000). "Publication Bias: The "File-Drawer Problem" in Scientific Inference". Journal of Scientific Exploration. 14 (2): 94–106.
  13. Chang L, Dhruva SS, Chu J, Bero LA, Redberg RF (2015). "Selective reporting in trials of high risk cardiovascular devices: cross sectional comparison between premarket approval summaries and published reports". BMJ. 350: h2613. doi:10.1136/bmj.h2613. PMC 4462712. PMID 26063311.
  14. Melander H, Ahlqvist-Rastad J, Meijer G, Beermann B (2003). "Evidence b(i)ased medicine--selective reporting from studies sponsored by pharmaceutical industry: review of studies in new drug applications". BMJ. 326 (7400): 1171–3. doi:10.1136/bmj.326.7400.1171. PMC 156459. PMID 12775615.
  15. Simonsohn U, Nelson LD, Simmons JP (2014). "P-curve: a key to the file-drawer". J Exp Psychol Gen. 143 (2): 534–47. doi:10.1037/a0033242. PMID 23855496.
  16. Kerr NL (1998). "HARKing: hypothesizing after the results are known". Pers Soc Psychol Rev. 2 (3): 196–217. doi:10.1207/s15327957pspr0203_4. PMID 15647155.
  17. Bruns SB, Ioannidis JP (2016). "p-Curve and p-Hacking in Observational Research". PLoS One. 11 (2): e0149144. doi:10.1371/journal.pone.0149144. PMC 4757561. PMID 26886098.
  18. Djulbegovic B, Kumar A, Miladinovic B, Reljic T, Galeb S, Mhaskar A; et al. (2013). "Treatment success in cancer: industry compared to publicly sponsored randomized controlled trials". PLoS One. 8 (3): e58711. doi:10.1371/journal.pone.0058711. PMC 3605423. PMID 23555593. (See Figure 2)
  19. Nüesch E, Trelle S, Reichenbach S, Rutjes AW, Tschannen B, Altman DG; et al. (2010). "Small study effects in meta-analyses of osteoarthritis trials: meta-epidemiological study". BMJ. 341: c3515. doi:10.1136/bmj.c3515. PMC 2905513. PMID 20639294.
  20. Sterne JA, Egger M, Smith GD (2001). "Systematic reviews in health care: Investigating and dealing with publication and other biases in meta-analysis". BMJ. 323 (7304): 101–5. PMC 1120714. PMID 11451790.
  21. F. Richy, O. Ethgen, O. Bruyere, F. Deceulaer & J. Reginster : From Sample Size to Effect-Size: Small Study Effect Investigation (SSEi) . The Internet Journal of Epidemiology. 2004 Volume 1 Number 2
  22. Gerber AS, Malhotra N (2008). "Publication bias in empirical sociological research: Do arbitrary significance levels distort published results?". Sociological Methods & Research. 37 (1): 3–30. doi:10.1177/0049124108318973.
  23. McAlindon TE, LaValley MP, Gulin JP, Felson DT (2000). "Glucosamine and chondroitin for treatment of osteoarthritis: a systematic quality assessment and meta-analysis". JAMA. 283 (11): 1469–75. PMID 10732937.
  24. Clegg DO, Reda DJ, Harris CL; et al. (2006). "Glucosamine, chondroitin sulfate, and the two in combination for painful knee osteoarthritis". N. Engl. J. Med. 354 (8): 795–808. doi:10.1056/NEJMoa052771. PMID 16495392.
  25. Vlad SC, LaValley MP, McAlindon TE, Felson DT (2007). "Glucosamine for pain in osteoarthritis: why do trial results differ?". Arthritis Rheum. 56 (7): 2267–77. doi:10.1002/art.22728. PMID 17599746.
  26. Reichenbach S, Sterchi R, Scherer M; et al. (2007). "Meta-analysis: chondroitin for osteoarthritis of the knee or hip". Ann. Intern. Med. 146 (8): 580–90. PMID 17438317.
  27. Turner EH, Matthews AM, Linardatos E, Tell RA, Rosenthal R (2008). "Selective publication of antidepressant trials and its influence on apparent efficacy". N. Engl. J. Med. 358 (3): 252–60. doi:10.1056/NEJMsa065779. PMID 18199864.
  28. Bourgeois FT, Murthy S, Mandl KD (2010). "Outcome reporting among drug trials registered in ClinicalTrials.gov". Ann Intern Med. 153 (3): 158–66. doi:10.1059/0003-4819-153-3-201008030-00006. PMID 20679560.
  29. Zhenglun Pan, Thomas A. Trikalinos, Fotini K. Kavvoura, Joseph Lau, John P.A. Ioannidis, "Local literature bias in genetic epidemiology: An empirical evaluation of the Chinese literature". PLoS Medicine, 2(12):e334, 2005 December.
  30. Jin Ling Tang, "Selection Bias in Meta-Analyses of Gene-Disease Associations", PLoS Medicine, 2(12):e409, 2005 December.
  31. Ioannidis J (2005). "Why most published research findings are false". PLoS Med. 2 (8): e124. doi:10.1371/journal.pmed.0020124. PMID 16060722.
  32. (The Washington Post) (2004-09-10). "Medical journal editors take hard line on drug research". smh.com.au. Retrieved 2008-02-03.
  33. Gøtzsche PC, Hróbjartsson A, Johansen HK, Haahr MT, Altman DG, Chan AW (2006). "Constraints on publication rights in industry-initiated clinical trials". JAMA. 295 (14): 1645–6. doi:10.1001/jama.295.14.1645. PMID 16609085.
  34. DeAngelis CD, Fontanarosa PB, Flanagin A (2001). "Reporting financial conflicts of interest and relationships between investigators and research sponsors". JAMA. 286 (1): 89–91. doi:10.1001/jama.286.1.89. PMID 11434832.
  35. DeAngelis CD, Fontanarosa PB (2010). "Ensuring integrity in industry-sponsored research: primum non nocere, revisited". JAMA. 303 (12): 1196–8. doi:10.1001/jama.2010.337. PMID 20332409.
  36. Bauchner, Howard; Fontanarosa, Phil B. (2012). "Update on JAMA's Policies on Conflicts of Interest, Trial Registration, Embargo, and Data Timeliness, Access, and Analysis". JAMA. 308 (2): 186. doi:10.1001/jama.2012.7926. ISSN 0098-7484.
  37. Wager E, Mhaskar R, Warburton S, Djulbegovic B (2010). "JAMA published fewer industry-funded studies after introducing a requirement for independent statistical analysis". PLoS One. 5 (10): e13591. doi:10.1371/journal.pone.0013591. PMC 2962640. PMID 21042585.
  38. Chan AW, Tetzlaff JM, Gøtzsche PC, Altman DG, Mann H, Berlin JA; et al. (2013). "SPIRIT 2013 explanation and elaboration: guidance for protocols of clinical trials". BMJ. 346: e7586. doi:10.1136/bmj.e7586. PMC 3541470. PMID 23303884.
  39. "Uniform requirements for manuscripts submitted to biomedical journals: Writing and editing for biomedical publication". J Pharmacol Pharmacother. 1 (1): 42–58. 2010. PMC 3142758. PMID 21808590.
  40. International Committee of Medical Journal Editors (2019). Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals. Available at http://www.icmje.org/icmje-recommendations.pdf
  41. Mello MM, Murtagh L, Joffe S, Taylor PL, Greenberg Y, Campbell EG (2018). "Beyond financial conflicts of interest: Institutional oversight of faculty consulting agreements at schools of medicine and public health". PLoS One. 13 (10): e0203179. doi:10.1371/journal.pone.0203179. PMC 6205599. PMID 30372431.
  42. Rasmussen K, Bero L, Redberg R, Gøtzsche PC, Lundh A (2018). "Collaboration between academics and industry in clinical trials: cross sectional study of publications and survey of lead academic authors". BMJ. 363: k3654. doi:10.1136/bmj.k3654. PMC 6169401. PMID 30282703.
  43. Kasenda B, von Elm E, You JJ, Blümle A, Tomonaga Y, Saccilotto R; et al. (2016). "Agreements between Industry and Academia on Publication Rights: A Retrospective Study of Protocols and Publications of Randomized Clinical Trials". PLoS Med. 13 (6): e1002046. doi:10.1371/journal.pmed.1002046. PMC 4924795. PMID 27352244.
