Recent meta-analyses neglect previous systematic reviews and meta-analyses about the same topic: a systematic examination

Abstract

Background

As the number of systematic reviews is growing rapidly, we systematically investigate whether meta-analyses published in leading medical journals present an outline of available evidence by referring to previous meta-analyses and systematic reviews.

Methods

We searched PubMed for recent meta-analyses of pharmacological treatments published in high impact factor journals. Previous systematic reviews and meta-analyses were identified with electronic searches of keywords and by searching reference sections. We analyzed the number of meta-analyses and systematic reviews that were cited, described and discussed in each recent meta-analysis. Moreover, we investigated publication characteristics that potentially influence the referencing practices.

Results

We identified 52 recent meta-analyses and 242 previous meta-analyses on the same topics. Of these, 66% of identified previous meta-analyses were cited, 36% described, and only 20% discussed by recent meta-analyses. The probability of citing a previous meta-analysis was positively associated with its publication in a journal with a higher impact factor (odds ratio, 1.49; 95% confidence interval, 1.06 to 2.10) and more recent publication year (odds ratio, 1.19; 95% confidence interval 1.03 to 1.37). Additionally, the probability of a previous study being described by the recent meta-analysis was inversely associated with the concordance of results (odds ratio, 0.38; 95% confidence interval, 0.17 to 0.88), and the probability of being discussed was increased for previous studies that employed meta-analytic methods (odds ratio, 32.36; 95% confidence interval, 2.00 to 522.85).

Conclusions

Meta-analyses on pharmacological treatments do not consistently refer to and discuss findings of previous meta-analyses on the same topic. Such neglect can lead to research waste and be confusing for readers. Journals should make the discussion of related meta-analyses mandatory.


Background

Systematic reviews and meta-analyses represent a high level of evidence and are invaluable to health professionals in synthesizing the results of medical research [1]. The number of systematic reviews is growing rapidly - in 2010 approximately 11 such studies were published per day, which corresponds to the number of randomized controlled trials (RCTs) published three decades ago [2]. Because of this exponential growth in publication rates, many meta-analysis authors may not discuss the results of previous meta-analyses and systematic reviews on the same topic - in a manner analogous to authors of RCTs not referring to a substantial portion of other relevant RCTs [3] or systematic reviews [4,5]. This can be very confusing for readers and cause waste in research resources [6], including waste in study planning [7], design, and conduct [8] as well as leading to unnecessary duplications [9] and incomplete reporting [10,11].

To grasp the importance of such neglect, imagine clinicians seeking a treatment solution for a patient’s specific medical problem. They find two similar meta-analyses with discordant results. If the newer article does not refer to the older one, the readers are given no explanation of the possible reasons for this discrepancy. Which article should they trust more? Their uncertainty is higher than before they read these authoritative articles, and an evidence-based decision about their patient’s treatment becomes even more difficult. Not referring to important related research also runs against the principles of evidence-based medicine, because meta-analysts agree that all available evidence should be systematically searched and reviewed in an unbiased manner [12].

Moreover, as for all types of research, the question a meta-analysis is trying to answer should be relevant [7-9]. If the question has already been answered in a previous meta-analysis, the authors should clearly justify why they decided to perform a similar analysis again. Is it a replication, an update, or maybe just an unnecessary duplication?

To provide patients, clinicians, and policymakers with the most useful information about a clinical question, meta-analyses should not neglect previous systematic reviews about the same topic. This will not only help to provide a more complete understanding of the clinical problem, but also to avoid research waste and biased results.

We report a systematic investigation of whether recent meta-analyses published in the leading medical journals cite, describe, and discuss previous meta-analyses and systematic reviews on the same topic. We also analyze factors that are likely to be associated with this phenomenon.

Methods

First, we identified a sample of recent meta-analyses, then for each included recent article we performed a separate systematic search to find similar previous meta-analyses and systematic reviews. Our goal was to estimate what proportion of the previous meta-analyses and systematic reviews was cited, described, and discussed by the recent meta-analyses. We also investigated potential predictors of citing, describing, and discussing. We initially published a protocol at our institutional website [13].

Selection of the recent meta-analyses

We searched PubMed, combining the names of the six general medical journals with the highest impact factors according to Journal Citation Reports, 2013 edition (New England Journal of Medicine, The Lancet, JAMA: The Journal of the American Medical Association, Annals of Internal Medicine, PLOS Medicine, British Medical Journal) with ‘meta-analysis’ as publication type (see Additional file 1). To produce a more homogeneous sample we only included meta-analyses on pharmacological treatments. The original search was completed in March 2013. Because we aimed to include at least 50 published meta-analyses, we expanded the search back to January 2012 to meet this criterion.
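The exact search strategy is given in Additional file 1. For illustration only, a PubMed query of roughly the following form (the journal abbreviations and date limits are our assumptions, not the authors’ published strategy) combines the six journals with the publication-type filter:

    ("N Engl J Med"[ta] OR "Lancet"[ta] OR "JAMA"[ta] OR "Ann Intern Med"[ta]
     OR "PLoS Med"[ta] OR "BMJ"[ta])
    AND meta-analysis[pt]
    AND ("2012/01/01"[dp] : "2013/03/31"[dp])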

We then systematically assessed how these recent meta-analyses referred to previous meta-analyses and systematic reviews on the same topic.

Selection of the previous meta-analyses and systematic reviews

For each recent meta-analysis we searched PubMed for previous meta-analyses and systematic reviews on the same topic (unlike the recent articles, previous studies also included systematic reviews without meta-analysis), combining the keywords provided by the recent article with ‘meta-analysis’ or ‘systematic review’ as publication type (see Additional file 2). The keywords were based on the characteristics of the participating population and the intervention(s) used. The reference lists of all included studies were also screened. We compared PICO questions between the recent and the previous articles to make sure that they focused on a similar group of participants (P) and used similar interventions (I), comparators (C), and outcomes (O) [14]. Previous articles in which any of the PICO questions was completely different from the corresponding question in the recent article were excluded. Additionally, for each included previous article we calculated a ‘similarity score’: for each of the four PICO questions, one point was given if the question was identical to the corresponding question in the recent article, and zero points were given if the two questions only overlapped, that is, if the criteria were only partially similar (for example, when the recent article used multiple outcomes and the previous study employed only some of them); in the latter case the study received zero points for that question but was not excluded. For more details and examples of this similarity score see Additional file 3. We also excluded articles published more than 10 years before or less than 1 year before the publication of the recent meta-analysis, unless they were cited in the recent meta-analysis. This criterion ensured that we did not analyze outdated material and also does justice to the fact that the publication process can take a long time.
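The scoring rule above lends itself to a very small helper function. The sketch below, written in R (the analysis software named later in the Methods), is purely illustrative: the function name and the coding of PICO comparisons as ‘identical’, ‘overlapping’, or ‘different’ are our assumptions, not part of the study protocol.

    # Score a previous review against the recent meta-analysis.
    # Each PICO element is coded relative to the recent article as
    # "identical", "overlapping", or "different" (illustrative coding).
    pico_similarity <- function(population, intervention, comparator, outcome) {
      codes <- c(population, intervention, comparator, outcome)
      stopifnot(all(codes %in% c("identical", "overlapping", "different")))
      if (any(codes == "different")) {
        return(NA_integer_)  # article excluded: at least one PICO element is completely different
      }
      sum(codes == "identical")  # 1 point per identical element, 0 for overlapping; range 0 to 4
    }

    # Example: identical P, I and C, only overlapping outcomes -> similarity score 3
    pico_similarity("identical", "identical", "identical", "overlapping")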

Statistical analysis of predictors

Our primary aim was to estimate what proportion of the previous meta-analyses and systematic reviews was cited (that is, whether a reference to the previous article was provided by the recent study), described (that is, whether any information about the results of the previous article was given), and discussed (that is, whether the results from the previous article were related to the results or conclusions of the recent study) by the recent meta-analysis. Table 1 provides specific examples for each definition.

Table 1 Cited versus described versus discussed: definitions and examples

We also investigated potential predictors of citing, describing, and discussing previous meta-analyses and systematic reviews by recent meta-analyses using mixed-effects logistic regression analysis in R.

Recent article-specific predictors included: journal title, medical discipline, journal impact factor (based on Journal Citation Reports, 2013 edition), and quality of the systematic review as measured with the AMSTAR score (a measurement tool for the assessment of the methodological quality of systematic reviews) [21].

Previous article-specific predictors included: level of similarity of the review question (based on comparison of PICO questions between recent and previous article), journal impact factor, publication year, article type (systematic review using meta-analytic methods versus not), and concordance of results between recent and previous articles (similar results versus different results). Results were judged as ‘similar’ when the direction of the effect was the same, irrespective of the effect size. In general, concordance of results was based on the major findings of the study (primary outcome, if possible) and, if necessary, particular results and conclusions were compared, including strength of evidence. In the case of previous systematic reviews without meta-analysis, concordance was based on the main message of the paper, that is, the authors’ summary and/or conclusions (illustrative examples are presented in Additional file 4). To avoid confusion we emphasize that the term ‘similarity’ refers to a comparison of the recent and previous articles in terms of the review questions, whereas the term ‘concordance’ refers to a comparison of recent and previous article results.
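The paper states that the regression was run in R but does not show the model call. A minimal sketch is given below, assuming one row per recent/previous article pair and a random intercept for each recent meta-analysis; the data frame and column names are illustrative and not taken from the study.

    library(lme4)  # provides glmer() for mixed-effects logistic regression

    # 'dat' is assumed to contain one row per recent/previous article pair:
    # cited (0/1), prev_if, prev_year, prev_is_ma (0/1), concordant (0/1),
    # similarity (0-4), recent_if, recent_amstar, recent_id (grouping factor).
    fit_cited <- glmer(
      cited ~ prev_if + prev_year + prev_is_ma + concordant + similarity +
              recent_if + recent_amstar + (1 | recent_id),
      data = dat, family = binomial
    )

    # Fixed effects expressed as odds ratios with Wald 95% confidence intervals
    est <- fixef(fit_cited)
    se  <- sqrt(diag(as.matrix(vcov(fit_cited))))
    round(exp(cbind(OR = est, lower = est - 1.96 * se, upper = est + 1.96 * se)), 2)

Analogous models with ‘described’ or ‘discussed’ as the outcome would give the other two sets of odds ratios.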

In a sensitivity analysis we excluded previous articles published before 2010 to check whether the general pattern of results changed in the newer papers.

BH piloted the analysis on a sample of 10 studies, selecting and extracting all the data. AP independently extracted a random sample of 25%. An inter-rater reliability analysis using the Kappa coefficient was performed to determine the consistency between the raters [22]. Conflicts were resolved by discussion between BH and AP; if necessary, SL was involved. Results of the regression analyses are presented as odds ratios (OR) with associated 95% confidence intervals (CI).
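For reference, Cohen’s Kappa for two raters can be computed directly from the cross-tabulation of their judgements. The base-R sketch below is illustrative only: the rating vectors are hypothetical, and the original analysis may have used a dedicated package.

    # Cohen's kappa with an approximate 95% CI; assumes both raters use the
    # same set of categories, so the agreement table is square.
    cohen_kappa <- function(rater1, rater2) {
      tab <- table(rater1, rater2)
      n   <- sum(tab)
      po  <- sum(diag(tab)) / n                        # observed agreement
      pe  <- sum(rowSums(tab) * colSums(tab)) / n^2    # chance-expected agreement
      k   <- (po - pe) / (1 - pe)
      se  <- sqrt(po * (1 - po) / (n * (1 - pe)^2))    # approximate standard error
      c(kappa = k, lower = k - 1.96 * se, upper = k + 1.96 * se)
    }

    # Hypothetical example: two raters judging whether 8 previous articles were discussed
    cohen_kappa(c("yes", "no", "no", "yes", "no", "yes", "no", "no"),
                c("yes", "no", "yes", "yes", "no", "yes", "no", "no"))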

Results

We identified 52 recent meta-analyses and 242 previous meta-analyses and systematic reviews (including 24 previous systematic reviews without meta-analysis), covering a wide range of drugs and medical specialties. Table 2 shows summary characteristics of included studies, whereas Additional files 5 and 6 provide detailed information on the individual meta-analyses and systematic reviews.

Table 2 Summary characteristics of included studies

Out of 52 recent meta-analyses there were only four without previously published meta-analyses or systematic reviews. These four articles were excluded from the regression analysis. For the remaining 48 articles there were, on average, five (range 1 to 28, SD 4.6) previous meta-analyses or systematic reviews per paper.

Out of 242 previous meta-analyses and systematic reviews, approximately two-thirds were cited (159 out of 242, 66%), one-third described (86 out of 242, 36%), and only one-fifth discussed in the recent meta-analyses (49 out of 242, 20%). This pattern of results did not change when previous articles published before 2010 (that is, more than approximately two years before the recent meta-analyses) were excluded (see Figure 1).
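As a purely illustrative check of these headline proportions, exact (Clopper-Pearson) 95% confidence intervals can be derived from the reported counts; the intervals produced below are our own computation and are not reported in the paper.

    # Proportions of previous reviews cited/described/discussed, with exact 95% CIs
    counts <- c(cited = 159L, described = 86L, discussed = 49L)
    total  <- 242L
    for (name in names(counts)) {
      ci <- binom.test(counts[[name]], total)$conf.int
      cat(sprintf("%-10s %3d/%d = %4.1f%% (95%% CI %.1f%% to %.1f%%)\n",
                  name, counts[[name]], total,
                  100 * counts[[name]] / total, 100 * ci[1], 100 * ci[2]))
    }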

Figure 1. Results of the primary analysis: percentage of previous meta-analyses and systematic reviews that were cited, described, and discussed by the recent meta-analyses.

Citing a previous meta-analysis or systematic review by a recent meta-analysis was positively associated with publication of the previous article in a journal with a higher impact factor (OR, 1.49; 95% CI, 1.06 to 2.10) and more recent publication year (OR, 1.19; 95% CI, 1.03 to 1.37). Similar results were found for describing (higher impact factor: OR, 1.83; 95% CI, 1.27 to 2.62; more recent publication year: OR, 1.29; 95% CI, 1.08 to 1.55) as well as for discussing (higher impact factor: OR, 1.72; 95% CI, 1.16 to 2.55; more recent publication year: OR, 1.55; 95% CI, 1.17 to 2.06). Additionally, the probability of describing the previous article was inversely associated with the concordance of results (OR, 0.38; 95% CI, 0.17 to 0.88) and the probability of being discussed was increased for previous articles that employed meta-analytic methods (OR, 32.36; 95% CI, 2.00 to 522.85). The AMSTAR score of the recent meta-analysis as well as the similarity score were not significantly associated with any of the outcomes (see Table 3) and the nominal variables journal title and medical discipline were excluded from the regression analysis.

Table 3 Results of the regression analysis

The inter-rater reliability for the independent raters was found to be Kappa = 0.664 (P < 0.001; 95% CI, 0.607 to 0.721).

Discussion

We found that in recent meta-analyses on pharmacological interventions published in leading medical journals, the proportion citing, describing, or discussing previous meta-analyses and systematic reviews on the same topic was low. Specifically, only two-thirds of previous meta-analyses and systematic reviews were cited, one-third were described, and the results of only one in five were discussed in light of the recent meta-analyses’ findings.

For individual RCTs it has been pointed out that most new trials are not planned, designed, and interpreted in the context of existing systematic reviews and other relevant evidence [6,23]. Our findings suggest that this statement also applies to otherwise methodologically sound meta-analyses. A fundamental principle of meta-analyses and systematic reviews is that all relevant clinical trials should be considered. We believe that they should also outline previous meta-analyses and systematic reviews about the same topic. Understanding the existing literature is central to any new project. In the case of a meta-analysis, not referring to the results of previous meta-analyses and systematic reviews is especially problematic because it is likely to lead to confusion and misinformation among clinicians, patients, and policymakers, which is exactly the opposite of what any effort to synthesize scientific findings should achieve.

One could argue that citing 66% of previous relevant meta-analyses and systematic reviews is not a bad result, but we believe that simply providing a reference to another review is not enough. Systematic reviewers should place their results in the context of previous reviews, that is, provide a meaningful comment, comparison, or explanation of existing differences.

Moreover, this neglect is an example of inadequate study planning, suggesting that many authors do not perform the necessary literature search before initiating their own project [7,24]. This might very well be one of the reasons behind unnecessary duplication of effort in the health sciences [8]. As trenchantly expressed by Terry and colleagues, ‘The issue of knowing what research is currently being undertaken … is a black hole in the public health landscape’ [25]. Especially worrisome is the fact that authors who refer to similar previous papers rarely justify why their own project was undertaken given that similar work had recently been performed. In our sample we found 10 recent meta-analyses referring to very similar previous work (similarity score of four out of four). Only six of them justified why the same analysis was performed again (the most common reasons being that the previous paper needed to be updated or that discordant results required clarification). None of them mentioned rigorous replication as a reason, suggesting that an ‘efficient culture for replication of research’ [8] has yet to emerge in the health sciences.

Predictors

We found that this neglect to refer to previously published systematic reviews and meta-analyses was predicted by several variables. According to our model, previous meta-analyses and systematic reviews were more likely to be cited, described, or discussed by a recent meta-analysis if they were published more recently and in a journal with a higher impact factor, if their results differed from those of the recent meta-analysis, and if they employed meta-analytic methods.

Publication year

More-recent meta-analyses are simply more up to date and usually include more RCTs. However, this does not necessarily mean that an older meta-analysis should be neglected - depending, among other factors, on how many new studies have been published since, an older meta-analysis can still serve as a valuable source of information that should be included in the literature review. Importantly, when we excluded all the previous articles published before 2010, the general pattern of our main result did not change (see Figure 1), showing that neglecting to refer to and discuss previously published systematic reviews and meta-analyses persisted even for the most recent material.

Impact factor

Our results show that for each one-unit increase in impact factor, the odds of being cited by a recent meta-analysis increased by 49%. Although criticized, the impact factor is an important criterion for readers to assess the importance of scientific literature [26]. However, authors of systematic reviews should be especially careful not to miss important insights published outside high-impact-factor journals and should select evidence on grounds of methodological validity rather than simply high visibility [27].
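The arithmetic behind the ‘49%’ reading is simply the odds ratio itself; for illustration (the five-point comparison is our own hypothetical example, not taken from the paper):

    or_per_unit <- 1.49
    (or_per_unit - 1) * 100   # 49: percent increase in the odds of being cited per impact-factor point
    or_per_unit^5             # ~7.3: odds ratio implied for a 5-point difference in impact factor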

Different results

We hypothesize that omitting similar findings from previous papers may be a conscious or unconscious strategy by which authors artificially create ‘novelty value’ to gain an advantage during peer review and publication. Journals demand novel, ground-breaking results to qualify for acceptance [28], and revealing that another article, using similar methodology, has obtained the same results likely decreases the novelty of the submitted paper. Such practices distort readers’ understanding of the true landscape of the medical evidence.

Meta-analytic methods

Although it is generally acknowledged that meta-analysis can be an important and reliable source of information [29], we would like to emphasize that the methodology itself is no guarantee of scientific quality, and authors should be aware of both the strengths and weaknesses of this method [30].

Limitations

Our analysis has limitations. We decided to focus only on clinical journals with the highest impact factors, because they usually publish papers of high scientific quality [31]. Nevertheless, our sample may not be representative of all medical meta-analyses. Because we wanted to be systematic in our approach, we included the New England Journal of Medicine, although it does not publish many systematic reviews. We also restricted ourselves to pharmacological interventions. Therefore, our results do not necessarily generalize to other forms of treatment or other journals, although we do not see any obvious reason why the situation there should differ. We only used PubMed to identify the previous articles, so we might have missed some relevant meta-analyses or systematic reviews about a given topic. However, because we always included all previous meta-analyses and systematic reviews cited by the recent article (that is, all previous systematic reviews and meta-analyses that were on the reference list of a given recent article), our results represent a rather conservative estimate of the proportion of previous meta-analyses and systematic reviews that were cited, described, and discussed by the recent meta-analyses. Selection by a single reviewer and double extraction of only 25% of the data are further limitations of our study, but the level of agreement between reviewers was good according to the Kappa coefficient [22]. Moreover, this is not a review in which exact accuracy is essential - our primary result is very robust and our conclusions would not change even if the number of described and discussed papers were twice as high. Finally, our detailed description of the results in the Additional files allows verification and replication (see Additional file 7 for a list of references to all included meta-analyses and systematic reviews).

As we are not aware of any other research that could have guided our selection of predictors, we chose them based on our own expertise. Because of that, some of the measures we used have not been previously validated (similarity score of the review question, concordance of results), but we made sure they were as simple as possible and well-operationalized (including a priori definitions wherever possible). Moreover, the similarity score was based on the PICO questions that are considered essential in defining which studies to include and exclude [14] and constitute a well-recognized procedure [32].

Conclusions and policy implications

Upcoming systematic reviews and meta-analyses should include an outline of previous systematic work on the same topic. Such an outline should be recommended by evidence-based medicine guidelines and officially implemented in editorial policies. Currently, the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) Statement recommends that authors of systematic reviews explain in the introduction how their work adds to what is already known and whether it is a new review or an update (see item 3: Rationale) [33]. This is not sufficient. In this respect the Consolidated Standards of Reporting Trials (CONSORT) Statement seems more demanding, recommending that each new trial include a reference to a systematic review of previous similar trials or a note of the absence of such trials (see item 2a: Scientific background and explanation of rationale) [34]. We see no reason why systematic reviews should not follow an analogous procedure. The Cochrane Collaboration has already acknowledged this problem and includes an obligatory section, ‘Agreements and disagreements with other studies or reviews’, in its Review Manager software [35].

To reduce unnecessary duplication of research effort and adequately determine whether there is a need to undertake a new project, all systematic reviews and meta-analyses should be prospectively registered [8] using international registries of protocols, like PROSPERO [36].

Limiting the failure to refer to what is already known would make systematic reviews and meta-analyses a more useful, transparent, and valuable source of information for clinicians, researchers, policymakers, and patients. This simple step towards clarity and informativeness would enhance evidence-based practice as well as reduce waste in research resources [6-8,10,37] and reduce human suffering [38].

Abbreviations

AMSTAR: a measurement tool for the assessment of the methodological quality of systematic reviews

CI: confidence interval

CONSORT: Consolidated Standards of Reporting Trials

OR: odds ratio

PICO: participants, intervention, comparator, outcome

PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

PROSPERO: international prospective register of systematic reviews

RCT: randomized controlled trial

References

  1. Oxman AD, Guyatt GH. The science of reviewing research. Ann N Y Acad Sci. 1993;703:125–33.

  2. Bastian H, Glasziou P, Chalmers I. Seventy-five trials and eleven systematic reviews a day: how will we ever keep up? PLoS Med. 2010;7:e1000326.

  3. Robinson KA, Goodman SN. A systematic examination of the citation of prior research in reports of randomized, controlled trials. Ann Intern Med. 2011;154:50–5.

  4. Clarke M, Hopewell S, Chalmers I. Clinical trials should begin and end with systematic reviews of relevant evidence: 12 years and waiting. Lancet. 2010;376:20–1.

  5. Clarke M, Hopewell S. Many reports of randomised trials still don’t begin or end with a systematic review of the relevant evidence. J Bahrain Med Soc. 2013;24:145–8.

  6. Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet. 2009;374:86–9.

  7. Chalmers I, Bracken MB, Djulbegovic B, Garattini S, Grant J, Gülmezoglu AM, et al. How to increase value and reduce waste when research priorities are set. Lancet. 2014;383:156–65.

  8. Ioannidis JP, Greenland S, Hlatky MA, Khoury MJ, Macleod MR, Moher D, et al. Increasing value and reducing waste in research design, conduct, and analysis. Lancet. 2014;383:166–75.

  9. Chang SM, Carey T, Kato EU, Guise J-M, Sanders GD. Identifying research needs for improving health care. Ann Intern Med. 2012;157:439–45.

  10. Glasziou P, Altman DG, Bossuyt P, Boutron I, Clarke M, Julious S, et al. Reducing waste from incomplete or unusable reports of biomedical research. Lancet. 2014;383:267–76.

  11. Greenberg SA. How citation distortions create unfounded authority: analysis of citation network. BMJ. 2009;339:b2680.

  12. Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. BMJ. 2009;339:b2535.

  13. Centrum für Disease Management. http://www.cfdm.de/media/doc/Protocol%20Quoting%20Habits%20of%20Meta-analyses.doc.

  14. Schardt C, Adams MB, Owens T, Keitz S, Fontelo P. Utilization of the PICO framework to improve searching PubMed for clinical questions. BMC Med Inform Decis Mak. 2007;7:16.

  15. Hempel S, Newberry SJ, Maher AR, Wang Z, Miles JN, Shanman R, et al. Probiotics for the prevention and treatment of antibiotic-associated diarrhea: a systematic review and meta-analysis. JAMA. 2012;307:1959–69.

  16. D’Souza AL, Rajkumar C, Cooke J, Bulpitt CJ. Probiotics in prevention of antibiotic associated diarrhoea: meta-analysis. BMJ. 2002;324:1361.

  17. Makani H, Bangalore S, Desouza KA, Shah A, Messerli FH. Efficacy and safety of dual blockade of the renin-angiotensin system: meta-analysis of randomised trials. BMJ. 2013;346:f360.

  18. Kunz R, Friedrich C, Wolbers M, Mann JF. Meta-analysis: effect of monotherapy and combination therapy with inhibitors of the renin angiotensin system on proteinuria in renal disease. Ann Intern Med. 2008;148:30–48.

  19. Fox BD, Kahn SR, Langleben D, Eisenberg MJ, Shimony A. Efficacy and safety of novel oral anticoagulants for treatment of acute venous thromboembolism: direct and adjusted indirect meta-analysis of randomised controlled trials. BMJ. 2012;345:e7498.

  20. Loke YK, Kwok CS. Dabigatran and rivaroxaban for prevention of venous thromboembolism – systematic review and adjusted indirect comparison. J Clin Pharm Ther. 2011;36:111–24.

  21. Shea BJ, Grimshaw JM, Wells GA, Boers M, Andersson N, Hamel C, et al. Development of AMSTAR: a measurement tool to assess the methodological quality of systematic reviews. BMC Med Res Methodol. 2007;7:10.

  22. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33:159–74.

  23. Jones AP, Conroy E, Williamson PR, Clarke M, Gamble C. The use of systematic reviews in the planning, design and conduct of randomised trials: a retrospective cohort of NIHR HTA funded trials. BMC Med Res Methodol. 2013;13:50.

  24. Cooper N, Jones D, Sutton A. The use of systematic reviews when designing studies. Clin Trials. 2005;2:260–4.

  25. Terry RF, Salm JF, Nannei C, Dye C. Creating a global observatory for health R&D. Science. 2014;345:1302–4.

  26. Editorial. The impact factor game. PLoS Med. 2006;3:e291.

  27. Seglen PO. Why the impact factor of journals should not be used for evaluating research. BMJ. 1997;314:498–502.

  28. Bertamini M, Munafo MR. Bite-size science and its undesired side effects. Perspect Psychol Sci. 2012;7:67–71.

  29. Guyatt GH, Sackett DL, Sinclair JC, Hayward R, Cook DJ, Cook RJ. Users’ guides to the medical literature IX: a method for grading health care recommendations. JAMA. 1995;274:1800–4.

  30. Garg AX, Hackam D, Tonelli M. Systematic review and meta-analysis: when one study is just not enough. Clin J Am Soc Nephrol. 2008;3:253–60.

  31. Saha S, Saint S, Christakis DA. Impact factor: a valid measure of journal quality? J Med Libr Assoc. 2003;91:42–6.

  32. Huang X, Lin J, Demner-Fushman D. Evaluation of PICO as a knowledge representation for clinical questions. AMIA Annu Symp Proc. 2006;2006:359–63.

  33. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JP, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration. BMJ. 2009;339:b2700.

  34. Moher D, Hopewell S, Schulz KF, Montori V, Gøtzsche PC, Devereaux PJ, et al. CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. BMJ. 2010;340:c869.

  35. The Nordic Cochrane Centre. Review Manager (RevMan), version 5.2. Copenhagen: The Nordic Cochrane Centre, The Cochrane Collaboration; 2012.

  36. Booth A, Clarke M, Dooley G, Ghersi D, Moher D, Petticrew M, et al. The nuts and bolts of PROSPERO: an international prospective register of systematic reviews. Syst Rev. 2012;1:2.

  37. Siontis KC, Hernandez-Boussard T, Ioannidis JP. Overlapping meta-analyses on the same topic: survey of published studies. BMJ. 2013;347:f4501.

  38. Chalmers I. The lethal consequences of failing to make use of all relevant evidence about the effects of medical treatments: the need for systematic reviews. In: Rothwell P, editor. Treating individuals: from randomised trials to personalised medicine. London: Lancet; 2007. p. 37–58.


Acknowledgements

This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors. AC acknowledges support from the NIHR Oxford cognitive health Clinical Research Facility. JRG is an NIHR Senior Investigator.

Author information


Corresponding author

Correspondence to Bartosz Helfer.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

BH and SL together with JRG, AC and JMD conceived and designed the study. BH identified the eligible meta-analyses and, together with AP, extracted the data. GS, DM, SL, MTS, and BH performed the statistical analyses and interpreted the data. BH wrote the manuscript, and all authors revised it critically for content and approved the final version.

Authors’ information

BH is a researcher in evidence-based medicine at the Department of Psychiatry and Psychotherapy, Technical University Munich, Germany. AP is a research analyst at the Centre for Addiction and Mental Health, Toronto, Canada. MTS is a resident in psychiatry and researcher in evidence-based psychiatry at the Department of Psychiatry and Psychotherapy, Technical University Munich, Germany. JRG is Head of the Department of Psychiatry and Professor of Epidemiological Psychiatry at the University of Oxford, UK, and Director of the Oxford Clinical Trials Unit for Mental Illness. AC is an Associate Professor and Senior Clinical Researcher at the Department of Psychiatry at the University of Oxford, UK; Editor in Chief of Evidence-Based Mental Health; and Editor of the Cochrane Depression, Anxiety and Neurosis Group. JMD is Gilman Professor of Psychiatry and Research Professor of Medicine at the University of Illinois at Chicago, USA, and Editor of the Cochrane Schizophrenia Group. DM is a lecturer in Statistics at the Department of Primary Education, University of Ioannina, Greece. GS is Assistant Professor in Epidemiology at the Department of Hygiene and Epidemiology, University of Ioannina School of Medicine, Greece, and convener of the Cochrane Collaboration’s Statistical Methods Group and the Comparing Multiple Interventions Methods Group. SL is Professor and Vice-chairman of the Department of Psychiatry and Psychotherapy, Technical University Munich, Germany; Honorary Professor of Evidence-based Psychopharmacological Treatment at the University of Aarhus, Denmark; and Editor of the Cochrane Schizophrenia Group.

Additional files

Additional file 1:

PRISMA flow diagram for recent meta-analyses.

Additional file 2:

PRISMA flow diagrams for previous systematic reviews and meta-analyses.

Additional file 3:

Similarity score of the review question between recent and previous review: definitions and examples.

Additional file 4:

Concordance of results: example of similar and different results.

Additional file 5:

Summary characteristics of recent and previous studies.

Additional file 6:

Detailed characteristics of recent meta-analyses.

Additional file 7:

References to all included meta-analyses and systematic reviews.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.



Cite this article

Helfer, B., Prosser, A., Samara, M.T. et al. Recent meta-analyses neglect previous systematic reviews and meta-analyses about the same topic: a systematic examination. BMC Med 13, 82 (2015). https://doi.org/10.1186/s12916-015-0317-4
