Partially systematic thoughts on the history of systematic reviews

Abstract

Six years after the launch of Systematic Reviews by BioMed Central, this article is part of the celebration of the journal. It contains personal reflections on the past, present and future of systematic reviews, using examples relevant to the role of systematic reviews in cataloguing and analysing research, assessing quality and planning new studies. The focus is on the most common of the various types of systematic review in health and social care, namely those assessing the effects of interventions.

Background

Shortly before the start of the twentieth century, George Gould presented a vision to the inaugural meeting of the Association of Medical Librarians in Philadelphia on May 2, 1898: “I look forward to such an organisation of the literary records of medicine that a puzzled worker in any part of the civilized world shall in an hour be able to gain a knowledge pertaining to a subject of the experience of every other man in the world” [1].

Now, 120 years on, 114 years since Karl Pearson published a pooled analysis of the results of a series of studies to investigate the effects of a typhoid vaccine [2], 42 years since Gene Glass coined the term “meta-analysis” [3], 25 years since the Cochrane Collaboration (now, simply, Cochrane) was established, and 6 years since the journal Systematic Reviews began, I am pleased to be part of the journal’s celebration and to share my thoughts on the history of this type of research, which was recently described as entering a “midlife crisis” [4]. It is a history that the journal has helped to document, publishing research articles on many aspects of the methods for systematic reviews, which are likely to feature in historical accounts now and far into the future.

It is, of course, strange to be writing about the history of something that is living, growing and changing, and of which I feel very much a part. However, finding myself increasingly using the phrase “years ago” when talking about projects I have been involved in, I realise that some of what happened in the not-so-distant past is beginning to feel historical, as I reflected in an essay for the James Lind Library [5].

Many people have said that history belongs to the victors, and to some extent that is the case when we look back at what have become known as “systematic reviews” over the last century and more. But I suggest that there is also an important role for the persistent, for those who have stuck at the task despite the challenges. For people involved in systematic reviews, recent decades have seen some of these challenges overcome. Many tasks are now much easier for people preparing and using systematic reviews than even 10 years ago because of the persistence of individuals and organisations who have raised the profile, importance and value of systematic reviews. Initiatives that are now evolving and developing have come from that persistence and are likely to be seen as pivotal by those writing historical pieces in years and decades to come. I look at some of these areas in this article, helped by other accounts of the history of evidence synthesis [4,5,6,7,8].

I have not attempted to conduct my own “systematic review” of the history. Instead, I illustrate this history with examples relevant to the past, present and future, anchored around some of the key reasons for doing systematic reviews today. My main attention is on the role of systematic reviews in cataloguing and analysing research, but I also touch on their importance for assessing quality and planning new studies. I focus on the commonest type of systematic review in health and social care, those that assess the effects of interventions. Although many of the principles for doing these reviews have remained constant over time, their methods are evolving and a historian even just a few years from now may be looking back on a different landscape. They may be reflecting on even greater growth in the number of reviews that have been conducted as “rapid reviews” become more common as a means to quickly meet the specific needs of decision makers who do not wish to wait for a full systematic review. Rapid reviews speed up the processes for searching, appraisal and data extraction, and the analysis and synthesis of the findings of the included studies [9, 10]. And, in a further development, the concept of “living systematic reviews” has been introduced to refer to reviews where the updating process is almost continuous, adding new studies as soon as possible after they become available [11]. These accelerations may lead to bias, and analyses of their advantages and disadvantages compared with full reviews are likely either to confirm that things can be done more quickly without compromising quality or to instil caution in those users who feel that they cannot wait for a full review.

In recognition of how I am writing about a history that is itself “living”, and how I expect I have failed to refer to some key people, papers and developments, I welcome comment, feedback and debate on how my reflections match those of others who were, are and will be part of the past, present and future of systematic reviews.

Cataloguing

One of the challenges facing anyone wishing to use the enormous amount of research that has been conducted in health and social care, with dozens or even hundreds of individual studies addressing the same issue, is finding this material. In 1994, Cindy Mulrow estimated that more than two million articles were being published in biomedicine every year and drew a mental picture of a tower 500 m high if all the journals were piled on top of each other [12]. Since then, the arrival and proliferation of online-only journals means that we might no longer think of piles of print journals towering into the sky or occupying kilometres of library shelves, but rather of the number of terabytes of storage needed to hold all of these articles. However, the growth in the number of articles has accelerated and many more than two million articles are now published every year. Thinking only of research that evaluates the effects of health and social care interventions, tens of thousands of controlled trials are published annually and more than 120,000 trials were open to recruitment according to a search of the World Health Organization’s International Clinical Trials Registry Platform [13] in August 2018 (apps.who.int/trialsearch).

Systematic reviews help users to find their way through this morass to the studies that they are most interested in, and global efforts to facilitate systematic reviews have greatly improved access to the individual studies over the last few decades. In the 1970s, Archie Cochrane wrote “It is surely a great criticism of our profession that we have not organised a critical summary, by speciality or subspeciality, adapted periodically, of all relevant randomised controlled trials” [14]. And, when setting out one of the challenges to be overcome by The Cochrane Collaboration in a British Medical Journal editorial in 1992, Iain Chalmers, Kay Dickersin and Tom Chalmers wrote “failing to conduct systematic, up to date reviews of controlled trials of health care may result in substantial adverse consequences for patients, practitioners, the health services, researchers, and research funding bodies” [15].

Of course, individual examples of such catalogues of critical summaries of research exist from well before the late twentieth century, but these were focused on specific topics. Most notably perhaps, James Lind’s treatise on scurvy in the eighteenth century included not only his own experiments on ways to prevent this disease but also his “critical and chronological view of what has been published on the subject” [16].

Systematic reviews today provide a valuable resource as a catalogue of the studies addressing the specific question for the review, but even without the reviews themselves, much has been done to bring those studies together. When The Cochrane Collaboration was established in 1993, approximately 20,000 reports of randomised trials could be easily found in the bibliographic database MEDLINE. Through improved indexing and initiatives such as the MEDLINE retagging projects of the US and UK Cochrane Centres [17], this had increased to more than 460,000 PubMed records tagged with the publication type “randomized controlled trial” by August 2018. Nearly 60,000 of these were published before 1992 and can now be found with this simple search. The Cochrane Central Register of Controlled Trials itself now contains more than one million records, and the prospective registries of trials that were called for in 1986 by John Simes [18] allow users to find both ongoing studies and many more that have closed. However, the findings of many of these studies have not been published [19], highlighting that this problem of selective reporting, called “scientific misconduct” by Iain Chalmers in 1990 [20], is ongoing.
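For readers who want to reproduce counts like these, such a tagged search can be run against PubMed programmatically. The sketch below is illustrative only: it assumes the freely available Biopython package and NCBI’s E-utilities service, and the email address is a placeholder.

```python
# A minimal sketch of the "simple search" described above: counting PubMed
# records tagged with the publication type "randomized controlled trial".
# Assumes Biopython is installed (pip install biopython).
from Bio import Entrez

Entrez.email = "you@example.org"  # placeholder; NCBI asks for a real address

# retmax=0 because we only need the total count, not the records themselves
handle = Entrez.esearch(db="pubmed",
                        term="randomized controlled trial[pt]",
                        retmax=0)
record = Entrez.read(handle)
handle.close()
print("Records tagged as randomized controlled trials:", record["Count"])

# Adding a date limit shows how many pre-1992 reports are now retrievable
# with the same tag, as mentioned in the text.
handle = Entrez.esearch(db="pubmed",
                        term="randomized controlled trial[pt] AND 1900:1991[dp]",
                        retmax=0)
print("Published before 1992:", Entrez.read(handle)["Count"])
handle.close()
```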

Systematic reviews themselves are now in need of similar cataloguing exercises. Reviews are more readily retrievable in bibliographic databases than they were a few decades ago, but are increasing rapidly in number. Estimates for published reviews have risen from a total of about 3000 in MEDLINE for the whole of the period from 1980 to 2000 [21] to approximately 2500 per year in 2007 [22], through 4000 for 2010 [23] and towards 8000 for 2014 [24]. An August 2018 search of PubMed for records tagged with the publication type meta-analysis retrieved more than 11,000 records for publications from 2017 in that database alone, and Jessica Gurevitch et al. have estimated that the total available in the literature is already beyond 200,000 [4].

There are also collections of systematic reviews produced to a common standard, with an early exemplar being the dozens of systematic reviews of controlled trials relevant to maternity care brought together in the 1980s in Effective Care in Pregnancy and Childbirth [25], which evolved into The Cochrane Collaboration Pregnancy and Childbirth Database [7]. Today, we have the output produced by organisations such as Cochrane (www.CochraneLibrary.com), Campbell (www.campbellcollaboration.org/library.html) and the Joanna Briggs Institute (JBI) (lww.com/jbisrir/Pages/default.aspx).

There are also aggregators of systematic reviews, including the now archived Database of Abstracts of Reviews of Effects (DARE) for the effects of healthcare interventions generally (www.crd.york.ac.uk/CRDWeb), and smaller collections such as that produced by Evidence Aid to improve access to reviews relevant to the humanitarian sector (www.EvidenceAid.org) [26]. There is even a dedicated prospective register for systematic reviews, PROSPERO. In fact, the second article in Systematic Reviews in February 2012 presented the “nuts and bolts” of this new register [27]. A total of 200 reviews were registered in PROSPERO’s first 8 months, and it was warmly welcomed by Sally Davies, Director of the UK’s National Institute for Health Research [28]. By August 2018, nearly 40,000 ongoing reviews had been registered, with the register growing by more than 10,000 per year, and methodology research that uses these records is beginning to appear [29, 30]. Finally, individual reviews are themselves now being combined in overviews that bring together the findings of multiple reviews [31], and greater automation in the review process looks set to accelerate further both the number of reviews and the speed with which they are done [32,33,34].

Analysing

In 1884, Lord Rayleigh told the meeting of the British Association for the Advancement of Science in Montreal: “If, as is sometimes supposed, science consisted in nothing but the laborious accumulation of facts, it would soon come to a standstill, crushed, as it were, under its own weight. The suggestion of a new idea, or the detection of a law, supersedes much that has previously been a burden on the memory, and by introducing order and coherence facilitates the retention of the remainder in an available form. Two processes are thus at work side by side, the reception of new material and the digestion and assimilation of the old. One remark, however, should be made. The work which deserves, but I am afraid does not always receive, the most credit is that in which discovery and explanation go hand in hand, in which not only are new facts presented, but their relation to old ones is pointed out” [35].

When this combination of the old and the new is achieved using quantitative methods, we enter the realm of meta-analysis, the statistical combination of the results of related studies. In their recent account of the history of meta-analysis, which has a particular focus on its use over the last three decades in ecology, evolutionary biology and conservation, Gurevitch et al. described two different goals for those doing meta-analyses: a specific one, assessing the evidence for the effects of particular interventions or exposures on a particular problem, and a more comprehensive one that seeks broad generalisations across large numbers of study outcomes [4].
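To make concrete what this statistical combination typically involves, here is a generic sketch (the notation is mine, not taken from the papers cited) of the fixed-effect inverse-variance method, in which each study’s effect estimate is weighted by the inverse of its variance, so that more precise studies pull the pooled result more strongly towards their own:

```latex
% Fixed-effect inverse-variance meta-analysis: study i contributes an
% effect estimate \hat{\theta}_i with variance v_i and weight w_i = 1/v_i.
\[
\hat{\theta} = \frac{\sum_{i=1}^{k} w_i \hat{\theta}_i}{\sum_{i=1}^{k} w_i},
\qquad w_i = \frac{1}{v_i},
\qquad \operatorname{Var}\bigl(\hat{\theta}\bigr) = \frac{1}{\sum_{i=1}^{k} w_i}.
\]
```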

The term meta-analysis was coined by Gene Glass in the mid-1970s and used in his American Educational Research Association presidential address in April 1976, describing it as “the analysis of analyses” [3]. The following year, with Mary Lee Smith, Glass published one of the largest meta-analyses, using data from 375 studies of psychotherapy and counselling, with a total of more than 25,000 people [36], providing an example of the type of comprehensive approach described by Gurevitch et al. [4]. Around the same time, pivotal papers on the statistical methods for combining the results of studies to assess the more specific effects they describe were being published [37, 38]. Since then, as the statistical methods have developed, so have the means for displaying the results, with the introduction of the forest plot in the 1980s [39,40,41].

Looking to the future, some of the more advanced statistical techniques seem set to become more common. These include the re-analysis of the individual participant data from each included study [42, 43], an approach used as far back as 1970 [44] that is likely to be boosted by greater access to the data from trials [45, 46]. Meanwhile, the more recent introduction of the mixed treatment comparison, or network meta-analysis, approach [47] has seen a large rise in the number of published systematic reviews using this technique, with several hundred now available [48,49,50,51].
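The core idea behind these indirect and network comparisons can be sketched in generic notation (again mine, not drawn from the cited papers): if some trials compare treatment B with a common comparator A, and others compare treatment C with A, an estimate of B versus C can be formed even without any head-to-head trials, at the cost of a larger variance:

```latex
% Adjusted indirect comparison via a common comparator A: the B-versus-C
% effect is the difference of the two direct effects; the variances add.
\[
\hat{d}_{BC}^{\mathrm{ind}} = \hat{d}_{AC} - \hat{d}_{AB},
\qquad
\operatorname{Var}\bigl(\hat{d}_{BC}^{\mathrm{ind}}\bigr)
= \operatorname{Var}\bigl(\hat{d}_{AC}\bigr) + \operatorname{Var}\bigl(\hat{d}_{AB}\bigr).
\]
```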

Assessing quality and designing new studies

In 1753, Lind introduced his cataloguing of studies of scurvy and the need for their appraisal with: “As it is no easy matter to root out prejudices … it became requisite to exhibit a full and impartial view of what had hitherto been published on the scurvy … Indeed, before the subject could be set in a clear and proper light, it was necessary to remove a great deal of rubbish” [16]. Unfortunately, to this day, systematic reviews continue to face the problem that much of the existing research is “rubbish” or non-existent. A large proportion of Cochrane Reviews conclude that there is insufficient reliable evidence to determine whether there are important differences in the effects of the interventions they seek to compare.

The last couple of decades have seen historical developments in the assessment of the quality of existing research, leading to greater transparency in the process but also greater concern about the amount of research waste due to poor-quality studies [52]. The Cochrane Risk of Bias tools for randomised trials [53] and non-randomised studies of interventions [54] allow reviewers to show users how they reached their decisions, while the work of GRADE and GRADE-CERQual provides the means to present both the quality of the evidence brought together in a review and the strength of its recommendations [55,56,57]. We are also equipped to assess the quality of systematic reviews themselves, with tools developing from the original Oxman and Guyatt checklist [58] to two incarnations of AMSTAR [59, 60] for systematic reviews generally, and checklists for specific types of review, such as that from ISPOR for network meta-analyses [61]. There is also the QUOROM [62], now PRISMA, guidance for reporting reviews [63], with extensions for reviews using individual participant data [64] and network meta-analysis [65]. However, application of these tools reveals that much work needs to be done to improve the quality of many systematic reviews [66, 67].

Given how many systematic reviews reach conclusions that reflect the inability of the existing evidence base to answer their research question and highlight ongoing uncertainty about the relative effects of the interventions they investigated, one of the important benefits of systematic reviews is in providing the justification for the new studies that will fill this gap. Some of the early examples of systematic reviews recognised this. In 1976, Shaikh et al. reviewed 29 studies of tonsillectomy and adenoidectomy in children that had been published over the preceding 50 years and concluded by calling for a new, high-quality randomised trial that would ensure that “the methodologic pitfalls annotated in our review be guarded against” [68]. Reading the implications for research in more recent systematic reviews, one sees many similar calls [69]. Drawing on the past while looking to the future, these new studies need to be designed in the light of the existing evidence [70], report their findings in the context of the updated evidence [71, 72] and use methods that will minimise research waste [52, 73]. In doing so, they will feed into future systematic reviews, and their conduct, and that of the future reviews, will also be able to draw on another recent historical development: the systematic review of methodology [74]. The aforementioned rapid review process may be of particular relevance here, as a means to quickly identify research gaps and uncertainties. “Rapid Research Needs Assessments” can highlight the areas most in need of a new study, and the UK’s Public Health Rapid Support Team for disease outbreaks plans to work with Evidence Aid [26] to conduct such assessments in the early stages of a humanitarian emergency associated with a disease outbreak, in order to identify important uncertainties that could be tackled by initiating new research.

Conclusions

Returning to the nineteenth-century quote from George Gould with which I began this article, we now live in a world where technology means that people everywhere should be able to retrieve the knowledge they need about the effects of health and social care interventions in minutes, as long as barriers to access are not put in their way [75]. If this knowledge is to be reliable and to minimise bias, it will need to have been accumulated in systematic reviews. The last century and more, and particularly recent decades, have seen substantial developments to assist with this. Some of these can already be recognised as “historical”, others will become so in the years to come, and others will fade away. However, the ongoing persistence of those wishing to provide the evidence to help people making decisions and choices about their own health and social care, and that of others, will, I hope, ensure that systematic reviews continue to have a future, as well as a past.

References

  1. Gould GM. The work of an association of medical librarians. J Med Libr Assoc. 1898;1:15–9.

  2. Pearson K. Report on certain enteric fever inoculation statistics. Br Med J. 1904;3:1243–6.

  3. Glass GV. Primary, secondary and meta-analysis of research. Educ Res. 1976;10:3–8.

  4. Gurevitch J, Koricheva J, Nakagawa S, Stewart G. Meta-analysis and the science of research synthesis. Nature. 2018;555:175–82.

  5. Clarke M. History of evidence synthesis to assess treatment effects: personal reflections on something that is very much alive. JLL bulletin: commentaries on the history of treatment evaluation. J R Soc Med. 2016;109:154–63.

  6. Chalmers I, Hedges LV, Cooper H. A brief history of research synthesis. Eval Health Prof. 2002;25:12–37.

  7. Starr M, Chalmers I, Clarke M, Oxman AD. The origins, evolution, and future of the Cochrane database of systematic reviews. Int J Technol Assess Health Care. 2009;25(suppl 1):182–95.

  8. Clarke M, Chalmers I. Reflections on the history of systematic reviews. BMJ Evid Based Med. 2018;23(4):121–2.

  9. Ganann R, Ciliska D, Thomas H. Expediting systematic reviews: methods and implications of rapid reviews. Implement Sci. 2010;5:56.

  10. Tricco AC, Antony J, Zarin W, Strifler L, Ghassemi M, Ivory J, Perrier L, Hutton B, Moher D, Straus SE. A scoping review of rapid review methods. BMC Med. 2015;13:224.

  11. Elliott JH, Synnot A, Turner T, Simmonds M, Akl EA, McDonald S, Salanti G, Meerpohl J, MacLehose H, Hilton J, Tovey D, Shemilt I, Thomas J; Living Systematic Review Network. Living systematic review: 1. Introduction - the why, what, when, and how. J Clin Epidemiol. 2017;91:23–30.

  12. Mulrow CD. Rationale for systematic reviews. BMJ. 1994;309:597–9.

  13. Ghersi D, Pang T. From Mexico to Mali: four years in the history of clinical trial registration. J Evid Based Med. 2009;2(1):1–7.

  14. Cochrane AL. 1931-1971: a critical review, with particular reference to the medical profession. In: Medicines for the year 2000. London: Office of Health Economics; 1979. p. 1–11.

  15. Chalmers I, Dickersin K, Chalmers TC. Getting to grips with Archie Cochrane’s agenda. BMJ. 1992;305:786–8.

  16. Lind J. A treatise of the scurvy. In three parts. Containing an inquiry into the nature, causes and cure, of that disease. Together with a critical and chronological view of what has been published on the subject. Edinburgh: Printed by Sands, Murray and Cochran for A Kincaid and A Donaldson; 1753.

  17. Dickersin K, Manheimer E, Wieland S, Robinson KA, Lefebvre C, McDonald S, Central Development Group. Development of the Cochrane Collaboration’s central register of controlled clinical trials. Eval Health Prof. 2002;25:38–64.

  18. Simes RJ. Publication bias: the case for an international registry of clinical trials. J Clin Oncol. 1986;4(10):1529–41.

  19. Powell-Smith A, Goldacre B. The TrialsTracker: automated ongoing monitoring of failure to share clinical trial results by all major companies and research institutions. F1000Research. 2016;5:2629.

  20. Chalmers I. Underreporting research is scientific misconduct. JAMA. 1990;263:1405–8.

  21. Lee WL, Bausell RB, Berman BM. The growth of health-related meta-analyses published from 1980 to 2000. Eval Health Prof. 2001;24:327–35.

  22. Moher D, Tetzlaff J, Tricco AC, Sampson M, Altman DG. Epidemiology and reporting characteristics of systematic reviews. PLoS Med. 2007;4:e78.

  23. Bastian H, Glasziou P, Chalmers I. Seventy-five trials and eleven systematic reviews a day: how will we ever keep up? PLoS Med. 2010;7:e1000326.

  24. Page MJ, Shamseer L, Altman DG, Tetzlaff J, Sampson M, Tricco AC, Catalá-López F, Li L, Reid EK, Sarkis-Onofre R, Moher D. Epidemiology and reporting characteristics of systematic reviews of biomedical research: a cross-sectional study. PLoS Med. 2016;13(5):e1002028.

  25. Chalmers I, Enkin M, Keirse MJNC. Effective care in pregnancy and childbirth. Oxford: Oxford University Press; 1989.

  26. Allen C. A resource for those preparing for and responding to natural disasters, humanitarian crises, and major healthcare emergencies. J Evid Based Med. 2014;7:234–7.

  27. Booth A, Clarke M, Dooley G, Ghersi D, Moher D, Petticrew M, Stewart L. The nuts and bolts of PROSPERO: an international prospective register of systematic reviews. Syst Rev. 2012;1:2.

  28. Davies S. The importance of PROSPERO to the National Institute for Health Research. Syst Rev. 2012;1:5.

  29. Page MJ, Shamseer L, Tricco AC. Registration of systematic reviews in PROSPERO: 30,000 records and counting. Syst Rev. 2018;7:32.

  30. Ruano J, Gómez-García F, Gay-Mimbrera J, Aguilar-Luque M, Fernández-Rueda JL, Fernández-Chaichio J, Alcalde-Mellado P, Carmona-Fernandez PJ, Sanz-Cabanillas JL, Viguera-Guerra I, Franco-García F, Cárdenas-Aranzana M, Romero JLH, Gonzalez-Padilla M, Isla-Tejera B, Garcia-Nieto AV. Evaluating characteristics of PROSPERO records as predictors of eventual publication of non-Cochrane systematic reviews: a meta-epidemiological study protocol. Syst Rev. 2018;7:43.

  31. Hunt H, Pollock A, Campbell P, Estcourt L, Brunton G. An introduction to overviews of reviews: planning a relevant research question and objective for an overview. Syst Rev. 2018;7:39.

  32. Adams CE, Polzmacher S, Wolff A. Systematic reviews: work that needs to be done and not to be done. J Evid Based Med. 2013;6:232–5.

  33. Turner T, Green S, Tovey D, McDonald S, Soares-Weiser K, Pestridge C, Elliott J; on behalf of the Project Transform team and IKMD developers. Producing Cochrane systematic reviews—a qualitative study of current approaches and opportunities for innovation and improvement. Syst Rev. 2017;6:147.

  34. Gates A, Johnson C, Hartling L. Technology-assisted title and abstract screening for systematic reviews: a retrospective evaluation of the Abstrackr machine learning tool. Syst Rev. 2018;7:45.

  35. Rayleigh L. Address by the Rt. Hon. Lord Rayleigh. In: Report of the fifty-fourth meeting of the British Association for the Advancement of Science; August and September; Montreal, Canada. London: John Murray; 1885.

  36. Smith ML, Glass GV. Meta-analysis of psychotherapy outcome studies. Am Psychol. 1977;32:752–60.

  37. Peto R, Pike MC, Armitage P, Breslow NE, Cox DR, Howard SV, Mantel N, McPherson K, Peto J, Smith PG. Design and analysis of randomized clinical trials requiring prolonged observation of each patient. II. Analysis and examples. Br J Cancer. 1977;35(1):1–39.

  38. O'Rourke K. An historical perspective on meta-analysis: dealing quantitatively with varying study results. J R Soc Med. 2007;100:579–82.

  39. Lewis S, Clarke M. Forest plots: trying to see the wood and the trees. BMJ. 2001;322:1479–80.

  40. Lewis JA. Beta-blockade after myocardial infarction- a statistical view. Br J Clin Pharmacol. 1982;14:15S–21S.

  41. Antiplatelet Trialists’ Collaboration. Secondary prevention of vascular disease by prolonged anti-platelet treatment. BMJ. 1988;296:320–31.

  42. Clarke M, Stewart L, Pignon J-P, Bijnens L. Individual patient data meta-analyses in cancer. Br J Cancer. 1998;77:2036–44.

  43. Simmonds M, Stewart G, Stewart LA. A decade of individual participant data meta-analyses: a review of current practice. Contemp Clin Trials. 2015;45(Pt A):76–83.

  44. International Anticoagulant Review Group. Collaborative analysis of long-term anti-coagulant administration after acute myocardial infarction. Lancet. 1970;1:203–9.

  45. Varnai P, Rentel MC, Simmonds P, Sharp TA, Mostert B, de Jongh T. Assessing the research potential of access to clinical trial data. In: A report to the Wellcome Trust; 2014.

  46. Goldacre B, Lane S, Mahtani KR, Heneghan C, Onakpoya I, Bushfield I, Smeeth L. Pharmaceutical companies' policies on access to trial data, results, and methods: audit study. BMJ. 2017;358:j3334.

  47. Mbuagbaw L, Rochwerg B, Jaeschke R, Heels-Ansdell D, Alhazzani W, Thabane L, Guyatt GH. Approaches to interpreting and choosing the best treatments in network meta-analyses. Syst Rev. 2017;6:79.

  48. Edwards SJ, Clarke MJ, Wordsworth S, Borrill J. Indirect comparisons of treatments based on systematic reviews of randomised controlled trials. Int J Clin Pract. 2009;63:841–54.

  49. Lee AW. Review of mixed treatment comparisons in published systematic reviews shows marked increase since 2009. J Clin Epidemiol. 2014;67:138–43.

  50. Petropoulou M, Nikolakopoulou A, Veroniki AA, Rios P, Vafaei A, Zarin W, et al. Bibliographic study showed improving statistical methodology of network meta-analyses published between 1999 and 2015. J Clin Epidemiol. 2017;82:20–8.

  51. Zarin W, Veroniki AA, Nincic V, Vafaei A, Reynen E, Motiwala SS, et al. Characteristics and knowledge synthesis approach for 456 network meta-analyses: a scoping review. BMC Med. 2017;15:3.

  52. Ioannidis JP, Greenland S, Hlatky MA, Khoury MJ, Macleod MR, Moher D, Schulz KF, Tibshirani R. Increasing value and reducing waste in research design, conduct, and analysis. Lancet. 2014;383:166–75.

  53. Higgins JP, Altman DG, Gøtzsche PC, Jüni P, Moher D, Oxman AD, Savovic J, Schulz KF, Weeks L, Sterne JA; Cochrane Bias Methods Group; Cochrane Statistical Methods Group. The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928.

  54. Sterne JA, Hernán MA, Reeves BC, Savovic J, Berkman ND, Viswanathan M, Henry D, Altman DG, Ansari MT, Boutron I, Carpenter JR, Chan AW, Churchill R, Deeks JJ, Hróbjartsson A, Kirkham J, Jüni P, Loke YK, Pigott TD, Ramsay CR, Regidor D, Rothstein HR, Sandhu L, Santaguida PL, Schünemann HJ, Shea B, Shrier I, Tugwell P, Turner L, Valentine JC, Waddington H, Waters E, Wells GA, Whiting PF, Higgins JP. ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions. BMJ. 2016;355:i4919.

  55. Guyatt GH, Oxman AD, Vist GE, Kunz R, Falck-Ytter Y, Alonso-Coello P, Schünemann HJ, GRADE Working Group. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ. 2008;336:924–6.

  56. Goldet G, Howick J. Understanding GRADE: an introduction. J Evid Based Med. 2013;6:50–4.

  57. Lewin S, Glenton C, Munthe-Kaas H, Carlsen B, Colvin CJ, Gülmezoglu M, Noyes J, Booth A, Garside R, Rashidian A. Using qualitative evidence in decision making for health and social interventions: an approach to assess confidence in findings from qualitative evidence syntheses (GRADE-CERQual). PLoS Med. 2015;12(10):e1001895.

  58. Oxman AD. Checklists for review articles. BMJ. 1994;309:648–51.

  59. Shea BJ, Grimshaw JM, Wells GA, Boers M, Andersson N, Hamel C, Porter AC, Tugwell P, Moher D, Bouter LM. Development of AMSTAR: a measurement tool to assess the methodological quality of systematic reviews. BMC Med Res Methodol. 2007;7:10.

  60. Shea BJ, Reeves BC, Wells G, Thuku M, Hamel C, Moran J, Moher D, Tugwell P, Welch V, Kristjansson E, Henry DA. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ. 2017;358:j4008.

  61. Hoaglin DC, Hawkins N, Jansen JP, Scott DA, Itzler R, Cappelleri JC, Boersma C, Thompson D, Larholt KM, Diaz M, Barrett A. Conducting indirect-treatment-comparison and network-meta-analysis studies: report of the ISPOR task force on indirect treatment comparisons good research practices: part 2. Value Health. 2011;14(4):429–37.

  62. Moher D, Cook DJ, Eastwood S, Olkin I, Rennie D, Stroup DF. Improving the quality of reports of meta-analyses of randomised controlled trials: the QUOROM statement. Quality of Reporting of Meta-analyses. Lancet. 1999;354:1896–900.

  63. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JP, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Med. 2009;6:e1000100.

  64. Stewart LA, Clarke M, Rovers M, Riley RD, Simmonds M, Stewart G, Tierney JF, PRISMA-IPD Development Group. Preferred reporting items for systematic review and meta-analyses of individual participant data: the PRISMA-IPD statement. JAMA. 2015;313:1657–65.

  65. Hutton B, Salanti G, Caldwell DM, Chaimani A, Schmid CH, Cameron C, Ioannidis JP, Straus S, Thorlund K, Jansen JP, Mulrow C, Catalá-López F, Gøtzsche PC, Dickersin K, Boutron I, Altman DG, Moher D. The PRISMA extension statement for reporting of systematic reviews incorporating network meta-analyses of health care interventions: checklist and explanations. Ann Intern Med. 2015;162:777–84.

  66. Pussegoda K, Turner L, Garritty C, Mayhew A, Skidmore B, Stevens A, Boutron I, Sarkis-Onofre R, Bjerre LM, Hróbjartsson A, Altman DG, Moher D. Systematic review adherence to methodological or reporting quality. Syst Rev. 2017;6:131.

  67. Ioannidis JPA. The mass production of redundant, misleading, and conflicted systematic reviews and meta-analyses. Milbank Q. 2016;94:485–514.

  68. Shaikh W, Vayda E, Feldman W. A systematic review of the literature on evaluative studies on tonsillectomy and adenoidectomy. Pediatrics. 1976;57:401–7.

  69. Clarke L, Clarke M, Clarke T. How useful are Cochrane reviews in identifying research needs? J Health Serv Res Policy. 2007;12:101–3.

  70. Clarke M. Doing new research? Don’t forget the old: nobody should do a trial without reviewing what is known. PLoS Med. 2004;1:100–2.

  71. Clarke M, Hopewell S. Many reports of randomised trials still don’t begin or end with a systematic review of the relevant evidence. J Bahrain Med Soc. 2013;24:145–8.

  72. Robinson KA, Goodman SN. A systematic examination of the citation of prior research in reports of randomized, controlled trials. Ann Intern Med. 2011;154:50–5.

  73. Treweek S, Altman DG, Bower P, Campbell M, Chalmers I, Cotton S, Craig P, Crosby D, Davidson P, Devane D, Duley L, Dunn J, Elbourne D, Farrell B, Gamble C, Gillies K, Hood K, Lang T, Littleford R, Loudon K, McDonald A, McPherson G, Nelson A, Norrie J, Ramsay C, Sandercock P, Shanahan DR, Summerskill W, Sydes M, Williamson P, Clarke M. Making randomised trials more efficient: report of the first meeting to discuss the trial forge platform. Trials. 2015;16:261.

  74. McKenzie JE, Clarke MJ, Chandler J. Why do we need evidence-based methods in Cochrane? Cochrane Database Syst Rev. 2015;7:ED000102.

  75. Antes G, Clarke M. Knowledge as a key resource for health challenges. Lancet. 2012;379:195–6.

Author information

Contributions

The author wrote, read and approved the final manuscript.

Corresponding author

Correspondence to Mike Clarke.

Ethics declarations

Competing interests

I have spent most of my research career working on topics related to health and social care, for which systematic reviews play an important role. My employment and income depend in part on the value placed on systematic reviews.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Clarke, M. Partially systematic thoughts on the history of systematic reviews. Syst Rev 7, 176 (2018). https://doi.org/10.1186/s13643-018-0833-3
