
Scientific Value of Systematic Reviews: Survey of Editors of Core Clinical Journals

  • Joerg J. Meerpohl ,

    Contributed equally to this work with: Joerg J. Meerpohl, Florian Herrle

    meerpohl@cochrane.de

    Affiliations German Cochrane Center, Institute of Medical Biometry and Medical Informatics, University Medical Center Freiburg, Freiburg, Germany, Pediatric Hematology and Oncology, Center for Pediatrics and Adolescent Medicine, University Medical Center Freiburg, Freiburg, Germany

  • Florian Herrle ,

    Contributed equally to this work with: Joerg J. Meerpohl, Florian Herrle

    Affiliations German Cochrane Center, Institute of Medical Biometry and Medical Informatics, University Medical Center Freiburg, Freiburg, Germany, Department of Surgery, University Medical Center Mannheim, University of Heidelberg, Mannheim, Germany

  • Gerd Antes,

    Affiliation German Cochrane Center, Institute of Medical Biometry and Medical Informatics, University Medical Center Freiburg, Freiburg, Germany

  • Erik von Elm

    Affiliations German Cochrane Center, Institute of Medical Biometry and Medical Informatics, University Medical Center Freiburg, Freiburg, Germany, Cochrane Switzerland, IUMSP, University Hospital Lausanne, Lausanne, Switzerland

Correction

4 Oct 2012: Meerpohl JJ, Herrle F, Reinders S, Antes G, von Elm E (2012) Correction: Scientific Value of Systematic Reviews: Survey of Editors of Core Clinical Journals. PLOS ONE 7(10): 10.1371/annotation/b9a9cb87-3d96-47e4-a073-a7e97a19f47c. https://doi.org/10.1371/annotation/b9a9cb87-3d96-47e4-a073-a7e97a19f47c

Abstract

Background

Synthesizing research evidence using systematic and rigorous methods has become a key feature of evidence-based medicine and knowledge translation. Systematic reviews (SRs) may or may not include a meta-analysis, depending on the suitability of the available data. They are often criticised as ‘secondary research’ and denied the status of original research. Scientific journals play an important role in the publication process. How they appraise a given type of research influences the status of that research in the scientific community. We investigated the attitudes of editors of core clinical journals towards SRs and their value for publication.

Methods

We identified the 118 journals labelled as “core clinical journals” by the National Library of Medicine (USA) in April 2009. The journals’ editors were surveyed by email in 2009 and asked whether they considered SRs to be original research projects; whether they published SRs; and in which section of the journal they would consider an SR manuscript.

Results

The editors of 65 journals (55%) responded. Most respondents considered SRs to be original research (71%), and almost all journals (93%) published SRs. Several editors regarded the use of Cochrane methodology or of a meta-analysis as quality criteria; for some respondents, these criteria were prerequisites for considering SRs original research. Journals placed SRs in various sections such as “Review” or “Feature article”. Characterization of the non-responding journals showed that about two thirds of them publish systematic reviews.

Discussion

Currently, the editors of most core clinical journals consider SRs to be original research. Our findings are limited by a non-responder rate of 45%. Individual comments suggest that this is a grey area and that attitudes differ widely. A debate about the definition of ‘original research’ in the context of SRs is warranted.

Introduction

Since the first comparative study to answer a therapeutic question, conducted by James Lind in the 18th century [1], the number of medical research studies has been ever increasing. Accordingly, the need to synthesize research evidence has been recognized for over two centuries. A more formal approach, synthesizing the research evidence that accumulates in medical science in a systematic review, was not developed until the late 20th century and gained momentum with the advent of evidence-based medicine [2]. Besides providing a comprehensive overview, systematic reviews help to identify areas where further research is needed or, inversely, where it might be unnecessary or even unethical [3].

Evidence-based medicine has been called a “new paradigm” because it asks questions about health care in an answerable format and considers the best evidence available from clinical research. Systematic reviews have become the central and indispensable tool of evidence-based medicine for keeping abreast of the large quantity of new data being generated continuously [4].

In contrast to classical narrative reviews, systematic reviews use an explicit and rigorous methodology. They start with a clearly stated set of clinically relevant questions and pre-defined criteria for study inclusion. The scientific literature is then systematically searched with the aim of identifying all potentially relevant studies. After application of the eligibility criteria, the included studies are assessed for their internal validity, in particular the risk of bias. If possible, data are combined using meta-analytic methods [5]. By statistically combining information from all or a subset of the included studies, meta-analyses can provide pooled estimates that are more precise than those derived from individual studies [6]. The presence or absence of a meta-analysis is not a quality criterion in itself, since it depends directly on the studies identified and the data available for inclusion in the systematic review. Finally, the results of any systematic review need to be interpreted in the light of random error and with consideration of the external validity or applicability of the results.
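To make the pooling step concrete, the following minimal sketch (ours, not from the paper) illustrates inverse-variance fixed-effect pooling in Python; the effect estimates and standard errors are hypothetical, and real meta-analyses would typically use dedicated software and also consider random-effects models:

    import math

    def fixed_effect_pool(estimates, std_errors):
        # Inverse-variance weights: more precise studies (smaller SE)
        # contribute more to the pooled estimate.
        weights = [1.0 / se ** 2 for se in std_errors]
        pooled = sum(w * est for w, est in zip(weights, estimates)) / sum(weights)
        pooled_se = math.sqrt(1.0 / sum(weights))
        return pooled, pooled_se

    # Hypothetical log odds ratios and standard errors from three trials
    log_ors = [-0.30, -0.15, -0.45]
    ses = [0.20, 0.10, 0.25]
    pooled, pooled_se = fixed_effect_pool(log_ors, ses)
    # The pooled standard error (about 0.08) is smaller than that of any
    # single study, illustrating the precision gain described above.
    print(f"pooled log OR = {pooled:.3f}, SE = {pooled_se:.3f}")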

The general concept of systematic reviews and their methodology were originally developed in the social sciences. In medicine they have been applied predominantly for the evaluation of treatments [5]. With some adaptations, the general methodology of systematic reviews is also applicable to questions of diagnostic test accuracy and prognosis [7]–[9]. Systematic reviews usually provide more reliable findings than individual studies or non-systematic narrative reviews [10]–[13]. Consequently, more robust conclusions can be drawn which, in turn, may inform decision making on different levels, from individual patient care to the organization of health care systems.

Over the last 10 to 15 years, the number of published systematic reviews has increased markedly [14]. When carried out before the start of new clinical studies, a systematic review can help to optimize the allocation of limited research resources. Consequently, leading funding agencies such as the UK Medical Research Council require systematic reviews as part of grant applications [15]. Leading medical journals now advocate a systematic overview of the evidence as part of published reports of new randomized trials [16]. In recent years, the role of research syntheses has been further strengthened by the decision of the U.S. government in 2009 to allocate $1.1 billion to comparative-effectiveness research (CER) under the framework of the American Recovery and Reinvestment Act [17]. The Institute of Medicine (IOM) recently published two reports which underline the relevance of systematic review methodology both for CER and for evidence-based clinical practice guidelines [18], [19].

Despite these important functions and the recent prominent government support, the status and value of systematic reviews are still being disputed in academia. The debate is partly fueled by persistent misconceptions [20]. In the past, systematic reviews have been dubbed “secondary research”, in contrast to “primary or original research”, implying that they were less scientifically novel and required less methodological rigor than studies deemed primary research. Early opponents even spoke of “mega-silliness” and “statistical alchemy for the 21st century” [21].

At present, there is considerable heterogeneity across countries with regard to both the funding available for systematic reviews and their academic recognition. While in some countries, such as the U.K. and The Netherlands, medical faculties have established professorships and academic units for systematic reviews, the related methods still play only a minor role in medical student education in other countries. Similarly, the way methods of research synthesis are adopted and used varies widely across medical specialty fields.

Scientific journals play an important role in the dissemination of scientific knowledge by setting quality standards and determining the way in which research is being published. Journal editors can thus be considered gatekeepers. How they appraise a given type of research activity influences the recognition it receives in the scientific community. Given the influential role of journal editors, we set out to elucidate their attitudes towards systematic reviews. In particular, we were interested in the scientific status attributed to systematic reviews and their acceptability for publication in clinical journals.

Methods

We identified the 118 core clinical journals as defined by the National Library of Medicine (USA) as of April 2009 (see File S1). From the journals’ websites we retrieved the contact details of the main editorial offices and, if available, of the editors-in-chief. We sent out a short email questionnaire in April 2009 asking editors

  1. whether they would consider a systematic review manuscript an original research project,
  2. whether they would publish a systematic review in their journal, and
  3. in which section of their journal they would publish a systematic review.
A reminder email was sent in August 2009. For all but two of the responding journals, the editor-in-chief answered our survey; for the remaining two journals, other editorial staff responded. Ethics approval was not required since editors-in-chief were free to participate in our survey and the data were anonymized.

Two investigators then independently evaluated and classified the responses. If discrepancies occurred, consensus was reached in discussion with a third investigator. We extracted the ISI impact factor of the included journals from the Journal Citation Report 2009 [22]. All data were collated in a Microsoft Excel spreadsheet and used for descriptive statistics.
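The descriptive statistics reported below are simple enough to reproduce from such a spreadsheet. The following sketch assumes a hypothetical CSV export (journals.csv with columns journal, responded, impact_factor), which the paper does not describe:

    import csv
    from statistics import median

    with open("journals.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    responders = [r for r in rows if r["responded"] == "yes"]
    non_responders = [r for r in rows if r["responded"] == "no"]
    print(f"response rate: {len(responders) / len(rows):.0%}")

    for label, group in [("responders", responders),
                         ("non-responders", non_responders)]:
        # Skip journals without an ISI impact factor
        ifs = [float(r["impact_factor"]) for r in group if r["impact_factor"]]
        print(f"median impact factor, {label}: {median(ifs):.2f}")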

To characterize the group of non-responding journals, we developed a three-step process: 1) a PubMed search for systematic reviews classified as meta-analyses published in these journals in 2009; 2) hand-searching the 2009 content of the journals for which our PubMed search identified no meta-analysis; and 3) evaluation of the author instructions of the journals from step 2 to determine whether they would have published systematic reviews.
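Step 1 can be illustrated with the NCBI E-utilities esearch endpoint. The sketch below is our illustration, with example journal names, since the authors do not report their exact query; it counts a journal’s 2009 PubMed records indexed with the ‘Meta-Analysis’ publication type:

    import json
    import urllib.parse
    import urllib.request

    def meta_analysis_count(journal: str, year: int) -> int:
        # PubMed query: restrict by journal title, publication year,
        # and the 'Meta-Analysis' publication type.
        term = f'"{journal}"[ta] AND {year}[dp] AND meta-analysis[pt]'
        url = ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"
               + urllib.parse.urlencode({"db": "pubmed", "term": term,
                                         "retmode": "json"}))
        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)
        return int(data["esearchresult"]["count"])

    # Journals with a zero count would move on to hand-searching (step 2).
    for journal in ["JAMA", "Lancet"]:
        print(journal, meta_analysis_count(journal, 2009))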

Results

Seventeen (14%) of the 118 journals were general medical journals and 101 (86%) were specialty journals. The majority of the journals were published in the USA (Table 1). Editors of 65 journals (55%) responded to our survey. For three journals, not all questions were answered. The response rate was higher for editors of general medical journals (13/17; 76%) than for those of specialty journals (52/101; 52%). We received responses from 50% (51/101) of the U.S. journals and from 80% (12/15) of the British journals. The median ISI impact factor was 2.99 (range 0.29–52.6) for responder journals and 3.61 (range 0.40–69.0) for non-responder journals.

Table 1. Journal characteristics by survey responder status.

https://doi.org/10.1371/journal.pone.0035732.t001

Status of Systematic Reviews

Seventy-one percent (46/65) of the editors regarded systematic reviews as original research projects. Nine of them (29%) did so only if certain conditions were met (Table 2). For some editors, the use of Cochrane methodology [5] or of meta-analytic methods was a criterion for deciding whether a systematic review is considered original research. For illustration, Table 3 includes excerpts from the responses.

Table 2. Status of systematic reviews – results of survey of journal editors.

https://doi.org/10.1371/journal.pone.0035732.t002

Table 3. Comments illustrating editors’ attitudes towards systematic reviews.

https://doi.org/10.1371/journal.pone.0035732.t003

Acceptability for Publication

Of 64 respondents, 60 (94%) published systematic reviews (Table 2). Six (9%) did so only rarely, and four (6%) not at all. About a third of the journals published systematic reviews in a section dedicated to original research articles, and another third in a specific section for (systematic) reviews. Some journals either featured them as special articles or placed them in other sections (Table 2).

Non-responding Journals

Of the 53 journals that did not respond to our survey, 30 (56.6%) published at least one systematic review in 2009, two (3.8%) would accept systematic reviews according to their author instructions, and 21 (39.6%) neither published any systematic review nor explicitly mentioned systematic reviews in their author instructions. Of the 30 journals that published systematic reviews, 18 placed them in journal sections that contained original research articles.

Discussion

We surveyed editors of core clinical journals and found that most of them regarded systematic reviews as original research. Nearly all of the journals represented by these editors published systematic reviews. The respondents’ comments indicate that this is an ill-defined area. This was mirrored by the variety of criteria used to decide whether a submitted systematic review manuscript represented original research. For some respondents, the inclusion of a meta-analysis was the key argument, while others looked at the methods being used.

In our set of core clinical journals, the general attitude towards systematic reviews was rather positive. It is conceivable that a broader sample of biomedical journals, e.g., one including basic science journals, would have yielded a more conservative picture. The main limitation of our survey is that about 45% of the contacted journals did not respond, which could substantially change the results and affect their interpretation. While the proportion of non-responding journals that published systematic reviews was lower, the majority still published at least one in 2009. Interestingly, more than half of the non-responder journals that published systematic reviews seemed to consider them original research. If one assumes that the non-responding editors are more skeptical about systematic reviews than those who responded, then the overall results may be less positive.

From the large spectrum of responses we conclude that a debate about the status and the academic recognition of systematic reviews is warranted. A next step should be an in-depth analysis of the views of different stakeholders including researchers, funders, users of systematic reviews (e.g., policy makers) and again journal editors.

Ideally, the clinical research community would accept systematic reviews as a research category of their own, defined by methodological criteria, as is the case for other types of research. Provided certain quality criteria are fulfilled, systematic reviews should not be denied the academic recognition they deserve. Under these premises, their scientific value should be on par with that of conventional original research studies. This argument becomes even more compelling when one considers that systematic reviews are essentially observational studies of aggregate or individual data from previous studies.

Due to the continuous work of The Cochrane Collaboration and other international institutions and networks, the use and recognition of systematic reviews have increased considerably over the last 15 years [4]. However, the limited funding opportunities available for systematic review projects represent a main barrier to an even wider implementation. A clarification of the scientific status of systematic reviews might motivate researchers to undertake such projects to an even larger extent. If high-quality systematic reviews are accepted as valid research projects by the research community, then funding agencies might also be more open to supporting them financially, e.g., by creating specific grant schemes.

In conclusion, the attitudes of editors of clinical journals vary with regard to the value given to systematic reviews. Most responding editors regarded systematic reviews as original research projects based on varying criteria. This interpretation is limited by a non-responder rate of 45%. A debate about the scientific value of systematic reviews and their academic recognition is warranted and would help establish sustainable programs of evidence synthesis across countries and different fields of clinical research.

Supporting Information

File S1.

Journals invited to participate in survey.

https://doi.org/10.1371/journal.pone.0035732.s001

(DOCX)

Acknowledgments

We would like to thank Stefan Reinders and Rebecca Weida for their assistance. The open access publication of this work was supported by the Deutsche Forschungsgemeinschaft.

Author Contributions

Conceived and designed the experiments: JJM FH GA EvE. Performed the experiments: JJM FH. Analyzed the data: JJM FH EvE. Wrote the paper: JJM FH GA EvE.

References

  1. Lind J (1753) A treatise of the scurvy. In three parts. Containing an inquiry into the nature, causes and cure, of that disease. Together with a critical and chronological view of what has been published on the subject. Edinburgh: Sands, Murray and Cochran for A Kincaid and A Donaldson.
  2. Chalmers I, Hedges LV, Cooper H (2002) A brief history of research synthesis. Eval Health Prof 25: 12–37.
  3. Chalmers I, Glasziou P (2009) Avoidable waste in the production and reporting of research evidence. Lancet 374: 86–89.
  4. Bastian H, Glasziou P, Chalmers I (2010) Seventy-five trials and eleven systematic reviews a day: how will we ever keep up? PLoS Med 7: e1000326.
  5. Higgins JP, Green S, editors (2011) Cochrane Handbook for Systematic Reviews of Interventions, Version 5.1.0. Chichester, UK: John Wiley & Sons. Available: http://www.cochrane-handbook.org.
  6. Deeks J, Higgins JP, Altman D (2011) Analysing data and undertaking meta-analyses. In: Higgins JP, Green S, editors. Cochrane Handbook for Systematic Reviews of Interventions. Chichester, UK: John Wiley & Sons.
  7. Leeflang MM, Deeks JJ, Gatsonis C, Bossuyt PM (2008) Systematic reviews of diagnostic test accuracy. Ann Intern Med 149: 889–897.
  8. Altman DG (2001) Systematic reviews of evaluations of prognostic variables. BMJ 323: 224–228.
  9. Sutton AJ, Higgins JP (2008) Recent developments in meta-analysis. Stat Med 27: 625–650.
  10. Egger M, Smith GD, Phillips AN (1997) Meta-analysis: principles and procedures. BMJ 315: 1533–1537.
  11. Egger M, Smith GD (1997) Meta-analysis: potentials and promise. BMJ 315: 1371–1374.
  12. Antman EM, Lau J, Kupelnick B, Mosteller F, Chalmers TC (1992) A comparison of results of meta-analyses of randomized control trials and recommendations of clinical experts. Treatments for myocardial infarction. JAMA 268: 240–248.
  13. Lau J, Antman EM, Jimenez-Silva J, Kupelnick B, Mosteller F, et al. (1992) Cumulative meta-analysis of therapeutic trials for myocardial infarction. N Engl J Med 327: 248–254.
  14. Moher D, Tetzlaff J, Tricco AC, Sampson M, Altman DG (2007) Epidemiology and reporting characteristics of systematic reviews. PLoS Med 4: e78.
  15. Medical Research Council (2011) Funding opportunities. Available: http://www.mrc.ac.uk/Fundingopportunities/Grants/Trialgrant/Globalhealthtrials/MRC004151. Accessed 14 July 2011.
  16. Clarke M, Hopewell S, Chalmers I (2010) Clinical trials should begin and end with systematic reviews of relevant evidence: 12 years and waiting. Lancet 376: 20–21.
  17. VanLare JM, Conway PH, Sox HC (2010) Five next steps for a new national program for comparative-effectiveness research. N Engl J Med 362: 970–973.
  18. Institute of Medicine (US), Committee on Standards for Systematic Reviews of Comparative Effectiveness Research (2011) Finding What Works in Health Care: Standards for Systematic Reviews. Washington, DC: National Academies Press.
  19. Institute of Medicine (US) (2011) Clinical Practice Guidelines We Can Trust. Washington, DC: National Academies Press.
  20. Petticrew M (2001) Systematic reviews from astronomy to zoology: myths and misconceptions. BMJ 322: 98–101.
  21. Feinstein AR (1995) Meta-analysis: statistical alchemy for the 21st century. J Clin Epidemiol 48: 71–79.
  22. ISI Web of Knowledge (2009) Journal Citation Report 2008. Available: http://wokinfo.com/products_tools/analytical/jcr/. Accessed March 2010.