
Managing overlap of primary study results across systematic reviews: practical considerations for authors of overviews of reviews

Abstract

Background

Overviews often identify and synthesise a large number of systematic reviews on the same topic, which is likely to lead to overlap (i.e. duplication) in primary studies across the reviews. Using a primary study result multiple times in the same analysis overstates its sample size and number of events, falsely leading to greater precision in the analysis. This paper aims to: (a) describe types of overlapping data that arise from the same primary studies reported across multiple reviews, (b) describe methods to identify and explain overlap of primary study data, and (c) present six case studies illustrating different approaches to manage overlap.

Methods

We first updated the search in PubMed for methods from the MOoR framework relating to overlap of primary studies. One author screened the studies' titles and abstracts and any full-text articles retrieved, extracted methods data relating to overlap of primary studies, and mapped these data to the overlap methods from the MOoR framework. We also describe six case studies as examples of overviews that use specific overlap methods across the steps in the conduct of an overview. For each case study, we discuss potential methodological implications in terms of limitations, efficiency, usability, and resource use.

Results

Nine methods studies were found and mapped to the methods identified by the MOoR framework to address overlap. Overlap methods were mapped across four steps in the conduct of an overview – the eligibility criteria step, the data extraction step, the assessment of risk of bias step, and the synthesis step. Our overview case studies used multiple methods to reduce overlap at different steps in the conduct of an overview.

Conclusions

Our study underlines that there is currently no standard methodological approach to deal with overlap in primary studies across reviews. The level of complexity when dealing with overlap can vary depending on the yield, trends and patterns of the included literature and the scope of the overview question. Choosing a method might be dependent on the number of included reviews and their primary studies. Gaps in evaluation of methods to address overlap were found and further investigation in this area is needed.


Background

Navigating the expanding body of research literature is an increasing challenge for health practitioners, researchers and decision-makers. Global research output as a whole is growing rapidly; it is estimated that the number of publications indexed in Web of Science doubles every 9 years [1]. The number of systematic reviews published each year is also expanding [2, 3], and duplication of reviews on similar topics is common. For example, Doundoulakis et al. [4] found 57 meta-analyses on direct oral anticoagulants for stroke prevention in atrial fibrillation, despite restricting inclusion to meta-analyses with comprehensive search strategies and risk of bias assessments. Had their eligibility criteria been less restrictive, over 100 meta-analyses would have been found on this topic. Faced with a large volume of systematic reviews on the same topic, healthcare providers need a method to make sense of potentially conflicting, discrepant and overlapping information of varying quality [3].

Overviews of systematic reviews (i.e. umbrella reviews, meta-reviews, reviews of reviews, or reviews of meta-analyses [henceforth called overviews] [5]) offer a solution to this challenge by providing a method to synthesise results and conclusions at the systematic review level [6, 7]. Overviews are increasing in volume in response to the growing number of systematic reviews. From 2000 to 2020, 1218 overviews were published, the majority of which (886/1218 [73%]) appeared in the most recent 5-year period (2016-2020) [8].

Overviews often identify and synthesise a large number of systematic reviews on the same topic, which is likely to lead to overlap (i.e. duplication) in primary studies across the reviews. For example, a broad overview of 16 natural therapies, which were found largely to be ineffective, led to changes to Australia’s Private Health Insurance Act of 2007 [9]. Overlap can arise when systematic reviews on the same topic include one or more identical primary studies (e.g. randomised controlled trials (RCTs), cohort and cross-sectional studies). The overlapping data from the same primary studies reported across multiple systematic reviews may include: overlapping risk of bias assessments, overlapping pooled effect estimates across similar outcomes, overlapping meta-analysis results (e.g. I2 heterogeneity statistics), or overlapping certainty of the evidence assessments (e.g. Grading of Recommendations, Assessment, Development and Evaluations (GRADE)) [6, 7].

Example of overlapping primary study data

Synthesising systematic reviews with overlapping primary study data is a challenge for overview authors. As an example, we present three reviews included in a fictional overview, each with a set of RCTs, indicated by the coloured boxes in Fig. 1. One option for dealing with the overlap in RCTs is to base the results on only one systematic review, using methodological criteria to select that review, for example, choosing the review with the greatest number of trials (i.e. Review 1 with 8 trials). However, this 2008 review is out of date and leaves out the 4 more recent trials. Alternatively, overview authors could choose Review 2, the highest quality review; however, this review omits 6 trials. A third option is to include the most recent review, Review 3, but again 6 trials would be omitted.

Fig. 1 Three reviews included in a fictional overview with overlapping randomised controlled trials (RCTs)

Alternatively, all the reviews could be included, which then involves quantifying the overlap and considering its influence when summarizing the results across the reviews (narratively or statistically). The method used to deal with overlap of trials, as highlighted here, could influence the interpretation of results and the conclusions of the overview.

Overlap is a problem of precision related to sampling (i.e. it is not a bias). The inclusion of the same primary study in more than one systematic review gives undue weight to that study. Using a study multiple times in the same analysis overstates its sample size and number of events, falsely leading to greater precision in the analysis. This can affect both a narrative description of the results and a statistical synthesis (e.g. including the results from a primary study more than once in the same meta-analysis).
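To make the precision problem concrete, the toy calculation below (all effect estimates and variances are invented, not data from any included review) pools three hypothetical study results with a fixed-effect inverse-variance model, then pools them again with the first study counted twice, as could happen if two reviews contributing to the same synthesis both report it. The duplicated analysis returns a spuriously smaller standard error.

```python
import math

def pooled_estimate(effects, variances):
    """Fixed-effect inverse-variance pooling: returns (pooled effect, pooled SE)."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

effects = [0.30, 0.45, 0.20]    # hypothetical study effect estimates
variances = [0.04, 0.09, 0.05]  # hypothetical within-study variances

once = pooled_estimate(effects, variances)
# The first study counted a second time, e.g. via two reviews that both report it:
twice = pooled_estimate(effects + [effects[0]], variances + [variances[0]])

print(f"study counted once:  effect={once[0]:.3f}, SE={once[1]:.3f}")
print(f"study counted twice: effect={twice[0]:.3f}, SE={twice[1]:.3f}")  # spuriously smaller SE
```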

Methods development

Mapping, evaluation and development of methods used in overviews has grown over the last 12 years [10,11,12,13,14,15,16]. A systematic map of all methods used in overviews, called the MOoR framework, identified nine methods for managing overlap, used across four steps in the conduct of an overview [7, 16]. Since publication of the MOoR framework, further methods for handling overlap have been published [17,18,19]. These methods are being used in practice, and overview guidance has recently been updated [19, 20], but there has been limited evaluation of these methods.

This paper aims to: (a) describe the different types of overlapping data that arise from the same primary studies reported across multiple reviews, (b) describe methods to identify and explain overlap of primary study data, and (c) present six case studies illustrating different approaches to manage overlap.

Methods

We first updated the search for methods from the MOoR framework [7, 16, 21] relating to overlap of primary studies. We conducted a search in PubMed using the following algorithm: (method*[TI] OR meta-epidemiol*) in combination with the Boolean operator AND, and the search filter for overviews developed by Lunny et al. [5] (Additional file 1: Appendix A). We also conducted forward citation searching on a key study from 2014 [13] dealing with overlap using Google Scholar. Search dates were from January 2016 to March 2020.

We considered articles eligible for inclusion if they described methods used to manage overlapping data across primary studies in overviews of health interventions.

Inclusion criteria:

  a. Articles describing methods for overviews of systematic reviews of health interventions

  b. Articles examining methods used in a cross-section or cohort of overviews

  c. Guidance (e.g. handbooks and guidelines) for undertaking overviews

  d. Commentaries or editorials that discuss methods for overviews

Exclusion criteria:

  a. Articles published in languages other than English

  b. Articles describing methods for network meta-analysis

  c. Protocols or registered reports

  d. Articles exclusively about methods for overviews of other review types (i.e. not of interventions)

One author screened the studies' titles and abstracts, and any full-text articles retrieved, against the inclusion criteria. One author extracted methods data relating to overlap of primary studies and mapped them to the overlap methods from the MOoR framework (Additional file 1: Appendix B). The characteristics of the included studies, and of the case studies, were also extracted by one reviewer. Results are presented descriptively and in tables.

We also describe six case studies [22,23,24,25,26,27] as examples of overviews that use specific methods across the steps in the conduct of an overview. The case studies were purposively chosen to illustrate a variety of approaches to managing overlap. For each case study, we discuss potential implications in terms of methodological limitations, efficiency, usability, and resource use.

Results

Screening results

Our search strategy retrieved 119 unique records, and the forward citation searching retrieved 92 citations (Fig. 2). After deduplication, 199 records remained; after screening abstracts and full texts, seven were included [17,18,19, 28,29,30,31]. One additional conference citation was found through expert knowledge of the topic [32], and one recently published paper was included after completion of the first draft [33].

Fig. 2 Flowchart of included studies

Characteristics of methods studies

Six studies were articles describing methods for overviews, two were guidance documents, and one was an empirical study (Table 1).

Table 1 Characteristics of methods studies on overlapping primary study data across reviews

Methods studies identified and mapped to the MOoR framework

Nine studies were mapped to the methods identified by the MOoR framework to address overlap (Table 2). Several methods map across four steps in the conduct of an overview – the eligibility criteria step, the data extraction step, the assessment of risk of bias step, and the synthesis step. Seven of the nine studies looked at methods at the eligibility criteria step. One of these studies empirically evaluated the impact of five inclusion decisions on the conduct of an overview [19]. The authors found that when overviews contain overlapping primary studies, selecting a Cochrane systematic review, as opposed to the most recent or highest quality non-Cochrane review, maximized the amount of outcome data included in the overview [19].

Table 2 Methods identified from the MOoR Framework mapped to newly identified studies

At the eligibility criteria step, one common strategy is to limit the number of reviews included in the overview. This can be done by selecting one review, or a subset of reviews, from the multiple reviews addressing the same question, using pre-specified quality criteria (e.g. select only high quality systematic reviews) or a particular methodological approach (e.g. select systematic reviews with meta-analyses of four or more primary studies) (Table 2).

To determine whether overlap is present, methods proposed at the data extraction step allow abstraction of the data required to assess the extent of overlap across systematic reviews. Overlapping data from primary studies can then be managed using multiple methods at the synthesis step, including the options to: (a) use decision rules to select results for analysis from one, or a subset of, systematic reviews, (b) quantify the overlap, or (c) use statistical approaches to deal with overlap. Some methods, such as quantifying overlap using the corrected covered area (CCA [13]) or visually examining and presenting overlap of the primary studies across systematic reviews, may not directly resolve the issue but do provide data on the nature and extent of the problem.
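As a minimal sketch of how the CCA described by Pieper et al. [13] can be computed from a citation matrix, the snippet below applies the published formula CCA = (N − r) / (rc − r), where N is the total number of study inclusions across all reviews (the ticks in the matrix), r the number of unique primary studies (rows), and c the number of reviews (columns). The review labels and trial identifiers are invented for illustration.

```python
def corrected_covered_area(reviews):
    """reviews: dict mapping a review label to the set of primary-study IDs it includes."""
    n_ticks = sum(len(studies) for studies in reviews.values())  # N: all ticks in the matrix
    unique_studies = set().union(*reviews.values())              # rows of the citation matrix
    r, c = len(unique_studies), len(reviews)
    return (n_ticks - r) / (r * c - r)

example = {
    "Review A": {"trial1", "trial2", "trial3"},
    "Review B": {"trial2", "trial3", "trial4"},
    "Review C": {"trial1", "trial4", "trial5"},
}
print(f"CCA = {corrected_covered_area(example):.1%}")  # 40.0% in this invented example
```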

Overlap in information can also arise from duplicate risk of bias/quality assessments, or duplicate GRADE outcome assessments. Risk of bias data for the same primary study can differ across systematic reviews due to:

  • error in data extraction [34]

  • data extracted from different sources for the same primary study (e.g. different reports, unpublished data) [35]

  • data retrieved/not retrieved by contacting primary study authors [36]

  • different tools used to assess risk of bias which leads to missing or inconsistent information about potential biases [37], and/or

  • conflicting information reported to support judgements for the same risk of bias tool [38].

Discrepant and overlapping risk of bias assessments across systematic reviews can be resolved by: (a) extracting the risk of bias assessments of primary studies from the included systematic reviews and using data extraction approaches to manage missing, flawed, or discrepant assessments of the same primary study; or (b) re-assessing all primary studies using a common risk of bias tool. The data extraction approaches outlined in the MOoR framework to manage discrepant data across systematic reviews involve retrieving either published or registry reports of the primary studies, or contacting systematic review or primary study authors, or both, for clarification regarding discrepancies [7, 16].

Authors can take additional steps to manage overlapping information and data at the synthesis stage [7, 16]. Two non-statistical methods for resolving overlap in primary studies were identified in the MOoR framework:

  • Select the result of one (or a subset of) systematic reviews with or without meta-analysis using a decision rule or a published algorithm [39,40,41]

  • Identify systematic reviews with or without meta-analysis with 25% or more of their research in common and eliminate the one with fewer studies [11] (a sketch of this rule follows the list)
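A hedged sketch of the second rule is shown below. The 25% threshold comes from Cooper and Koenka [11], but how the proportion of shared studies is operationalised here (relative to the smaller review of each pair) is an assumption made for illustration, as are the review labels and study identifiers.

```python
from itertools import combinations

def apply_25_percent_rule(reviews):
    """reviews: dict mapping a review label to the set of primary-study IDs it includes.
    Returns the labels of reviews retained after applying the rule to each pair."""
    excluded = set()
    for (name_a, a), (name_b, b) in combinations(reviews.items(), 2):
        if name_a in excluded or name_b in excluded:
            continue
        shared = len(a & b)
        # Proportion of research in common, here taken relative to the smaller review
        # (an assumed operationalisation; the rule itself does not fix the denominator).
        if shared / min(len(a), len(b)) >= 0.25:
            excluded.add(name_a if len(a) < len(b) else name_b)
    return set(reviews) - excluded

reviews = {
    "Review A": {"s1", "s2", "s3", "s4", "s5", "s6"},
    "Review B": {"s1", "s2", "s7"},  # shares 2 of its 3 studies with Review A
    "Review C": {"s8", "s9"},
}
print(apply_25_percent_rule(reviews))  # Review B is eliminated; A and C are retained
```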

Three statistical methods for addressing the overlap in primary study data across systematic reviews were identified in the MOoR framework:

  • Conduct sensitivity analyses (e.g. second-order meta-analysis (MA) including all MAs irrespective of overlap compared with second-order MA including only MAs where there is no overlap in primary studies) [11]

  • Inflate the variance of the MA estimate [42]; that is, an inflation factor J can be applied to the second-order MA variance to correct for the underestimated variance estimator (see the sketch after this list).
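The sketch below illustrates only the mechanics of that correction: once a second-order pooled effect, its variance, and an inflation factor J are available, the variance is multiplied by J before the confidence interval is computed. How J is derived (see Tang et al. [42]) is not shown, and all numerical values are invented placeholders.

```python
import math

def inflate_second_order_variance(pooled_effect, pooled_variance, j, z=1.96):
    """Return the pooled effect and a 95% CI computed from the J-inflated variance."""
    se = math.sqrt(pooled_variance * j)
    return pooled_effect, (pooled_effect - z * se, pooled_effect + z * se)

# All values below are invented placeholders, not estimates from any overview.
effect, ci = inflate_second_order_variance(pooled_effect=0.35, pooled_variance=0.004, j=1.5)
print(f"effect = {effect:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```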

Presentation of case studies

Choosing between overlap methods often depends on the type of review being conducted and the clinical topic being investigated. We now present six case studies illustrating methods used at different steps in the conduct of an overview, with a commentary on the potential implications of those methods in terms of methodological limitations, efficiency, usability, and resource use.

1) Bidonde et al. exercise for adults with fibromyalgia

The overview by Bidonde [22] evaluates physical activity interventions for adults with fibromyalgia, focusing on four outcomes: pain, multidimensional function, physical function and adverse effects. To restrict the number of included reviews, the authors first selected only reviews meeting three or more of Cochrane’s criteria for a systematic review. These criteria were: (a) a focused question (i.e., contains a PICO [population, intervention, comparison, outcome] statement); (b) a comprehensive and explicit search (i.e., more than one database and other sources searched, keywords or MeSH terms given); (c) the use of explicit criteria to include and exclude RCTs; (d) explicit methods of extracting and synthesising study findings (quantitative); and (e) inferences that were evidence based.

The authors chose to deal with overlap at the synthesis, presentation and summary of findings step by quantifying the amount of overlap and presenting the results (Table 2). Across the 9 reviews, 29 (48%) RCTs overlapped between reviews and 31 (52%) were ‘unique’ RCTs; these data were presented in tables. For each review, the number of included RCTs was reported, followed by the number of RCTs overlapping between that review and each other review. An illustrative quote: “[The review by] Kelley 2010 included seven RCTs: three overlapped with [the review by] Bidonde, six with [the review by] Hauser, three with [the review by] Lima, one with McVeigh, and two with Ramel”.
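The bookkeeping behind such statements can be reproduced with a few set operations, as in the hedged sketch below; the review labels echo the quote above, but the trial identifiers and counts are invented and do not reproduce the Bidonde data.

```python
reviews = {
    "Kelley 2010": {"t1", "t2", "t3", "t4", "t5", "t6", "t7"},
    "Bidonde":     {"t1", "t2", "t3", "t8"},
    "Hauser":      {"t1", "t2", "t4", "t5", "t6", "t9"},
}

all_trials = set().union(*reviews.values())
overlapping = {t for t in all_trials
               if sum(t in studies for studies in reviews.values()) > 1}
unique = all_trials - overlapping
print(f"{len(all_trials)} trials in total: {len(overlapping)} overlapping, {len(unique)} unique")

for name, studies in reviews.items():
    shared = {other: len(studies & other_studies)
              for other, other_studies in reviews.items() if other != name}
    print(f"{name}: {len(studies)} trials; trials shared with each other review: {shared}")
```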

The methods described here for dealing with overlapping primary studies are resource-friendly. Restricting inclusion to systematic reviews helps reduce the amount of overlap, and overlap is then quantified at the synthesis stage. Neither of these methods resolves the problem of overlap, in the same way that judging studies as low or high quality does not resolve the issue of including low quality evidence in a review. As with quality appraisal, overlap should be minimised, quantified and used to contextualise the results and conclusions of the overview. Ordering results by amount of overlap may increase the prominence of reviews with low overlap, focusing attention on the results that should most influence conclusions. Synthesis of the results of reviews can be limited to those with little or no overlap as a sensitivity analysis.

2) Patnode et al. tobacco cessation in adults

We present the case of an overview by Patnode and colleagues [27], which aimed to compare and synthesise systematic review-level evidence on the effectiveness and safety of pharmacotherapy and behavioral tobacco cessation interventions among adults, including pregnant women and those with mental health conditions. Patnode [27] used nine methods, across four steps in the conduct of an overview, to deal with the overlap in primary studies among the 54 included systematic reviews (Table 2). To manage the overlap in primary studies, the authors chose to exclude non-systematic reviews. At the data extraction and assessment of risk of bias steps, they chose to exclude all low quality systematic reviews; quality of the reviews was assessed using the AMSTAR tool [43]. These methods restrict the number of systematic reviews that the authors must analyse at the synthesis step.

At the synthesis step, the authors developed a decision rule. If multiple good quality systematic reviews were identified, the decision rule was applied to determine which systematic review represented the most comprehensive, up-to-date literature base and highest quality to serve as the basis for the main findings (called “primary reviews”). To examine and quantify the amount of overlap across the included reviews at the synthesis step, included primary studies within each systematic review were compared to evaluate the comprehensiveness of each review and duplication in the included primary literature. Finally, overlap was visually presented in tables and figures.

The findings of the Pollock study [19] suggest that eliminating systematic reviews may lead to loss of information. However, without specific empirical testing, it is not known whether the Patnode [27] overview suffered a loss of information from eliminating low quality reviews. If Patnode [27] had included all systematic reviews, then older, less comprehensive, and low-quality systematic reviews would have been included, introducing untrustworthy evidence into the results of the overview. By limiting their findings to high quality and comprehensive systematic reviews, Patnode and colleagues [27] gain efficiency in the overview production, reduce the human resources needed to synthesise a large number of reviews, and produce an overview that is potentially more readable and useable.

3) Murphy et al. self-management interventions in chronic obstructive pulmonary disease

We describe the overview by Murphy [23], which aimed to determine the clinical effectiveness of self-management interventions for adults with chronic obstructive pulmonary disease (COPD). Self-management interventions were defined as “structured and personalized, and often multi-component, with goals of motivating, engaging and supporting the patients to positively adapt their health behaviors and develop skills to better manage their disease” [44]. Murphy et al. used five methods, across the four steps in the conduct of an overview, to manage the overlap in 165 unique primary studies across the 16 included systematic reviews (Table 2). To manage overlap of primary studies, called “crossover” by the authors, they first included all reviews meeting pre-determined eligibility criteria. During the data extraction and assessment of risk of bias stages, overlap across systematic reviews was assessed (Table 2). In cases of substantial overlap (over 70%), the higher quality review (assessed using R-AMSTAR [45]) was selected if it was published in the same year as, or more recently than, the comparison reviews.

Overlap of primary studies was visually presented in tables. Overlap was calculated as the proportion of primary studies from one systematic review found in another; however, this was not explicitly stated in the methods. Because the calculation of overlap (or crossover) was not explicitly reported, reproducibility is jeopardized. Furthermore, the authors do not report the reference review used to calculate the percentage overlap; without knowing the reference review, the percentage overlap is not reproducible [46]. Finally, the table reporting the overlap of RCTs across reviews has no legend to guide the reader in its interpretation. Given the multiple methods for managing overlap, overview authors should state explicitly and completely the methods used to calculate and assess overlap.
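The dependence on the reference review is easy to demonstrate. In the invented example below (study identifiers are hypothetical), the same four shared studies amount to 40% overlap relative to the larger review but 100% relative to the smaller one.

```python
# Study identifiers are hypothetical.
review_a = {"s1", "s2", "s3", "s4", "s5", "s6", "s7", "s8", "s9", "s10"}
review_b = {"s1", "s2", "s3", "s4"}

common = review_a & review_b
print(f"overlap relative to Review A: {len(common) / len(review_a):.0%}")  # 40%
print(f"overlap relative to Review B: {len(common) / len(review_b):.0%}")  # 100%
```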

Murphy et al. excluded two systematic reviews (Bentsen and Harrison) for high overlap. In both cases, reviews with a low number of primary studies were excluded in favour of the higher quality reviews (Zwerink and Jordan), which included substantially more primary studies. During data synthesis, the authors noted that meta-analysis at the overview level would be inappropriate given the high percentage of overall overlap found.

In summary, the management of overlap focused on the data extraction, assessment of risk of bias and synthesis stages, whilst maintaining broad eligibility criteria. Transparent reporting of the methods used to deal with overlap is necessary to interpret and reproduce the overlap results of overviews [46]. By selecting the higher quality review when overlap exceeded 70%, Murphy et al. minimize the impact of duplicate primary studies in their overview. By limiting inclusion to current, high quality reviews, Murphy and colleagues risk a loss of information (Pollock [19]), but gain in utility and efficiency, and potentially require fewer resources.

4) Prousali et al. efficacy and safety of interventions to control myopia progression in children

This case study involves an overview evaluating interventions aimed at slowing the progression of myopia in children [24]. Prevention of blindness and visual impairment from myopia was prioritised by the World Health Organisation (WHO)‘s VISION 2020 campaign [47]. This overview identified 18 systematic reviews that synthesized the efficacy of a variety of interventions for myopia from 44 unique primary studies (1989–2016). Prousali et al. used five methods across the four steps in the conduct of an overview to address overlap (Table 2). Reviews had to meet pre-defined eligibility criteria: they had to be systematic reviews of primary studies enrolling children or adolescents ≤18 years of age with myopia. Reviews without systematic search strategies or risk of bias assessment of primary studies were excluded. Interventions had to be optical or pharmacological and compared to single vision glasses, contact lenses or placebo. Primary outcomes were myopia progression and axial elongation.

A citation matrix was presented that identified primary studies included in more than one review. Overlap was quantified at the review level (as opposed to the outcome level) using the CCA [13], and high overlap was defined as 10% or more. In cases of high overlap, the reviews retained were those that (1) were the most recent, (2) contained the highest amount of information (not defined by the authors), and (3) had the lowest risk of bias, judged using ROBIS [48] and GRADE [49]. A meta-analysis was conducted using the unique primary studies included in the reviews. Overlap in this overview was estimated at 6.2% using the CCA method and considered moderate. Since overlap was estimated at below 10%, all included reviews were retained in the analysis.

The authors used the CCA to assess overlap and stated that it was “moderate”, without explaining how moderate overlap might affect the overview’s findings. The authors then performed a new meta-analysis, thus ignoring their overlap analysis and removing the challenge of overlap from the equation. While conducting a ‘de novo’ meta-analysis eliminates the problem of very high or very low overlap, it is more resource intensive, may not always be feasible, and may introduce indirectness (i.e. lack of applicability), as the primary studies were not screened against the overview’s primary objective and PICO eligibility criteria [50]. When there are significant differences between the PICO of the overview and the PICO of the primary studies included in the systematic reviews, certainty in the evidence decreases. A better strategy would have been to conduct a systematic review aiming to include primary studies directly related to Prousali’s research question, thus eliminating any problem of indirectness.

5) Ryan et al. interventions to improve safe and effective medicines use by consumers

In this updated Cochrane overview of interventions to improve safe and effective medicines use by consumers [25], the authors searched the Cochrane Database of Systematic Reviews (CDSR) and the Database of Abstracts of Reviews of Effects (DARE [51]). DARE contains summaries of the methods and conclusions of included systematic reviews, including a quality assessment. The authors dealt with overlap using two methods at the eligibility criteria step and one method at the assessment of risk of bias step (Table 2). Although it is common for Cochrane overviews to include only Cochrane reviews, Ryan et al. did not impose this restriction. All Cochrane reviews meeting the overview’s inclusion criteria were included, and non-Cochrane reviews were excluded if they substantially duplicated the content of Cochrane reviews and were of low quality. Non-Cochrane reviews were included if they covered PICO questions not addressed by Cochrane reviews. The rationale for prioritising the inclusion of Cochrane reviews over non-Cochrane reviews was that Cochrane reviews are regularly updated and are of higher quality [52,53,54,55].

Quality of the included systematic reviews was rated using two scales: the Centre for Reviews and Dissemination assessment included in DARE, and the authors’ own assessment using AMSTAR [43]. Reviews classified as low quality with DARE or AMSTAR (≤4 out of 11 possible points) were excluded. Further, non-Cochrane reviews with more than half of their studies already included in a Cochrane review were excluded. At the synthesis stage, Ryan et al. retained one Cochrane review with unique content, thus eliminating any issues related to overlapping primary study data (i.e. analysis of one high quality and comprehensive systematic review using decision rules (Table 2)).

There are three major advantages to including only relevant Cochrane reviews in an overview. First, quality, recency and comprehensiveness are higher in Cochrane reviews than in non-Cochrane reviews [52,53,54,55]. Second, according to Cochrane policy, two reviews cannot be published on the same topic, which decreases the risk of overlapping primary studies across reviews. Third, an empirical study found that selecting a Cochrane systematic review, as opposed to the most recent or highest quality non-Cochrane systematic review, maximized the amount of outcome data included in an overview [19], making that overview more relevant and useful for decision makers. However, including only Cochrane reviews of RCTs may exclude data from highly relevant qualitative reviews or quantitative reviews of observational studies.

6) Thabet et al. clinical and pharmacokinetic/dynamic outcomes of prolonged infusions of beta-lactam antimicrobials

Thabet et al. [26] conducted an overview investigating the comparative efficacy of prolonged infusions of beta-lactam antibiotics versus traditional intermittent infusions in hospitalized patients with infection. The authors used four methods across three steps in the conduct of an overview to address overlap (Table 2). At the eligibility step, systematic reviews had to have systematically searched the literature and synthesised clinical outcomes that included mortality or clinical cure. At the data extraction step, data were extracted from all included systematic reviews. Quantification and assessment of overlap occurred at the synthesis step using citation matrices and by calculating the CCA at the outcome level across all included reviews and between each pair of reviews.

Twenty-one reviews involving 71 primary studies were included. For each of the 9 clinical outcomes, a matrix of primary studies was created. Unique and overlapping studies were colour coded, and cases where overlap was impossible because of publication dates (i.e., the primary study was published after the systematic review) were also identified. Overlap was quantified using the CCA calculation across reviews for each outcome. The overlap thresholds determined by Pieper et al. [13] were used to interpret the measured overlap (0–5% slight, 6–10% moderate, 11–15% high, > 15% very high). To further characterize overlap by outcome, CCA calculations for pairs of reviews were performed and presented as a grid (Fig. 3). Thabet et al. found that overlap was moderate to high for each outcome, and the percentage of unique references ranged from 38 to 78%. The authors suggest that the pairwise CCA grid (Fig. 3) allows the reader to identify which combinations of paired reviews had the highest overlap, while the citation matrix (Fig. 4) allows the reader to see which primary studies were common among reviews. The citation matrix also helped the authors understand why some studies were not identified by multiple reviews.

Fig. 3 CCA calculations for pairs of reviews. Overlap categorization: 0–5% - slight (white), 6–10% - moderate (green), 11–15% - high (yellow), > 15% - very high (red)

Fig. 4 Citation matrix. Green indicates a trial included in a review, red indicates a trial excluded or omitted from a review, and black indicates that the dates of publication make a trial ineligible for inclusion in a review
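A pairwise CCA grid of this kind can be generated with the sketch below, which computes the CCA for every pair of reviews (c = 2 in the formula) and classifies each value against the Pieper et al. [13] thresholds. The review labels and study identifiers are invented and do not reproduce the Thabet data.

```python
from itertools import combinations

def pairwise_cca(studies_a, studies_b):
    n_ticks = len(studies_a) + len(studies_b)  # ticks in the two-review citation matrix
    r = len(studies_a | studies_b)             # unique primary studies in the pair
    return (n_ticks - r) / (r * 2 - r)         # CCA with c = 2 reviews

def classify(cca):
    if cca <= 0.05:
        return "slight"
    if cca <= 0.10:
        return "moderate"
    if cca <= 0.15:
        return "high"
    return "very high"

reviews = {
    "Review A": {"s1", "s2", "s3", "s4"},
    "Review B": {"s3", "s4", "s5"},
    "Review C": {"s6", "s7"},
}
for (name_a, a), (name_b, b) in combinations(reviews.items(), 2):
    cca = pairwise_cca(a, b)
    print(f"{name_a} vs {name_b}: CCA = {cca:.1%} ({classify(cca)})")
```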

In this overview, the authors explain how both high overlap and low overlap influenced the findings of the overview. By including all the systematic reviews identified by the search, data mapping exercises identified significant variability between reviews with respect to scope, quality and findings. For example, when two reviews had similar findings and low overlap for a particular outcome, the results of the reviews could be trusted despite differences in PICO scope (i.e. different populations or beta-lactams) or differences in dates of publication (i.e. publication dates spanned several years between reviews). Alternatively, when low overlap was found between reviews with inconsistent and discordant findings, discrepancies were most often attributed to differences in scope or differences in dates of review publication.

In this example, the authors chose to use the CCA and citation matrices to assess the influence of overlap on their overview findings. While this approach allows for a more comprehensive data mapping exercise, it also means that more reviews may be included, in which case discrepancies in primary study data and discordant findings/conclusions of authors need to be evaluated and explained.

A potential limitation of this approach is the complicated analysis of overlap by outcome. With nine clinical outcomes considered and overlap assessed for each, it may make for a cumbersome and long read for the clinician who is more interested in the bottom line. However, a complicated analysis of overlap will lead to more comprehensive and reliable findings. When deciding whether to undertake an overview, authors should take into consideration the known or anticipated trade-offs of using different overlap methods. More empirical research about the trade-offs associated with alternative overlap methodological approaches is needed.

Discussion

Our study underlines that there is no standard methodological approach to deal with overlap in primary studies across reviews. The level of complexity when dealing with overlap can vary depending on the yield, trends and patterns of the included literature and the scope of the overview question. Choosing a method might depend on the number of included reviews and their primary studies. For example, visual presentation of overlap becomes more challenging as the number of reviews increases. In overviews with high yields, the breadth and depth of analysis can be challenging and resource intensive. Creating large reports with too much information and data can limit the readability and utility of an overview for decision makers and healthcare providers, and decrease the efficiency of its production.

Several possible approaches to manage overlap have been illustrated by presenting six case studies. The methodological approaches can be categorized by the stages in the conduct of an overview. For example, at the eligibility step, the trade-off of authors choosing one systematic review among many is a loss of potentially important information, which may lead to greater uncertainty about the effects of the intervention, while at the same time removing the issue of overlap. Including all systematic reviews is likely to introduce overlap, and will lead to challenges when synthesising a large amount of review data (e.g. resolving discrepant quality assessments, standardising effect metrics). When including all systematic reviews, resolving these challenges is likely to be resource intensive and cumbersome for the reader. More importantly, ignoring overlap in primary study data from the included reviews may affect the trustworthiness of the overview findings. If overlap is not addressed at the inclusion or data extraction steps, overview authors are advised to quantify and assess the influence of overlap at the synthesis stage of the overview.

As a general rule, we think that creating citation matrices is helpful. Many authors find that a citation matrix is essential for interpreting the amount of overlap (e.g. using the CCA). However, better reporting is needed of the reference review used when calculating overlap, and of how overlap is calculated. Visual examination of citation matrices of primary studies included in reviews can be used to determine whether low overlap is related to temporal gaps in search time frames, gaps in research topics, or how studies are clustered. Smaller citation matrices by outcome can be developed, which will aid in determining whether overlap is an issue at the outcome level. More sophisticated methods for presenting overlap to readers, such as upset plots, heatmaps and node-link graphs, can be found in Bougioukas and colleagues [33].

Interpretation of the CCA has been an issue for many authors. First, CCA calculations for all primary studies across reviews, CCA calculations between 2 reviews only, and CCA calculations for one outcome [17] can yield vastly different results. It may be the case, as in Thabet et al., that pairs of systematic reviews had high overlap but the overview as a whole showed low or moderate overlap. Conducting all three overlap analyses can provide insight into which pairs of systematic reviews and which outcomes have low overlap, thus helping authors highlight areas of trustworthy evidence, but doing so is more resource intensive.

When high overlap is found at the outcome level, only a few methods can be used to explain or resolve it. Examining potential reasons for different results or conclusions across reviews with high overlap can be highly informative and may resolve the issue. When overlap remains and presents a problem, one review may be chosen for the overview synthesis. The limitation of this choice is that the one review may not represent the totality of evidence on the topic, and a loss of data may result. Statistical methods (e.g. conducting sensitivity analyses, inflating the variance of the pooled meta-analysis estimate) can also be used to manage overlap in primary studies across reviews, although these methods have not been frequently used in practice.

In general, there is a lack of empirical evidence testing these methods. We identified one study empirically evaluating the impact of five inclusion decisions on the conduct of an overview. The authors found that when overviews contain overlapping primary studies, selecting a Cochrane systematic review, as opposed to the most recent or highest quality non-Cochrane systematic review, maximized the amount of outcome data included in the overview [19]. While this study makes an important contribution to the empirical methods literature for overviews, significant gaps exist in evaluation of the methods used in the majority of steps and sub-steps in the conduct of an overview, especially methods used to resolve overlap. Evaluations of methods can provide evidence, which allows researchers to make informed choices about the most appropriate methods to use when conducting a study.

Strengths and limitations of this study

Due to resource limitations, one author screened the studies' titles/abstracts and full-text articles against the eligibility criteria, and extracted methods data relating to overlap. We recognise this is a limitation of our study, as relevant citations could have been missed and data errors could have been introduced. We only searched one database for relevant studies, but because (a) two of the authors are experts in overview methods and follow the literature closely, and (b) we conducted forward citation searching on a seminal paper from 2014 on overlap in overviews [13], we believe we have captured all relevant papers on this topic.

Conclusions

Nine studies were found and mapped to the methods identified by the MOoR framework to manage overlap. Six studies were articles describing methods for overviews, two were guidance documents, and one was an empirical study. Several methods map across four steps in the conduct of an overview – the eligibility criteria step, the data extraction step, the assessment of risk of bias step, and the synthesis step. The methods can be used across the steps in the conduct of an overview depending on the nature and scope of the topic. No one standardised methodological approach exists to visualise, quantify and resolve overlap in primary studies across reviews.

Gaps in the evaluation of methods to address overlap were found, and further investigation in this area is needed. Evaluation of the methods used in overviews is important, as policymakers and clinicians need to be confident that the methods used to conduct overviews result in valid and reliable evidence.

Availability of data and materials

All data generated or analysed during this study are included in this published article.

References

  1. Bornmann L, Mutz R. Growth rates of modern science: a bibliometric analysis based on the number of publications and cited references. J Assoc Inf Sci Technol. 2015;66(11):2215–22 https://0-doi-org.brum.beds.ac.uk/10.1002/asi.23329.


  2. Bastian H, Glasziou P, Chalmers I. Seventy-five trials and eleven systematic reviews a day: how will we ever keep up? PLoS Med. 2010;7(9):e1000326 https://0-doi-org.brum.beds.ac.uk/10.1371/journal.pmed.1000326.


  3. Ioannidis JP. The mass production of redundant, misleading, and conflicted systematic reviews and meta-analyses. Milbank Q. 2016;94(3):485–514 https://0-doi-org.brum.beds.ac.uk/10.1111/1468-0009.12210.


  4. Doundoulakis I, Antza C, Apostolidou-Kiouti F, Akrivos E, Karvounis H, Kotsis V, et al. Overview of systematic reviews of non–vitamin K Oral anticoagulants in atrial fibrillation: evidence of publication overlap. Circulation. 2018;11(12):e004769 https://0-doi-org.brum.beds.ac.uk/10.1161/CIRCOUTCOMES.118.004769.


  5. Lunny C, McKenzie JE, McDonald S. Retrieval of overviews of systematic reviews in MEDLINE was improved by the development of an objectively derived and validated search strategy. J Clin Epidemiol. 2016;74:107–18 https://0-doi-org.brum.beds.ac.uk/10.1016/j.jclinepi.2015.12.002.


  6. Hunt H, Pollock A, Campbell P, Estcourt L, Brunton G. An introduction to overviews of reviews: planning a relevant research question and objective for an overview. Syst Rev. 2018;7(1):39 https://0-doi-org.brum.beds.ac.uk/10.1186/s13643-018-0695-8.


  7. Lunny C, Brennan SE, McDonald S, McKenzie JE. Toward a comprehensive evidence map of overview of systematic review methods: paper 1-purpose, eligibility, search and data extraction. Syst Rev. 2017;6(1):231 https://0-doi-org.brum.beds.ac.uk/10.1186/s13643-017-0617-1.


  8. Lunny C, Neelakant T, Chen A, Shinger G, Stevens A, Tasnim S, Sadeghipouya S, Adams S, Zheng YW, Lin L, Yang PH. Bibliometric study of ‘overviews of systematic reviews’: evaluation of prevalence, citation and impact factor. 2021. PREPRINT (Version 1) available at Research Square. https://0-doi-org.brum.beds.ac.uk/10.21203/rs.3.rs-237881/v1.


  9. Australian Government Department of Health. Review of the Australian Government Rebate on Natural Therapies for Private Health Insurance. Canberra: Communication Branch, Department of Health; 2015.


  10. Becker LA, Oxman AD. Chapter 22: Overviews of Reviews, in Cochrane Handbook for Systematic Reviews of Interventions, J.P. Higgins, Green, S., Editor. 2008, Cochrane. p. 607–631.

  11. Cooper H, Koenka AC. The overview of reviews: unique challenges and opportunities when research syntheses are the principal elements of new integrative scholarship. Am Psychol. 2012;67(6):446–62 https://0-doi-org.brum.beds.ac.uk/10.1037/a0027119.


  12. Thomson D, Russell K, Becker L, Klassen T, Hartling L. The evolution of a new publication type: steps and challenges of producing overviews of reviews. Res Synth Methods. 2010;1(3–4):198–211 https://0-doi-org.brum.beds.ac.uk/10.1002/jrsm.30.


  13. Pieper D, Antoine SL, Mathes T, Neugebauer EAM, Eikermann M. Systematic review finds overlapping reviews were not mentioned in every other overview. J Clin Epidemiol. 2014;67(4):368–75 https://0-doi-org.brum.beds.ac.uk/10.1016/j.jclinepi.2013.11.007.


  14. Hartling L, Chisholm A, Thomson D, Dryden DM. A descriptive analysis of overviews of reviews published between 2000 and 2011. PLoS One. 2012;7(11):e49667 https://0-doi-org.brum.beds.ac.uk/10.1371/journal.pone.0049667. Epub 2012 Nov 15.


  15. Smith V, Devane D, Begley CM, Clarke M. Methodology in conducting a systematic review of systematic reviews of healthcare interventions. BMC Med Res Methodol. 2011;11(1):15 https://0-doi-org.brum.beds.ac.uk/10.1186/1471-2288-11-15.


  16. Lunny C, Brennan SE, McDonald S, McKenzie JE. Toward a comprehensive evidence map of overview of systematic review methods: paper 2-risk of bias assessment; synthesis, presentation and summary of the findings; and assessment of the certainty of the evidence. Syst Rev. 2018;7(1):159 https://0-doi-org.brum.beds.ac.uk/10.1186/s13643-018-0784-8.


  17. Hennessy EA, Johnson BT. Examining overlap of included studies in meta-reviews: guidance for using the corrected covered area index. Res Synth Methods. 2020;11(1):134–45 https://0-doi-org.brum.beds.ac.uk/10.1002/jrsm.1390.


  18. Pollock A, Campbell P, Brunton G, Hunt H, Estcourt L. Selecting and implementing overview methods: implications from five exemplar overviews. Syst Rev. 2017;6(1):145 https://0-doi-org.brum.beds.ac.uk/10.1186/s13643-017-0534-3.


  19. Pollock M, Fernandes RM, Newton AS, Scott SD, Hartling L. The impact of different inclusion decisions on the comprehensiveness and complexity of overviews of reviews of healthcare interventions. Syst Rev. 2019;8(1):18 https://0-doi-org.brum.beds.ac.uk/10.1186/s13643-018-0914-3.


  20. Gates M, Gates A, Guitard S, Pollock M, Hartling L. Guidance for overviews of reviews continues to accumulate, but important challenges remain: a scoping review. Syst Rev. 2020;9(1):254 https://0-doi-org.brum.beds.ac.uk/10.1186/s13643-020-01509-0.


  21. Lunny C, Brennan SE, McDonald S, McKenzie JE. Evidence map of studies evaluating methods for conducting, interpreting and reporting overviews of systematic reviews of interventions: rationale and design. Syst Rev. 2016;5(1):4 https://0-doi-org.brum.beds.ac.uk/10.1186/s13643-015-0178-0.


  22. Bidonde J, Busch A, Bath B, Milosavljevic S. Exercise for adults with fibromyalgia: an umbrella systematic review with synthesis of best evidence. Curr Rheumatol Rev. 2014;10(1):45–79 https://0-doi-org.brum.beds.ac.uk/10.2174/1573403X10666140914155304.


  23. Murphy LA, Harrington P, Taylor SJC, Teljeur C, Smith SM, Pinnock H, et al. Clinical-effectiveness of self-management interventions in chronic obstructive pulmonary disease: an overview of reviews. Chron Respir Dis. 2017;14(3):276–88 https://0-doi-org.brum.beds.ac.uk/10.1177/1479972316687208.


  24. Prousali E, Haidich AB, Fontalis A, Ziakas N, Brazitikos P, Mataftsi A. Efficacy and safety of interventions to control myopia progression in children: an overview of systematic reviews and meta-analyses. BMC Ophthalmol. 2019;19(1):106 https://0-doi-org.brum.beds.ac.uk/10.1186/s12886-019-1112-3.


  25. Ryan R, Santesso N, Lowe D, Hill S, Grimshaw J, Prictor M, et al. Interventions to improve safe and effective medicines use by consumers: an overview of systematic reviews. Cochrane Database Syst Rev. 2014;2014(4):CD007768 https://0-doi-org.brum.beds.ac.uk/10.1002/14651858.CD007768.pub3.


  26. Thabet P, Joshi A, MacDonald E, Hutton B, Cheng W, Stevens A, et al. Clinical and pharmacokinetic/dynamic outcomes of prolonged infusions of beta-lactam antimicrobials: An overview of systematic reviews. PLoS One. 2021;16(1):e0244966 https://0-doi-org.brum.beds.ac.uk/10.1371/journal.pone.0244966.


  27. Patnode CD, et al., U.S. Preventive Services Task Force Evidence Syntheses, formerly Systematic Evidence Reviews, in Behavioral Counseling and Pharmacotherapy Interventions for Tobacco Cessation in Adults, Including Pregnant Women: A Review of Reviews for the U.S. Preventive Services Task Force. Rockville: Agency for Healthcare Research and Quality (US); 2015.

  28. Pollock, M., Fernandez, R.M., Becker, L., Pieper, D., Hartling, L., Chapter V: Overviews of reviews, in Cochrane Handbook for Systematic Reviews of Interventions version 6.0 (updated July 2019), T.J. Higgins JPT, Chandler J, Cumpston M, Li T, Page MJ, Welch VA (editors), Editor. 2019, Cochrane. Available from www.training.cochrane.org/handbook.

  29. Ballard M, Montgomery P. Risk of bias in overviews of reviews: a scoping review of methodological guidance and four-item checklist. Res Synth Methods. 2017;8(1):92–108 https://0-doi-org.brum.beds.ac.uk/10.1002/jrsm.1229.


  30. Hennessy EA, Johnson BT, Keenan C. Best practice guidelines and essential methodological steps to conduct rigorous and systematic meta-reviews. Appl Psychol Health Well Being. 2019;11(3):353–81 https://0-doi-org.brum.beds.ac.uk/10.1111/aphw.12169.


  31. Pollock M, Fernandes RM, Newton AS, Scott SD, Hartling L. A decision tool to help researchers make decisions about including systematic reviews in overviews of reviews of healthcare interventions. Syst Rev. 2019;8(1):29 https://0-doi-org.brum.beds.ac.uk/10.1186/s13643-018-0768-8.


  32. Pérez-Bracchiglione J, Niño de Guzmán E, Roqué Figuls M, Urrútia G. Graphical representation of overlap degree of primary studies in systematic reviews in overviews [abstract OS29.1], in Cochrane Colloquium 2019. Santiago; 2019. https://colloquium2019.cochrane.org/abstracts/graphical-representation-overlap-degree-primary-studies-systematic-reviews-included

  33. Bougioukas KI, Vounzoulaki E, Mantsiou CD, Savvides ED, Karakosta C, Diakonidis T, et al. Methods for depicting overlap in overviews of systematic reviews: An introduction to static tabular and graphical displays. J Clin Epidemiol. 2021;132:34–45 https://0-doi-org.brum.beds.ac.uk/10.1016/j.jclinepi.2020.12.004. Epub 2020 Dec 9.


  34. Mathes T, Klaßen P, Pieper D. Frequency of data extraction errors and methods to increase data extraction quality: a methodological review. BMC Med Res Methodol. 2017;17(1):152 https://0-doi-org.brum.beds.ac.uk/10.1186/s12874-017-0431-4.


  35. Puljak L, Riva N, Parmelli E, González-Lorenzo M, Moja L, Pieper D. Data extraction methods: an analysis of internal reporting discrepancies in single manuscripts and practical advice. J Clin Epidemiol. 2020;117:158–64 https://0-doi-org.brum.beds.ac.uk/10.1016/j.jclinepi.2019.09.003.


  36. Reynders RM, Ladu L, Di Girolamo N. Contacting of authors modified crucial outcomes of systematic reviews but was poorly reported, not systematic, and produced conflicting results. J Clin Epidemiol. 2019;115:64–76 https://0-doi-org.brum.beds.ac.uk/10.1016/j.jclinepi.2019.07.001.


  37. Voss PH, Rehfuess EA. Quality appraisal in systematic reviews of public health interventions: an empirical study on the impact of choice of tool on meta-analysis. J Epidemiol Community Health. 2013;67(1):98–104 https://0-doi-org.brum.beds.ac.uk/10.1136/jech-2011-200940.


  38. Puljak L, Ramic I, Arriola Naharro C, Brezova J, Lin YC, Surdila AA, et al. Cochrane risk of bias tool was used inadequately in the majority of non-Cochrane systematic reviews. J Clin Epidemiol. 2020;123:114–9 https://0-doi-org.brum.beds.ac.uk/10.1016/j.jclinepi.2020.03.019.


  39. Moja L, Fernandez del Rio MP, Banzi R, Cusi C, D’Amico R, Liberati A, et al. Multiple systematic reviews: methods for assessing discordances of results. Intern Emerg Med. 2012;7(6):563–8 https://0-doi-org.brum.beds.ac.uk/10.1007/s11739-012-0846-1.


  40. Jadad AR, Cook DJ, Browman GP. A guide to interpreting discordant systematic reviews. Cmaj. 1997;156(10):1411–6.


  41. Bolland MJ, Grey A, Reid IR. Differences in overlapping meta-analyses of vitamin D supplements and falls. J Clin Endocrinol Metab. 2014;99(11):4265–72.


  42. Tang LL, Caudy M, Taxman F. A statistical method for synthesizing meta-analyses. Comput Math Methods Med. 2013;2013:732989.


  43. Shea BJ, Grimshaw JM, Wells GA, Boers M, Andersson N, Hamel C, et al. Development of AMSTAR: a measurement tool to assess the methodological quality of systematic reviews. BMC Med Res Methodol. 2007;7(1):10 https://0-doi-org.brum.beds.ac.uk/10.1186/1471-2288-7-10.


  44. Effing TW, Vercoulen JH, Bourbeau J, Trappenburg J, Lenferink A, Cafarella P, et al. Definition of a COPD self-management intervention: international expert group consensus. Eur Respir J. 2016;48(1):46–54 https://0-doi-org.brum.beds.ac.uk/10.1183/13993003.00025-2016.


  45. Kung J, Chiappelli F, Cajulis OO, Avezova R, Kossan G, Chew L, et al. From systematic reviews to clinical recommendations for evidence-based health care: validation of revised assessment of multiple systematic reviews (R-AMSTAR) for grading of clinical relevance. Open Dent J. 2010;4:84–91 https://0-doi-org.brum.beds.ac.uk/10.2174/1874210601004020084.


  46. Lunny C, Brennan SE, Reid J, McDonald S, McKenzie JE. Overviews of reviews incompletely report methods for handling overlapping, discordant, and problematic data. J Clin Epidemiol. 2020;118:69–85 https://0-doi-org.brum.beds.ac.uk/10.1016/j.jclinepi.2019.09.025.


  47. Pizzarello L, Abiose A, Ffytche T, Duerksen R, Thulasiraj R, Taylor H, et al. VISION 2020: the right to sight: a global initiative to eliminate avoidable blindness. Arch Ophthalmol. 2004;122(4):615–20 https://0-doi-org.brum.beds.ac.uk/10.1001/archopht.122.4.615.


  48. Whiting P, Savović J, Higgins JP, Caldwell DM, Reeves BC, Shea B, et al. ROBIS: a new tool to assess risk of bias in systematic reviews was developed. J Clin Epidemiol. 2016;69:225–34 https://0-doi-org.brum.beds.ac.uk/10.1016/j.jclinepi.2015.06.005.


  49. Guyatt GH, Oxman AD, Vist GE, Kunz R, Falck-Ytter Y, Alonso-Coello P, et al. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ. 2008;336(7650):924–6 https://0-doi-org.brum.beds.ac.uk/10.1136/bmj.39489.470347.AD.


  50. NHMRC. Guideline for guidelines: Assessing certainty of the evidence: How valid and reliable are your conclusions? 2019, National Health and Medical Research Council, Australia Government. 2019. https://www.nhmrc.gov.au/guidelinesforguidelines/develop/assessing-certainty-evidence#_ENREF_8.


  51. Database of Abstracts of Reviews of Effects (DARE). Quality-assessed Reviews. 1995. Available from: https://0-www-ncbi-nlm-nih-gov.brum.beds.ac.uk/books/NBK285222/.


  52. Fleming PS, Seehra J, Polychronopoulou A, Fedorowicz Z, Pandis N. Cochrane and non-Cochrane systematic reviews in leading orthodontic journals: a quality paradigm? Eur J Orthod. 2013;35(2):244–8 https://0-doi-org.brum.beds.ac.uk/10.1093/ejo/cjs016.


  53. Lundh A, Knijnenburg SL, Jørgensen AW, van Dalen EC, Kremer LCM. Quality of systematic reviews in pediatric oncology--a systematic review. Cancer Treat Rev. 2009;35(8):645–52 https://0-doi-org.brum.beds.ac.uk/10.1016/j.ctrv.2009.08.010.


  54. Moseley AM, Elkins MR, Herbert RD, Maher CG, Sherrington C. Cochrane reviews used more rigorous methods than non-Cochrane reviews: survey of systematic reviews in physiotherapy. J Clin Epidemiol. 2009;62(10):1021–30 https://0-doi-org.brum.beds.ac.uk/10.1016/j.jclinepi.2008.09.018.


  55. Windsor B, Popovich I, Jordan V, Showell M, Shea B, Farquhar C. Methodological quality of systematic reviews in subfertility: a comparison of Cochrane and non-Cochrane systematic reviews in assisted reproductive technologies. Hum Reprod. 2012;27(12):3460–6 https://0-doi-org.brum.beds.ac.uk/10.1093/humrep/des342.



Acknowledgements

Not applicable.

Funding

There was no funding to conduct this study.

Author information


Contributions

CL conceived of the study, conducted the search and all other methods and analysis, and presented one case study. DP and SK each presented two case studies, and PT presented one case study. CL wrote the first draft of the manuscript. All authors revised it critically for important intellectual content, and read and approved the final manuscript.

Corresponding author

Correspondence to Dawid Pieper.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Lunny, C., Pieper, D., Thabet, P. et al. Managing overlap of primary study results across systematic reviews: practical considerations for authors of overviews of reviews. BMC Med Res Methodol 21, 140 (2021). https://0-doi-org.brum.beds.ac.uk/10.1186/s12874-021-01269-y

