
Using the contribution matrix to evaluate complex study limitations in a network meta-analysis: a case study of bipolar maintenance pharmacotherapy review

Abstract

Background

Limitations in the primary studies constitute one important factor to be considered in the grading of recommendations assessment, development, and evaluation (GRADE) system of rating quality of evidence. However, in a network meta-analysis (NMA), such evaluation poses a special challenge, because each network estimate receives different amounts of contribution from various studies via direct as well as indirect routes, and because some biases have directions whose repercussions in the network can be complex.

Findings

In this report we use the NMA of maintenance pharmacotherapy of bipolar disorder (17 interventions, 33 studies) and demonstrate how to quantitatively evaluate the impact of study limitations using netweight, a Stata command for NMA. For each network estimate, the percentages of contributions from direct comparisons at high, moderate and low risk of bias were quantified. This method has proven flexible enough to accommodate complex biases with direction, such as the one due to the enrichment design seen in some trials of bipolar maintenance pharmacotherapy.

Conclusions

Using netweight, therefore, we can evaluate in a transparent and quantitative manner how the limitations of individual studies in the NMA affect the quality of evidence of each network estimate, even when such limitations have clear directions.

Background

The number of network meta-analyses (NMA) has been increasing rapidly in recent years [1], and concomitantly the methodology for NMA is also quickly developing and expanding. One of the most important topics around NMA currently is how we should assess the quality of evidence provided by NMA. Two papers have been published recently that attempt to apply the grading of recommendations assessment, development, and evaluation (GRADE) system of rating quality of evidence to NMA [2, 3].

According to GRADE, various components affect the quality of findings from systematic reviews. Limitations in the primary studies constitute one important factor that can influence the quality of the pooled estimates. In traditional pairwise meta-analyses, the evaluation of the study limitations of the included studies is fairly straightforward, because one can visualise each study's risks of bias in a table and then evaluate their contributions to the pairwise meta-analytic results directly. By contrast, NMA poses a special challenge in this assessment, because different NMA estimates receive different amounts of contribution from all the studies in the network via direct as well as indirect routes, and these contributions are not readily apparent.

The method proposed by Puhan et al. [2] rates the quality of evidence separately for direct and indirect estimates, and each rating is more impressionistic than quantitative. Moreover, when the network has many nodes and is more complex than triangular, they recommend focusing on the so-called first-order loop (i.e. the triangular loop) for examination of the indirect estimates and suggest using the higher of the two ratings as the rating of the network estimate. In other words, this method fails to take into account the remaining contributions. The authors therefore call for research into how to use the weights of individual studies in evaluating the quality of NMA estimates [2]. The method proposed by Salanti et al. [3] uses weights more extensively and makes more quantitative evaluations of all the involved evidence. We applied this method in a previous NMA on maintenance pharmacotherapy of bipolar disorder [4], while paying due attention to the amount of contribution from each individual study.

The problem of “enrichment design” in bipolar maintenance pharmacotherapy studies

The appraisal of the impact of study limitations in the NMA of the maintenance pharmacotherapy of bipolar disorder presents an additional interesting feature that renders this assessment even more challenging.

Bipolar disorder is a psychiatric disease in which patients typically show recurrent manic and depressive episodes. While acute treatment is aimed at treating the acutely manic or depressive symptoms, long-term maintenance treatment is usually necessary to minimise the risk of recurrence of both manic and depressive episodes. Bipolar patients recruited into maintenance or prophylactic studies are usually in a euthymic phase, without acute symptoms. In some of these clinical trials, however, only the participants who had achieved remission of the index acute manic or depressive episode on treatment with a certain drug were included in the maintenance phase of the trial, and were then randomised to continue the same drug or to switch to another active drug (or placebo). Such a study design is called 'enrichment design', as the sample is 'enriched' with patients whose acute manic or depressive episode had responded to the drug used in the acute phase.

This study design has many limitations [5]. In particular, its results will tend to favour the drug that was effective in the acute phase mainly in the prevention of future episodes of the same polarity as the index episode and not necessarily in the prevention of episodes of the opposite polarity. The risk of bias due to the enrichment design therefore has a direction. For example, if a study included only those who had remitted from a manic episode on drug X and randomised them to continue on drug X or to switch to drug Y in order to compare these interventions’ efficacy in preventing a new manic or depressive episode, it is easy to foresee that such patients’ future manic episodes would be relatively responsive to drug X but possibly not their depressive episodes. On the other hand, drug Y is clearly not favoured in any direction as the patients had been originally selected as responders to drug X.

In the present article we use a published NMA as a working example and present a transparent and systematic method to assess how the study limitations of individual randomised controlled trials (RCTs), including those due to the enrichment design, affect the quality of evidence in the NMA. In NMA, it is almost certain that confidence in estimates will vary from comparison to comparison. We therefore sought to appraise the quality of evidence for each comparison contained in the network. In the following we illustrate how study limitations without direction (i.e. risks of bias usually assessed according to the Cochrane Handbook) and then those with direction (i.e. the risk of bias due to the enrichment design) can be quantitatively summarised and evaluated to characterise each network estimate.

Methods

Materials

The NMA in question is a systematic review of randomised controlled trials that compared active treatments for bipolar disorder (or placebo), either as monotherapy or as add-on treatment, for at least 12 weeks [4]. The primary outcome was the number of participants with recurrence of any mood episode; this primary outcome was a combination of two secondary outcomes, namely the number of participants with recurrence of a manic episode and the number with recurrence of a depressive episode. All in all we identified and included 33 randomised controlled trials that examined 17 maintenance pharmacotherapies for bipolar disorder in 6846 participants. Figure 1 shows the network formed by the identified comparisons in this NMA. We conducted a random-effects network meta-analysis within a Bayesian framework using Markov chain Monte Carlo in OpenBUGS 3.2.2 [6].

Fig. 1

Network of eligible comparisons in the multiple-treatment meta-analysis for any mood episode relapse. Each node (circle) corresponds to a drug included in the analyses, with the size proportional to the number of participants assigned to that drug. Each line represents different comparisons between drugs, with the width of the line proportional to the number of trials comparing each pair of treatments. ARP aripiprazole, CBZ carbamazepine, FLX fluoxetine, IMP imipramine, LIT lithium, LTG lamotrigine, OLZ olanzapine, OXC oxcarbazepine, PAL paliperidone, PLB placebo, QTP quetiapine, RisLAI risperidone long-acting injection, VPA valproate

Assessment of risk of bias of each study and of each direct comparison

Two assessors rated the risk of bias (RoB) of each RCT according to the Cochrane Handbook risk of bias tool [7]. The RoB examines the key methodological issues in a randomised trial, such as generation of random sequence, concealment of allocation, blinding of participants, blinding of therapists, blinding of outcome assessment, incomplete outcome data, and selective outcome reporting. We also assessed whether the definitions of the mood episode relapse or recurrence were explicit/operationalised or not in the primary studies, and the sponsorship bias. We rated an item at unclear risk of bias when we did not find sufficient information to judge it at either high or low risk.

Then we made a summary evaluation of RoB for each included study according to the following categories:

  • Low risk of bias: there is no item rated at high risk among the nine items listed above.

  • Moderate risk of bias: there is one item rated at high risk.

  • High risk of bias: there are two or more items rated at high risk.

We examined the validity of this classification by pooling and comparing the RRs of studies rated as being at low, moderate or high risk of bias within a comparison, when that comparison included a sufficient number of trials to enable such validation.

After making a summary evaluation of RoB for each study, we made a similar evaluation of RoB for each direct comparison. When studies rated at different risks of bias were pooled, we made a summary evaluation by taking into account the weight that each study is given in pooling the studies into one direct comparison estimate as follows:

  • Low risk of bias: all the included studies were rated as low risk of bias.

  • Moderate risk of bias: all the studies were rated as moderate or low risk of bias; or there was one study rated as high risk of bias but this study contributed less than one quarter of the pooled sample.

  • High risk of bias: there were two or more studies rated at high risk of bias; or one study at high risk of bias contributed a quarter or more of the pooled sample.

The above method of summarising RoBs of various domains into RoB of a study and then summarising study RoBs into RoB of a comparison is admittedly to a certain extent arbitrary. However, it must be noted that we can use the same logic and calculations, as we demonstrate below, to synthesise these characteristics at the level of each pairwise comparison into those at the level of each network estimate. In the following we shall therefore use the definitions above to illustrate our method.
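The summary rules above can be expressed compactly in code. The following Python sketch implements the two bullet-point lists literally (the function names and the tuple representation of studies are our own illustrative choices, not part of the published method):

```python
def study_rob(n_high_risk_items):
    """Summarise a study's RoB from the number of items rated at high risk."""
    if n_high_risk_items == 0:
        return "low"
    if n_high_risk_items == 1:
        return "moderate"
    return "high"


def comparison_rob(studies):
    """Summarise RoB for a direct comparison.

    studies: list of (summary_rating, sample_size) tuples, one per study.
    """
    total_n = sum(n for _, n in studies)
    high = [(r, n) for r, n in studies if r == "high"]
    if not high and all(r == "low" for r, _ in studies):
        return "low"
    if not high:
        return "moderate"  # mixture of low and moderate studies
    if len(high) == 1 and high[0][1] < total_n / 4:
        return "moderate"  # one high-RoB study contributing < 1/4 of the sample
    return "high"
```

For example, a comparison pooling one low-RoB study of 300 participants with one high-RoB study of 50 participants would be rated at moderate risk of bias, because the high-RoB study contributes less than a quarter of the pooled sample.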

Assessment of ‘enrichment design’ for each study and for each direct comparison

We also evaluated whether each study used the enrichment design in relation to the polarity of the mood episode. The influence of the enrichment design was assessed separately for the two secondary outcomes: prevention of depressive episodes and prevention of manic episodes. Participants were considered to be enriched for a certain drug for depressive episode relapse (depressive enrichment) when they had been recruited during an acute depressive episode and investigated for depressive episode relapse after being stabilised by that drug. Likewise, participants were considered to be enriched for a drug for manic episode relapse (manic enrichment) when they had been recruited during an acute manic episode and investigated for manic episode relapse after being stabilised by that drug.

We first calculated the percentages of both depressive and manic enrichment for each study according to the number of participants in acute depressive or manic episode at recruitment, and then we estimated the corresponding percentages for each direct comparison consisting of one or more studies with consideration of the direction of enrichment for each study. For example, if a direct comparison A vs B consisted of two studies, one of which (n = 100) did not use the enrichment design but the other (n = 200) recruited patients at their depressive episodes and treated them with drug A, then this direct comparison would have 67 % (200/300) of participants enriched for depressive relapse in favour of drug A, 33 % not enriched for depressive relapse and 100 % not enriched for manic relapse.
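The worked example above reduces to a sample-size-weighted tally. A minimal Python sketch, using the same hypothetical two-study comparison (the label scheme `"dep_A"`/`None` is our own shorthand for "depressively enriched in favour of A" and "not enriched"):

```python
def enrichment_percentages(studies):
    """Percentage of participants per enrichment label for one direct comparison.

    studies: list of (sample_size, label) pairs; label is None for a
    non-enriched study.
    """
    total = sum(n for n, _ in studies)
    pct = {}
    for n, label in studies:
        key = label if label is not None else "none"
        pct[key] = pct.get(key, 0.0) + 100.0 * n / total
    return pct


# One non-enriched study (n = 100) and one study (n = 200) that recruited
# patients in a depressive episode and stabilised them on drug A:
pct = enrichment_percentages([(100, None), (200, "dep_A")])
# pct["dep_A"] is about 66.7 (enriched for depressive relapse in favour of A)
# pct["none"] is about 33.3 (not enriched)
```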

Using the contribution matrix to quantify the influence of RoB and of enrichment design in each network estimate

We used a recently developed tool for NMA, called the contribution matrix, that quantifies how much each direct comparison in the network contributes to each network estimate in the NMA [8, 9].

Consider a simple, triangular network ABC. We first calculate the direct estimates comparing A vs B, A vs C and B vs C by pooling the trials for each of these comparisons; we denote them D_AB, D_AC and D_BC. In the NMA of the full triangle, the mixed or network estimate comparing A vs B combines the direct comparison D_AB with the indirect comparison I_AB, which is constructed from D_AC and D_BC via C (I_AB = D_AC − D_BC). In the simple situation in which each of the direct estimates has the same variance, the network estimate is N_AB = (2 D_AB + (D_AC − D_BC))/3. Thus, to the mixed (also called network) estimate N_AB, the three direct estimates D_AB, D_AC and D_BC make contributions of 50, 25 and 25 %, respectively.
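The 50/25/25 split can be reproduced from first principles. In the aggregate-level weighted-least-squares formulation of NMA, each network estimate is a linear combination of the direct estimates, and the percentage contributions are the normalised absolute coefficients of that combination. The NumPy sketch below illustrates the logic for the equal-variance triangle (this is an illustration of the principle, not the netweight implementation itself):

```python
import numpy as np

v = 1.0                          # common variance of each direct estimate
X = np.array([[1.0, 0.0],        # D_AB estimates mu_AB
              [0.0, 1.0],        # D_AC estimates mu_AC
              [-1.0, 1.0]])      # D_BC estimates mu_AC - mu_AB
W = np.eye(3) / v                # inverse-variance weights

# Coefficients of N_AB as a linear combination of (D_AB, D_AC, D_BC):
c = np.array([1.0, 0.0])         # N_AB = c' * beta_hat
coef = c @ np.linalg.inv(X.T @ W @ X) @ X.T @ W   # -> [2/3, 1/3, -1/3]

# Percentage contributions: normalised absolute coefficients
contrib = 100 * np.abs(coef) / np.abs(coef).sum()
print(np.round(contrib, 1))      # [50. 25. 25.]
```

The same calculation with unequal variances, or with a larger design matrix, yields the unequal and less obvious contributions discussed next.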

When the network structure is complex and the variances are not equal, calculating the contribution of each direct estimate to each network estimate in the NMA is more complicated. In general, more weight is given to direct comparisons with greater precision and to those that are more central to the network and thus contribute to more indirect comparisons. Using the netweight command in Stata [10], we calculated the contribution matrix showing the contributions from each direct comparison to the network comparisons. The weight that each direct comparison contributes to the network estimates is determined by both the variance of the direct comparison and the network structure: a comparison with much direct information not only contributes much to the network estimate of that comparison but is also more influential on its neighbouring comparisons than on remotely placed ones, whereas a comparison for which little direct evidence exists benefits most from the rest of the network. Using netweight (see Note 1), the percentage contribution of each direct comparison to each network estimate is summarised in a matrix with rows representing network estimates and columns representing the available direct comparisons in the network.

In order to characterize the RoB of each network estimate, we multiplied the contributions from direct comparisons at low, moderate or high risk of bias, respectively, by the contribution percentage that each direct estimate is making to the network estimate. This calculation provided the percentage of contributions from direct estimates rated at low, moderate or high risk of bias, respectively, to each network estimate.
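This step is a simple weighted tally over the relevant row of the contribution matrix. A Python sketch with hypothetical numbers (the comparison names, contribution percentages and ratings below are invented for illustration, not taken from the review):

```python
# Hypothetical row of a contribution matrix: percentage contributions of
# three direct comparisons to one network estimate.
contributions = {"A vs B": 50.0, "A vs C": 25.0, "B vs C": 25.0}

# Hypothetical summary RoB rating of each direct comparison.
rob = {"A vs B": "high", "A vs C": "moderate", "B vs C": "low"}

# Sum the contribution percentages within each RoB class.
by_class = {"low": 0.0, "moderate": 0.0, "high": 0.0}
for comparison, pct in contributions.items():
    by_class[rob[comparison]] += pct

# by_class -> {'low': 25.0, 'moderate': 25.0, 'high': 50.0}
```

Repeating this for every row of the contribution matrix yields a table of the kind shown in Table 3.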

In order to quantify the contribution from enrichment design to each network estimate, we multiplied the percentage of enrichment for each direct comparison by the contribution percentage that each direct estimate is making to the network estimate. For a particular network estimate of A vs B, this calculation provided the percentage of contributions from enriched studies favouring A, those favouring B, those dis-favouring A (i.e. favouring another drug C over A), those dis-favouring B, and those that involve neither A nor B (enrichment of unknown direction). The remaining came from non-enriched studies. We summed up the percentage of contributions from studies favouring A and those dis-favouring B as the percentage of enrichment favouring A. In the same manner, the percentage of enrichment favouring B was calculated by summing up the percentage of contributions from studies favouring B and those dis-favouring A.
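The directional bookkeeping described above amounts to two sums over the contribution percentages. A sketch with hypothetical percentages (the numbers are invented; only the favouring/dis-favouring logic follows the text):

```python
# Hypothetical percentage contributions to a network estimate of A vs B,
# broken down by direction of enrichment.
contrib = {
    "fav_A": 8.0,      # enriched studies favouring A
    "fav_B": 6.0,      # enriched studies favouring B
    "disfav_A": 2.5,   # enriched studies favouring some drug C over A
    "disfav_B": 4.1,   # enriched studies favouring some drug C over B
    "unknown": 0.1,    # enrichment involving neither A nor B
}

# Enrichment favouring A = studies favouring A + studies dis-favouring B,
# and symmetrically for B.
fav_A = contrib["fav_A"] + contrib["disfav_B"]   # 12.1 %
fav_B = contrib["fav_B"] + contrib["disfav_A"]   # 8.5 %
non_enriched = 100.0 - sum(contrib.values())     # remainder, 79.3 %
```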

Results

RoB of network estimates

Table 1 lists the RoB for each individual study and the summary assessment of RoB for each direct comparison, following the general rules described in the methods. Placebo vs lithium was the only comparison with a sufficient number of trials at high, moderate or low risk of bias to compare the effect estimates for the same underlying true effect. Pooled estimates of lithium over placebo in the prevention of any mood episode for studies assessed as being at high, moderate and low risk of bias were 0.58 (95 % CI 0.47–0.71), 0.60 (0.52–0.69) and 0.80 (0.54–1.19), respectively, in the theoretically expected ascending order, thus supporting our assessment of RoB.

Table 1 Risk of bias assessments for each individual study and for each direct comparison against placebo

Table 2 represents the contribution matrix of each direct comparison to the network estimates. Summing the percentage contributions from direct estimates (Table 2) at low, moderate or high RoB according to Table 1, we obtain Table 3, which shows the percentage of contributions from direct comparisons at high, moderate or low risk of bias to each network estimate.

Table 2 Contribution matrix for any mood episode relapse (the complete contribution matrix is shown on pp. 84–85 of the Appendix in Miura et al. [4])
Table 3 Contribution of risks of bias of direct estimates to network estimates

For example, 0.2, 22.5 and 77.6 % of the contributions to the network estimate for placebo vs lithium in preventing any mood episode come from direct comparisons at low, moderate and high risk of bias, respectively. Figure 2 graphically presents the respective contributions for the major comparisons in the network.

Fig. 2

Contributions from studies at high, moderate or low risk of bias to RR to prevent any mood episodes. ARP aripiprazole, CBZ carbamazepine, FLX fluoxetine, IMP imipramine, LIT lithium, LTG lamotrigine, OLZ olanzapine, OXC oxcarbazepine, PAL paliperidone, PLB placebo, QTP quetiapine, RisLAI risperidone long-acting injection, VPA valproate (Figure adapted from p. 98 of the Appendix in Miura et al. [4])

Thus the network estimate of efficacy of lithium over placebo to prevent any mood episode was based nearly 80 % on studies at high risk of bias and nearly 20 % on studies at moderate risk of bias. This estimate would then be considered quite likely to be biased, either in the direction of under- or over-estimation.

Contribution of the enrichment design to network estimates

Table 4 shows the percentage of enriched participants for each direct comparison.

Table 4 Percentage of enriched participants for each direct comparison

Multiplying Table 4 by the contribution matrix for depressive episode relapse and that for manic episode relapse (Table 2), we obtain Table 5, which shows the percentage of contributions of the enrichment design to the network estimates. For example, the NMA estimate of the efficacy of placebo versus lithium in preventing depressive episode relapses receives 12.1 % of its contributions from studies favouring lithium, 10.5 % from studies favouring placebo, 0.1 % from studies with an enrichment design whose direction could not be determined, and the remaining 77.3 % from non-enriched studies.

Table 5 Contributions from studies with enrichment design to mixed and indirect estimates

Figure 3 graphically shows the percentages of contributions of enriched vs non-enriched studies to the effect estimates of the main comparisons against placebo in the network.

Fig. 3

Contributions from enriched vs non-enriched studies to RR to prevent depressive episodes. ARP aripiprazole, CBZ carbamazepine, FLX fluoxetine, IMP imipramine, LIT lithium, LTG lamotrigine, OLZ olanzapine, OXC oxcarbazepine, PAL paliperidone, PLB placebo, QTP quetiapine, RisLAI risperidone long-acting injection, VPA valproate (Figure adapted from p. 90 of the Appendix in Miura et al. [4])

Thus, the network estimate of the efficacy of lithium vs placebo to prevent a depressive episode received a small contribution from studies enriched in favour of lithium, and a similarly small contribution from studies enriched in favour of placebo, but the bulk of the evidence came from non-enriched studies. By contrast, the network estimates of fluoxetine or lamotrigine in the prevention of depressive episodes received nearly half or more of their contributions from studies enriched in favour of the active drugs: it is quite possible that the network estimates for these drugs are overestimated.

Discussion

We have demonstrated how to appraise the impact of study limitations of included studies on each estimate obtained in the NMA according to the GRADE system in a transparent and quantitative manner, first in the case of standard risks of bias as assessed with the Cochrane method and then also in the case of study limitations which have clear directions and have complex repercussion in the network.

The GRADE framework was developed to provide a common, sensible method for assessing the quality of evidence and the strength of recommendations, and it has been successfully applied to conventional pairwise meta-analyses and clinical guidelines. However, it is difficult to apply GRADE to NMAs, mainly because of their complexity: in an NMA, the risk of bias of mixed or network estimates is hard to evaluate, especially in a large network, because mixed estimates are calculated from both direct and indirect estimates with different contributions.

With netweight, a command for NMA in Stata [8], we can obtain the contribution matrix showing the contributions from each direct comparison to the network estimates even in a large network like our example, and then we can quantitatively calculate the composition of each level of risk of bias in the network estimates. We have demonstrated and exemplified that this method, first presented by Salanti et al. [3], is flexible enough to accommodate other sources of bias, including even those which have directions, such as the enrichment design in our case.

Admittedly, assessments of RoB and GRADE contain strong elements of judgment. Our endeavours quantify these judgments in a reasonable and explicit way and represent an important advance in making them more transparent to consumers of evidence (patients, clinicians and policy makers). However, we must remember that in essence they are attempts at quantifying statements that are partly qualitative.

One important consideration when downgrading for study limitations is whether studies at high risk of bias actually give materially different results from those at low risk of bias. If the disagreement is substantial, researchers might choose to base their conclusions only on studies at low risk of bias. When both sources of evidence are in agreement, some reviewers might be reluctant to downgrade for study limitations. When the disagreement is not substantial and yet not negligible, as is the case in our example, appropriate statistical methodology should be applied to quantify the potential impact of the studies at high risk of bias. To examine whether studies rated at high RoB do in fact differ in effect estimates from those rated at low RoB, one solution might be to run a subgroup NMA (or meta-regression) comparing the results of studies at high RoB with those at low RoB. Others may argue, however, that when comparing two scenarios, one in which all studies are of high quality and provide similar results and another in which half are of high quality and half of low quality yet both provide similar results, the rating for the resultant meta-analytic results should nonetheless be higher in the former case than in the latter. In practice, few network meta-analyses would have enough power to detect material differences between studies at high and low risk of bias, so the question of the comparability of results between low- and high-RoB studies has to be answered by large-scale empirical studies [11, 12]. These studies have provided evidence that some risk of bias components might be important when the outcome is not mortality.

Netweight can calculate contributions of each direct comparison to the entire network, and therefore the ranking of treatments. The present paper focused on the evaluation of the confidence placed on pairwise treatment effects estimated via NMA rather than treatment ranking. Although the reporting of treatments’ ranking has become increasingly popular and can be clinically useful, it is only an auxiliary output and researchers are warned against consideration of the ranking in isolation from the effect sizes. We therefore think that it is clinically more meaningful and important to evaluate the pairwise effect sizes rather than globally assess the quality of the network evidence as a whole.

In future attempts to apply the GRADE system to NMAs, a systematic and quantitative approach to evaluating how study limitations of individual studies contribute to each network estimate is recommended and should also be endorsed by scientific journals across the field of evidence synthesis.

Notes

  1. The Stata command takes the form
     netweight effect_size SE_of_effect_size treatment1 treatment2
     where each row in the dataset represents the effect size and its standard error for a study comparing treatment1 and treatment2. For more details, please see [8] and [10].

References

  1. Cipriani A, Higgins JP, Geddes JR, Salanti G. Conceptual and technical challenges in network meta-analysis. Ann Intern Med. 2013;159(2):130–7.


  2. Puhan MA, Schunemann HJ, Murad MH, Li T, Brignardello-Petersen R, Singh JA, Kessels AG, Guyatt GH, Group GW. A GRADE Working Group approach for rating the quality of treatment effect estimates from network meta-analysis. BMJ. 2014;349:g5630.


  3. Salanti G, Del Giovane C, Chaimani A, Caldwell DM, Higgins JP. Evaluating the quality of evidence from a network meta-analysis. PLoS ONE. 2014;9(7):e99682.


  4. Miura T, Noma H, Furukawa TA, Mitsuyasu H, Tanaka S, Stockton S, Salanti G, Motomura K, Shimano-Katsuki S, Leucht S, et al. Comparative efficacy and tolerability of pharmacological treatments in the maintenance treatment of bipolar disorder: a network meta-analysis. Lancet Psychiatry. 2014;1(5):351–9.


  5. Cipriani A, Barbui C, Rendell J, Geddes JR. Clinical and regulatory implications of active run-in phases in long-term studies for bipolar disorder. Acta Psychiatr Scand. 2014;129:328–42.


  6. Lunn D, Spiegelhalter D, Thomas A, Best N. The BUGS project: evolution, critique and future directions. Stat Med. 2009;28(25):3049–67.


  7. Higgins JP, Altman DG, Gotzsche PC, Juni P, Moher D, Oxman AD, Savovic J, Schulz KF, Weeks L, Sterne JA. The Cochrane collaboration’s tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928.


  8. Chaimani A, Higgins JP, Mavridis D, Spyridonos P, Salanti G. Graphical tools for network meta-analysis in STATA. PLoS ONE. 2013;8(10):e76654.


  9. Krahn U, Binder H, Konig J. A graphical tool for locating inconsistency in network meta-analyses. BMC Med Res Methodol. 2013;13:35.


  10. Chaimani A, Salanti G. Visualizing assumptions and results in network meta-analysis: the network graphs package. Stata J. 2015;15(4):905–50.


  11. Wood L, Egger M, Gluud LL, Schulz KF, Juni P, Altman DG, Gluud C, Martin RM, Wood AJ, Sterne JA. Empirical evidence of bias in treatment effect estimates in controlled trials with different interventions and outcomes: meta-epidemiological study. BMJ. 2008;336(7644):601–5.


  12. Chaimani A, Vasiliadis HS, Pandis N, Schmid CH, Welton NJ, Salanti G. Effects of study precision and risk of bias in networks of interventions: a network meta-epidemiological study. Int J Epidemiol. 2013;42(4):1120–31.



Authors’ contributions

TAF and TM conceived the study. TM, HM and SK extracted the data. TAF, TM, AC, HN and GS analysed the data. TAF, TM, SL, AC and GS interpreted the data. TAF wrote the first draft of the manuscript, and all the authors revised it critically for important intellectual content. All authors read and approved the final manuscript.

Acknowledgements

This study was supported in part by Grant-in-Aid for Challenging Exploratory Research from the Ministry of Education, Culture, Sports, Science and Technology (MEXT) and Japan Society for the Promotion of Science (JSPS) to TAF and HN (No. 26670314). AC is supported by the NIHR Oxford cognitive health Clinical Research Facility. The funder had no role in the study design, the collection, analysis and interpretation of data, the writing of the report, or the decision to submit the article for publication.

Competing interests

TAF has received lecture fees from Eli Lilly, Meiji, Mochida, MSD, Pfizer and Tanabe-Mitsubishi, and consultancy fees from Sekisui and Takeda Science Foundation. He has received royalties from Igaku-Shoin, Seiwa-Shoten and Nihon Bunka Kagaku-sha. TM has received honoraria for lecture from GlaxoSmithKline, Astellas, Eli Lilly Japan, Meiji Seika Pharma, Otsuka, Pfizer, Dainippon Sumitomo, Shionogi, Taisho Toyama and Mochida. He has received royalties from the Japan Council for Quality Health Care, Medical Review, and Medical Sciences International. SL has received honoraria for lectures from Abbvie, Astra Zeneca, BristolMyersSquibb, ICON, Eli Lilly, Janssen, Johnson & Johnson, Roche, SanofiAventis, Lundbeck and Pfizer; for consulting/advisory boards from Roche, EliLilly, Medavante, BristolMyersSquibb, Alkermes, Janssen, Johnson & Johnson and Lundbeck. Eli Lilly has provided medication for a study with SL as primary investigator. HN has received a lecture fee from Boehringer Ingelheim. HM has received honoraria from Mitsubishi Tanabe, Meiji Seika Pharma, GlaxoSmithKline, Pfizer, MSD, Astellas, Otsuka and Dainippon Sumitomo. SK has received honoraria from Pfizer, Janssen, GlaxoSmithKline, Eli Lilly Japan, Eisai, Meiji Seika Pharma, Taisho Toyama, Astellas, Ono, Mochida, Otsuka, Abott Japan, Shionogi, Dainippon Sumitomo, Nippon-Chemifa, Yoshitomiyakuhin, and MSD; and grant/research supports from Pfizer, Ono, GlaxoSmithKline, Astellas, Janssen, Yoshitomiyakuhin, Eli Lilly Japan, Otsuka, Mochida, Daiichi-Sankyo, Dainippon Sumitomo, Meiji Seika Pharma, Shionogi, and Eisai. All the other authors declare that they have no conflicts of interest.

Author information


Corresponding author

Correspondence to Toshi A. Furukawa.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Furukawa, T.A., Miura, T., Chaimani, A. et al. Using the contribution matrix to evaluate complex study limitations in a network meta-analysis: a case study of bipolar maintenance pharmacotherapy review. BMC Res Notes 9, 218 (2016). https://0-doi-org.brum.beds.ac.uk/10.1186/s13104-016-2019-1
