
Confounding in observational studies based on large health care databases: problems and potential solutions – a primer for the clinician


Received 10 December 2016

Accepted for publication 15 February 2017

Published 28 March 2017, Volume 2017:9, Pages 185–193

DOI https://doi.org/10.2147/CLEP.S129879


Editor who approved publication: Professor Irene Petersen



Mette Nørgaard,1 Vera Ehrenstein,1 Jan P Vandenbroucke1–3

1Department of Clinical Epidemiology, Aarhus University Hospital, Aarhus, Denmark; 2Department of Clinical Epidemiology, Leiden University Medical Center, Leiden, The Netherlands; 3Department of Epidemiology and Population Health, London School of Hygiene and Tropical Medicine, London, United Kingdom

Abstract: Population-based health care databases are a valuable tool for observational studies as they reflect daily medical practice for large and representative populations. A constant challenge in observational designs is, however, to rule out confounding, and the value of these databases for a given study question accordingly depends on completeness and validity of the information on confounding factors. In this article, we describe the types of potential confounding factors typically lacking in large health care databases and suggest strategies for confounding control when data on important confounders are unavailable. Using Danish health care databases as examples, we present the use of proxy measures for important confounders and the use of external adjustment. We also briefly discuss the potential value of active comparators, high-dimensional propensity scores, self-controlled designs, pseudorandomization, and the use of positive or negative controls.

Keywords: observational studies, health care databases, confounding

Introduction

Observational studies based on large existing health care databases have a well-established role in clinical research. Nevertheless, there are controversies regarding the validity of observational studies based on such databases. Among limitations is the fact that the data collection methods are predetermined and not controlled by the researcher. Misclassification constitutes a frequent limitation of registry-based research. In addition, as with any type of nonrandomized epidemiological research, the absence of confounding cannot be assumed in studies of associations between a given exposure and a given outcome using large databases. The value of these population-based databases for interpreting observed associations as causal will therefore also depend on how effectively confounding can be controlled.

Confounding is the situation in which the difference in the risk of the outcome (or lack thereof) between exposed and unexposed can be explained entirely or partly by imbalance of other causes of the outcome in the contrasted groups.1 Ideally, to directly observe a causal (ie, confounding-free) exposure–outcome relation, we would like to examine the occurrence of a given outcome in the same group of people over the same period of time under two contrasted exposure conditions. In reality, this is impossible, as for each person only the outcome under one exposure condition is observed; the outcome under the counterfactual exposure condition is not observed. Thus, one will need to find ways to control confounding or at least assess its potential impact.

When an exposure is allocated randomly, as in randomized controlled trials, any association between a given prognostic variable and the exposure will be random. Accordingly, if the trial is adequately powered and well designed, randomization will, on average, control both known (measured and unmeasured) and unknown confounders, although there is no guarantee of ideal balance in any single study.2 Randomized trials often have narrow inclusion criteria3 and therefore tend to enroll a selection of patients with only one diagnosis, with no concomitant therapies, neither very young nor old, and with a reasonable prognosis.4,5 In contrast, large population-based health care databases reflect the entire daily clinical practice for large and representative populations. Yet, the ability of a given strategy to control confounding in studies based on these databases depends on the completeness and validity of the recorded information on confounding factors.

The aims of this article are to describe the types of potential confounding factors about which data are not typically recorded in large medical databases and to present potential strategies for dealing with such confounding. To this end, we also briefly discuss self-controlled designs, the use of external adjustment, pseudorandomization, high-dimensional propensity scores, active comparators, and the use of positive or negative controls, all of which are attempts to overcome the potential lack of information on confounding factors. We thereby intend to give a quick overview that can serve as a primer for clinicians.

Many of our examples stem from the use of Danish population-based health care databases. However, we think that similar concerns and similar solutions exist for databases in other countries, as database research is becoming increasingly important for clinical research worldwide. Denmark has a long tradition of establishing and maintaining population-based medical registries and databases. The possibility of linking these data sources using a 10-digit personal identification number (the civil personal register number), which follows each Dane from cradle to grave, creates a valuable tool for observational research, with the added benefit that the underlying universal access to health care makes selection bias negligible in many situations.6

Which confounders are recorded?

Whether confounding is a potential threat to validity in a specific observational study depends on the study question and on data availability, as most strategies for coping with potential confounding require that we are aware of the confounding variables and are able to measure them (Table 1).

Table 1 Different types of confounders and potential approaches to controlling for them in observational studies based on health care databases

To obtain an overview of the potential confounders, a first and time-honored strategy is to start with a list of variables that are known causes of the outcome, based on our knowledge of the existing literature.7 Next, we can remove the variables that are not associated with the exposure. We should also remove variables that are causes of the outcome but lie on the exposure–outcome causal pathway, because it would be wrong to treat them as confounders.1 Then, we can categorize the remaining potential confounders in our list into variables that are measured in the data and variables that are not measured but are measurable in a substudy or in another setting. Finally, we may have variables that are not measured and on which we have no information, as well as confounders that are unknown at present. In most straightforward studies of the effect of an exposure on an outcome, this strategy of selecting confounders will work well. When the problem is more intricate, however, as with complex exposures (eg, exposures that are strongly related to background social conditions) or complex study situations (eg, repeated measurements and time-dependent exposures and confounders), directed acyclic graphs (DAGs) can be used to elicit potential causal pathways. An explanation of how to use DAGs is outside the scope of this article, but excellent introductions are widely available.8,9
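As a schematic illustration of this selection strategy (not part of the original analyses), the following Python sketch filters a hypothetical list of candidate variables into confounders that are measured, and can be adjusted for directly, and confounders that will need proxies, external adjustment, or design solutions. All variable names and annotations are invented for illustration.

```python
# Minimal sketch of the confounder-selection strategy described above.
# The variable names and flags are hypothetical; in practice each flag comes
# from subject-matter knowledge and the existing literature.

candidates = [
    # (name, causes_outcome, associated_with_exposure, on_causal_pathway, measured_in_data)
    ("age",              True,  True,  False, True),
    ("sex",              True,  True,  False, True),
    ("smoking",          True,  True,  False, False),  # not recorded in the database
    ("disease_severity", True,  True,  False, False),
    ("biomarker_x",      True,  False, False, True),   # cause of outcome, unrelated to exposure
    ("mediator_m",       True,  True,  True,  True),   # on the causal pathway -> not a confounder
]

confounders = [
    name for name, causes_outcome, assoc_exposure, on_pathway, _ in candidates
    if causes_outcome and assoc_exposure and not on_pathway
]
measured = [name for name, c, a, p, m in candidates if c and a and not p and m]
unmeasured = [name for name in confounders if name not in measured]

print("Potential confounders:", confounders)
print("Measured (adjust directly):", measured)
print("Unmeasured (proxies, external adjustment, design):", unmeasured)
```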

For instance, if we want to examine the evolution of 30-day survival following a first myocardial infarction over several decades, the list of potential confounders could be short. It may be sufficient to take only sex and age into account, and these variables are easily accessible in all Danish registries.10 If, instead, we want to examine whether the use of statins protects against cerebral glioma, it may be necessary to adjust for several potential confounding factors, such as diabetes, a history of stroke, exposure to endogenous sex hormones, exposure to ionizing radiation, use of various drugs, lifestyle factors, and socioeconomic status.11 In the latter case, not all potential confounders are available in most health care databases.

In administrative registries, such as the Danish National Patient Registry (DNPR), we have access to hospital diagnoses and procedures,12 while data on lifestyle factors are sparse.13 For example, information on smoking is usually not well recorded across patient groups in typical administrative health registries.13 In some cases, a diagnosis of chronic obstructive pulmonary disease (COPD) can be considered a proxy measure of smoking. However, although a diagnosis of COPD may be a good marker of previous smoking, it may be an imprecise marker of current smoking status, as most patients are encouraged to quit smoking when they receive a COPD diagnosis.14 Instead, we can consider retrieving information on smoking status from medical charts for a subpopulation, or we can use information from, eg, health surveys on how smoking is likely to be distributed across the exposure groups and how strongly it is associated with the outcome. We can then take this information into account using external adjustment.15

If the external information or information from a subgroup is not easily obtainable, sensitivity analysis might be helpful to assess the potential impact that an unmeasured confounder could have on the study. Svensson et al16 used Danish registries to examine the association between vagotomy and subsequent risk of Parkinson’s disease. After 20 years of follow-up, the adjusted hazard ratio of Parkinson’s disease in vagotomized persons compared with the general population was 0.53 (95% CI, 0.28–0.99). Because smoking is associated with an increased risk of peptic ulcer (the underlying indication for vagotomy)17 and, at the same time, may protect against Parkinson’s disease,18 lack of data on smoking was of concern, as smoking, rather than vagotomy, could be behind the observed protective association. First, the authors considered using COPD as a proxy measure for smoking. Controlling for a diagnosis of COPD would, however, only to some degree control the effect of smoking, since the prevalence of COPD is much lower than the prevalence of smoking (2.4% of patients who underwent truncal vagotomy had a COPD diagnosis compared to 1.2% of the comparison cohort), suggesting that most of the confounding would remain uncontrolled. To address the potential residual confounding by smoking, the authors performed a sensitivity analysis for unmeasured confounding19 in which they assumed that the relative risk of Parkinson’s disease in smokers was 0.53 based on data from a US study20 and that the proportion of smokers in the unexposed cohort (ie, in the general population of Denmark) was 60% in the 1970s.21 If 85% of the vagotomized patients were assumed to be smokers, the corrected adjusted hazard ratio was 0.66. This sensitivity analysis demonstrated that although smoking was likely to confound the association between vagotomy and Parkinson’s disease, differences in smoking prevalence could only explain a minor part of the protective effect observed after vagotomy.
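To make the arithmetic behind such an external adjustment concrete, the following Python sketch applies the standard bias-factor formula for a single unmeasured binary confounder to the figures quoted above. The published analysis may have used a more elaborate correction, so the inputs and output should be read as illustrative only.

```python
# Sketch of external adjustment for one unmeasured binary confounder (here smoking),
# using the classical bias-factor formula. Inputs: the externally obtained
# confounder-outcome relative risk and the assumed confounder prevalences in the
# exposed and unexposed groups. Treat the output as illustrative; the published
# correction may differ in detail.

def bias_factor(rr_confounder_outcome: float, prev_exposed: float, prev_unexposed: float) -> float:
    """Factor by which confounding by the unmeasured variable distorts the observed estimate."""
    num = rr_confounder_outcome * prev_exposed + (1 - prev_exposed)
    den = rr_confounder_outcome * prev_unexposed + (1 - prev_unexposed)
    return num / den

observed_hr = 0.53   # adjusted HR of Parkinson's disease after vagotomy (Svensson et al)
rr_smoking = 0.53    # assumed RR of Parkinson's disease in smokers (external US data)
p_smoking_vagotomy = 0.85   # assumed smoking prevalence among vagotomized patients
p_smoking_reference = 0.60  # assumed smoking prevalence in the general population, 1970s

bf = bias_factor(rr_smoking, p_smoking_vagotomy, p_smoking_reference)
corrected_hr = observed_hr / bf
print(f"Bias factor: {bf:.2f}")
print(f"Confounding-corrected HR: {corrected_hr:.2f}")  # in the same ballpark as the reported 0.66
```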

Disease severity, often an important confounder, is not consistently recorded in medical databases. If we compare the effect of a certain drug in patients with a specific disease of interest with the outcome in patients with the same disease who do not use the drug or who use a different one, then severity of the underlying disease may drive the treatment choice. The untreated group may include both patients with very mild disease who do not need any treatment and severely ill patients with contraindications to treatment.22 If severity measures are not available, proxy measures, such as use of health care services and use of certain medications, should be considered, and these proxies may also be combined. Indeed, the longitudinal data in large population-based medical databases can be understood as a set of proxies that indirectly describe the health status of a given patient.23 High-dimensional propensity score adjustment is a technique developed to empirically identify and select a large number of covariates from routine health care data which, when combined into a propensity score, allow high-dimensional proxy adjustment intended to reduce residual confounding.23 By using all available information and combining variables into a propensity score, the hope is to capture enough information to remove the effect of confounding, including unknown confounding. However, as with other methods of statistical adjustment,24 there is no guarantee that propensity score methods will remove unknown confounding. For example, a British study compared the results of using propensity score methods to study the effect of spironolactone treatment on mortality in patients with heart failure in an observational setting with those of a randomized controlled trial.25 This study demonstrated that the propensity score analyses were unable to capture the effect of severity of the underlying illness (confounding by indication) and thus provided biased results. Although high-dimensional propensity score adjustment may not always make a big difference in the ability to control confounding when major confounders are measured,26 the method seems, in some cases, more effective than simple confounder adjustment using variables selected on clinical reasoning, even in large databases such as those in the Nordic countries.27 Compared with conventional propensity score methods, high-dimensional propensity scores may also better control confounding by indication in pharmacoepidemiological studies.28 Although propensity score methods are often recommended over standard regression methods when the outcome is rare,29 one needs to be careful about using high-dimensional propensity scores without including investigator-selected variables when outcomes are rare; otherwise, the results may be biased.30
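As an illustration of the underlying idea, the following Python sketch fits a conventional propensity score to many simulated proxy covariates and applies inverse-probability-of-treatment weighting. It is not the full high-dimensional propensity score algorithm, which additionally ranks and prioritizes empirically identified codes; all data, column names, and effect sizes are simulated, and scikit-learn is assumed to be available.

```python
# Sketch of propensity-score adjustment built from many proxy covariates (diagnosis,
# procedure, and drug code indicators). A plain propensity score with
# inverse-probability-of-treatment weighting, not the full hd-PS algorithm.
# The data are simulated stand-ins for database proxies.

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n, n_proxies = 20_000, 50
proxies = rng.binomial(1, 0.2, size=(n, n_proxies))           # binary code indicators
severity = proxies[:, :10].sum(axis=1)                        # latent severity reflected in the proxies
treated = rng.binomial(1, 1 / (1 + np.exp(-(severity - 2))))  # sicker patients are treated more often
outcome = rng.binomial(1, 1 / (1 + np.exp(-(-3 + 0.4 * severity - 0.5 * treated))))  # true protective effect

df = pd.DataFrame(proxies, columns=[f"code_{i}" for i in range(n_proxies)])
df["treated"], df["outcome"] = treated, outcome
proxy_cols = [c for c in df.columns if c.startswith("code_")]

# 1. Model the probability of treatment given the proxies (the propensity score).
ps = LogisticRegression(max_iter=1000).fit(df[proxy_cols], df["treated"]).predict_proba(df[proxy_cols])[:, 1]

# 2. Inverse-probability-of-treatment weights.
w = np.where(df["treated"] == 1, 1 / ps, 1 / (1 - ps))

# 3. Crude versus weighted risk ratio, treated versus untreated.
crude = df.loc[df.treated == 1, "outcome"].mean() / df.loc[df.treated == 0, "outcome"].mean()
weighted = (np.average(df.loc[df.treated == 1, "outcome"], weights=w[df.treated == 1]) /
            np.average(df.loc[df.treated == 0, "outcome"], weights=w[df.treated == 0]))
print(f"Crude risk ratio (confounded by severity): {crude:.2f}")
print(f"IPTW-weighted risk ratio:                  {weighted:.2f}")  # should move toward the protective effect
```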

In some cases, the list of potential confounders makes it clear that some important confounding factors in the study cannot be measured, not even by combining a large number of proxies. In such cases, other types of study design or analysis are needed to address confounding.

Self-controlled designs

Since the early 1990s, several designs, such as the case–crossover design31 and the self-controlled case series,32 have been introduced in which the comparison is not between exposed and unexposed persons but between time spent under exposed and unexposed conditions within the same subjects.33 These designs, which are largely similar apart from subtle differences, include only cases of the outcome of interest and compare their exposure status in a period relevant for potentially causing the outcome with their exposure in a different period.31 Since the same subjects contribute time as both exposed and unexposed, confounding by permanent personal traits is absent. A major limitation of self-controlled designs is, however, that they are applicable only in the narrow set of situations in which the effect of exposure is transient and the onset of the outcome is acute.31,34 If exposure prevalence and potential confounding factors vary over time, this time variability has to be taken into account in the study design to avoid spurious associations. This can be done by including an additional control group consisting of persons without the outcome and comparing their exposure in two different time periods, ie, by using a case–time–control design.35

As an example, Lund et al36 conducted a cohort study examining the incidence of and risk factors for venous thromboembolism (VTE) among lymphoma patients. Previous studies had suggested that chemotherapy and the use of a central venous catheter increase VTE risk in high-grade lymphoma patients, yet these studies failed to adequately account for the time-dependent nature of cancer treatments. To examine whether lymphoma treatment had a transient impact on VTE risk, Lund et al included a self-controlled design in which the period of relevance for being able to cause the outcome (the primary hazard period) was defined as the 30 days prior to the VTE diagnosis date. The comparison period was the 30-day period from 90 to 61 days prior to the start of the hazard period. This approach demonstrated that the risk of VTE transiently increased almost sevenfold after the placement of a central venous catheter and almost fourfold after radiation therapy.36
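The case-crossover logic of such an analysis can be sketched in a few lines of Python: exposure in the 30-day hazard window is compared with exposure in the earlier control window within the same cases, and, with one control window per case, the matched-pair odds ratio is the ratio of discordant counts. The data structures below are hypothetical, and the sketch is a simplified stand-in for the models actually used by Lund et al.

```python
# Sketch of a case-crossover comparison: each VTE case contributes a 30-day hazard
# window just before diagnosis and an earlier 30-day control window (90-61 days
# before the hazard window starts); exposure (eg, central venous catheter placement)
# is compared within person. Hypothetical data.

from datetime import date, timedelta

def in_window(event_date, window_end, window_days=30):
    """True if event_date falls in the window of the given length ending at window_end."""
    return window_end - timedelta(days=window_days) < event_date <= window_end

def case_crossover_or(cases):
    """cases: list of dicts with 'vte_date' and 'exposure_dates' (list of dates)."""
    exposed_hazard_only = exposed_control_only = 0
    for case in cases:
        hazard_end = case["vte_date"]
        hazard_start = hazard_end - timedelta(days=30)
        control_end = hazard_start - timedelta(days=61)  # control window ends 61 days before hazard start
        in_hazard = any(in_window(d, hazard_end) for d in case["exposure_dates"])
        in_control = any(in_window(d, control_end) for d in case["exposure_dates"])
        if in_hazard and not in_control:
            exposed_hazard_only += 1
        elif in_control and not in_hazard:
            exposed_control_only += 1
    # Matched-pair odds ratio: discordant "hazard-only" over discordant "control-only" cases.
    return exposed_hazard_only / exposed_control_only

cases = [
    {"vte_date": date(2014, 6, 1), "exposure_dates": [date(2014, 5, 20)]},  # exposed in hazard window only
    {"vte_date": date(2014, 8, 15), "exposure_dates": []},                  # never exposed (concordant, ignored)
    {"vte_date": date(2014, 9, 10), "exposure_dates": [date(2014, 6, 1)]},  # exposed in control window only
]
print(f"Case-crossover odds ratio: {case_crossover_or(cases):.1f}")
```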

Active comparator

If a self-controlled design is not possible, another way of addressing unmeasurable confounding by indication or by severity in pharmacoepidemiological studies could be to include an active comparator.22 As mentioned earlier, an untreated group of patients with a certain disease may have different characteristics from a treated group and may, therefore, not represent a fair comparison group at all. If another drug is used for the disease of interest and the two drugs are clinically exchangeable, then we can expect similar disease severity across treatment groups. Accordingly, we can compare the effect in patients exposed to the drug of interest with that in patients exposed to the active comparator. Although the use of active comparators will not control confounding related to the comparison between users and nonusers of a given treatment among patients with a given disease (which may be of interest), it will assist in assessing the potential magnitude of confounding by indication. The active-comparator approach can additionally be combined with other methods, such as restriction of patient types and propensity score adjustment.37

As an example, Thomsen et al38 examined the risk of acute pancreatitis in patients treated with incretin-based drugs. Incretin-based therapies are oral antihyperglycemic drugs used for type 2 diabetes. They exert their effect by augmenting glucose-stimulated insulin secretion from the pancreas, and this stimulation of the pancreas might increase the risk of acute pancreatitis. Because the underlying diabetes and associated risk factors are also associated with acute pancreatitis, the authors suspected that previous findings of an elevated risk could, at least partly, be due to incomplete control of confounding. Therefore, they compared the risk of acute pancreatitis in diabetic patients using incretin-based therapies with the risk in users of other antihyperglycemic therapies (the active comparator). The odds ratio of acute pancreatitis for incretin-based therapy compared with other antihyperglycemic therapies, adjusted for diabetes duration and complications, was 0.97 (95% CI, 0.76–1.23), suggesting that the use of incretin-based drugs does not increase the risk of acute pancreatitis.38

Pseudorandomization

Under certain conditions, the research question allows us to use a design that mimics randomization. Below, we describe the use of instrumental variables, Mendelian randomization, and the regression discontinuity design, methods that can, in some instances, be used to control unmeasured confounding.

Instrumental variable

Instrumental variables have been used in econometrics for decades to control confounding and may also be useful for this purpose in epidemiological studies.39 An instrumental variable is a factor that is associated with the exposure of interest (often a determinant of the exposure of interest), so that if we categorize the study population by levels of the instrumental variable, these categories will have different levels of the exposure of interest. However, a major condition is that the instrumental variable must not be directly associated with the outcome, nor indirectly associated with the outcome through variables other than the exposure of interest.39,40 If these requirements are met and the risk of the study outcome varies between groups with different levels of the instrumental variable, then this variation can be explained only by the difference in levels of the exposure of interest between the groups or by chance. Since the instrumental variable is not related to the outcome except through the exposure, even unknown confounding is removed, as is the case under randomization.40 In randomized trials, random treatment assignment is the instrumental variable in the intention-to-treat analysis.
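This logic can be illustrated with a small simulation of the simplest instrumental-variable estimator, the Wald ratio, which divides the instrument–outcome contrast by the instrument–exposure contrast. The binary instrument below stands in for physician prescribing preference; all data are simulated and are not taken from any of the cited studies.

```python
# Simulation of the Wald (ratio) instrumental-variable estimator with a binary
# instrument Z (think physician prescribing preference), a binary exposure X
# (treatment actually received), and a continuous outcome Y. The exposure-outcome
# association is confounded by U, but Z affects Y only through X, so the
# instrument-based contrast recovers the treatment effect. True effect set to 1.0.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000
z = rng.integers(0, 2, n)                        # instrument, eg, physician preference
u = rng.normal(size=n)                           # unmeasured confounder (eg, severity)
x = (1.0 * z + 0.5 * u + rng.normal(size=n) > 0.5).astype(int)  # treatment depends on Z and U
y = 1.0 * x + 1.5 * u + rng.normal(size=n)       # outcome: true effect of X is 1.0, confounded by U

naive = y[x == 1].mean() - y[x == 0].mean()      # confounded exposed-unexposed contrast
wald = (y[z == 1].mean() - y[z == 0].mean()) / (x[z == 1].mean() - x[z == 0].mean())

print(f"Naive difference (biased by U):       {naive:.2f}")
print(f"Wald IV estimate (close to true 1.0): {wald:.2f}")
```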

The challenge, however, is to find an instrumental variable that has all the qualities described earlier. Table 2 presents some examples of instrumental variables used in recently published studies. In pharmacoepidemiological studies, a potential instrumental variable may be the preference in drug choice of the treating physician or hospital. Differences in the drugs that physicians prefer are ubiquitous, and physician preference therefore results in natural variation in treatment patterns. In addition, as the preference is measured from previously treated patients, in theory it should not be related to the outcome of the patient in the study. Still, differences in prescribing behavior may also reflect differences in case mix, although this seems to explain only a minor part of the variation in preference.41 Also, a physician with a preference for prescribing the drug of interest may also have a preference for prescribing other drugs that may affect the outcome.40 Even though the instrumental variable method intuitively seems promising, there has been some disillusionment, partly because of the difficulty of finding valid instrumental variables. Moreover, the variance in these studies is larger than in conventional analyses, and even large datasets may yield imprecise estimates.42 Nevertheless, instrumental variable analyses may be used to complement conventional analyses when confounding in the conventional analysis cannot be ruled out.42

Table 2 Examples of instrumental variables recently used in published studies


Abbreviations: NIV, noninvasive ventilation; NSAID, non-steroidal anti-inflammatory drug.

Mendelian randomization

Since alleles are assorted at gamete formation by a mechanism that can be seen as “random”, the distribution of genetic variants in a population is generally independent of environmental and behavioral factors later in life.43 These properties define an instrumental variable and can, in some instances, be used to provide a study design akin to a randomized design.

Several studies summarized by Buckley et al have shown an association between elevated C-reactive protein and the risk of cardiovascular events, with an estimated 60% increased risk of incident cardiovascular disease for C-reactive protein levels >3.0 mg/L compared with levels <1.0 mg/L.44 To examine whether C-reactive protein is merely a marker of the severity of cardiovascular disease or is actually involved in its pathogenesis, Zacho et al45 used four independent cohorts of Caucasians of Danish descent and examined whether C-reactive protein polymorphisms were associated with the risk of ischemic heart disease and ischemic cerebrovascular disease. Polymorphisms in the C-reactive protein gene were associated with marked increases in C-reactive protein levels and thus with a theoretically predicted increase in the risk of ischemic vascular disease. However, these polymorphisms were not in themselves associated with an increased risk of ischemic vascular disease. This finding suggested that the increased risk of ischemic vascular disease associated with higher plasma C-reactive protein levels observed in epidemiological studies probably does not represent a causal relation.45
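The Wald-ratio logic underlying such a Mendelian randomization analysis can be sketched with simulated data: the genotype raises the biomarker, but if the biomarker has no causal effect on the disease, the genotype–disease association, and hence the ratio estimate, stays near the null. The sketch below is only loosely inspired by the C-reactive protein example and is purely illustrative.

```python
# Sketch of a Mendelian randomization (Wald ratio) analysis: a genetic variant
# raises the biomarker, but the disease is driven by a confounder rather than by
# the biomarker itself, so the genotype-disease association is null. Simulated data.

import numpy as np

rng = np.random.default_rng(1)
n = 200_000
g = rng.binomial(2, 0.3, n)                       # genotype: number of biomarker-raising alleles
u = rng.normal(size=n)                            # confounder (eg, underlying atherosclerosis)
crp = 0.4 * g + 0.8 * u + rng.normal(size=n)      # biomarker raised by genotype and confounder
p_disease = 1 / (1 + np.exp(-(-3.0 + 0.9 * u)))   # disease depends on the confounder, NOT on the biomarker
disease = rng.binomial(1, p_disease)

beta_g_crp = np.polyfit(g, crp, 1)[0]             # genotype -> biomarker association
beta_g_dis = np.polyfit(g, disease, 1)[0]         # genotype -> disease association
print(f"Genotype-biomarker slope: {beta_g_crp:.3f}")   # clearly positive
print(f"Genotype-disease slope:   {beta_g_dis:.4f}")   # near zero
print(f"Wald ratio (causal effect of biomarker on disease): {beta_g_dis / beta_g_crp:.4f}")  # near zero
```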

Regression discontinuity design

This design may be used in any care setting where rules exist, or new interventions are introduced, that apply to people above or below a particular threshold of a continuously measured biomarker or another continuous health-related characteristic.46 The design is based on the assumption that a patient is assigned a specific treatment because the patient is above (or below) the defined threshold. However, since the measurement of biomarkers or other health-related characteristics is subject to random variation due to measurement error, sampling variability, and chance,46 patients just below the threshold and patients just above the threshold will be similar with respect to both observed and unobserved pretreatment characteristics. If the probability of the outcome is plotted against the level of the assignment variable, any effect of the intervention will appear as a discontinuity of the outcome at the threshold level.47
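A minimal sketch of such an analysis is shown below, using simulated data loosely modeled on the CD4-count example discussed next: within a narrow bandwidth around the threshold, the outcome is regressed on the centered assignment variable, an eligibility indicator, and their interaction, and the coefficient on the indicator estimates the local treatment effect at the threshold. The threshold, bandwidth, and effect size are all simulation choices, not results from the cited study.

```python
# Sketch of a sharp regression discontinuity analysis: local linear regression on
# either side of the threshold; the jump at the threshold estimates the local
# treatment effect. Simulated data only.

import numpy as np

rng = np.random.default_rng(2)
n = 50_000
cd4 = rng.uniform(50, 350, n)                      # assignment variable (first CD4 count)
eligible = (cd4 < 200).astype(float)               # treatment rule: ART if CD4 < 200
risk = 0.25 - 0.0004 * cd4 - 0.08 * eligible       # mortality risk falls with CD4, drops further if treated
died = rng.binomial(1, np.clip(risk, 0, 1))

bandwidth = 50
near = np.abs(cd4 - 200) < bandwidth
x = cd4[near] - 200                                # centered assignment variable
d = eligible[near]
y = died[near]

# Regression with different slopes on each side of the threshold; beta[2] is the jump.
X = np.column_stack([np.ones(x.size), x, d, d * x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"Estimated discontinuity (local effect of eligibility on mortality): {beta[2]:.3f}")
# Should be close to the simulated jump of -0.08.
```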

In HIV patients, the decision to start life-prolonging antiretroviral therapy (ART) depends on the patient’s CD4 cell count. In rural South Africa, in 2007–2011, patients were eligible for ART if their CD4 count was <200 cells/μL.46 By plotting the mortality rate against the CD4 count (Figure 1), Bor et al found a discontinuity at 200 cells/μL, such that patients with CD4 counts just above 200 cells/μL had higher mortality than patients with counts just below 200 cells/μL. This strongly suggested a treatment benefit, which cannot be due to confounding, as patients just below and just above the treatment threshold are expected to have similar baseline characteristics.46

Figure 1 First CD4 count and mortality hazard rate in an HIV-positive population.


Notes: Predicted hazards are displayed as solid lines. Dashed line shows extrapolated prediction if all patients were treatment eligible at first CD4 count. Dots are hazards predicted for CD4 count bins of width 10 cells. Copyright © 2014 by Lippincott Williams & Wilkins. Figure originally published by Bor et al. Regression discontinuity designs in epidemiology: causal inference without randomized trials. Epidemiology 2014;25:729–737.46

The regression discontinuity design is relatively simple, but it is limited to situations with a threshold rule for intervention and measures only local effects around that threshold.48

Use of negative controls

If concerns regarding uncontrolled confounding in a specific study persist and none of the pseudorandomized designs are applicable, one could consider including an additional exposure group in which the exposure is not expected to be related to the outcome, or an additional outcome that is not expected to be an effect of the exposure of interest.49 Using negative exposure or outcome controls does not control confounding, but it is a way to at least gauge the potential magnitude of uncontrolled confounding. The general problem is that we want to examine how an exposure A affects an outcome Y, but we suspect that the analyses are subject to residual confounding caused by a set of uncontrolled confounders U. A negative control exposure B should be an exposure for which the distribution of U (the set of factors causing residual confounding) in those exposed to B is comparable to, or at least overlaps with, the distribution in those exposed to A (the exposure of interest; Figure 2). Alternatively, we can use a negative control outcome that we do not expect to be related to the exposure A but that is affected by a set of confounders comparable to those affecting the association between the exposure of interest A and the outcome Y.49

Figure 2 Causal diagram showing an ideal negative control exposure B for use in evaluating studies of the causal relationship between exposure A and outcome Y.


Notes: B should ideally have the same incoming arrows as A. U is the set of uncontrolled confounders. L is assumed measured and controlled for. Modified with permission from Lipsitch et al. Negative controls: a tool for detecting confounding and bias in observational studies. Epidemiology. 2010;21(3):383–388. https://www.ncbi.nlm.nih.gov/pubmed/20335814.49

As an example of a negative control situation, Jackson et al compared mortality in influenza-vaccinated persons with that in unvaccinated persons and used a negative control period to assess potential confounding. They found relative risks of death of 0.39 (95% CI, 0.33–0.47) before the influenza season, 0.56 (95% CI, 0.52–0.61) during the influenza season, and 0.74 (95% CI, 0.67–0.80) after the influenza season.50 Since we do not expect influenza vaccination to have any effect before the influenza season, the lower risk of death before the season strongly indicated that persons who received an influenza vaccine were healthier than the background population. Indeed, this difference could explain a substantial part of the apparent effect of the vaccine during the influenza season.
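The following Python sketch illustrates this negative-control reasoning with hypothetical death and person-time counts chosen so that the crude rate ratios roughly reproduce the published estimates: a pre-season estimate far from 1.0 signals that the in-season estimate is likely confounded. The counts are invented for illustration and are not taken from the original study.

```python
# Sketch of the negative-control check in the influenza example: estimate the
# vaccinated-vs-unvaccinated relative risk separately in a period when the vaccine
# cannot plausibly act (before the season) and during the season. Counts are
# hypothetical, chosen only to roughly mimic the published rate ratios.

def relative_risk(deaths_vac, persontime_vac, deaths_unvac, persontime_unvac):
    """Crude rate ratio of death, vaccinated versus unvaccinated."""
    return (deaths_vac / persontime_vac) / (deaths_unvac / persontime_unvac)

periods = {
    # period: (deaths_vac, person-years_vac, deaths_unvac, person-years_unvac)
    "before influenza season": (80, 20_000, 205, 20_000),
    "during influenza season": (140, 20_000, 250, 20_000),
    "after influenza season":  (150, 20_000, 205, 20_000),
}

for period, counts in periods.items():
    print(f"{period}: RR = {relative_risk(*counts):.2f}")

rr_negative_control = relative_risk(*periods["before influenza season"])
if abs(rr_negative_control - 1.0) > 0.1:
    print("Pre-season RR deviates from 1.0: the in-season estimate is likely confounded.")
```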

As a second example, in pharmacoepidemiological studies, former users of a medication may constitute a negative control exposure group. Johannesdottir et al51 examined the association between the use of glucocorticoids and the risk of VTE and found that current use of glucocorticoids was associated with a more than twofold increased incidence of VTE compared with nonuse. The study also included former use of glucocorticoids, and the fact that this group did not have an increased incidence of VTE strengthened the conclusion that the observed association was caused by a biological effect rather than by uncontrolled confounding.

The use of “negative controls” is an example of the general idea of “triangulation” of research findings, which was recently reviewed by Lawlor et al.52

Conclusion

Observational studies do not have the benefit of random treatment assignment, and uncontrolled confounding therefore constitutes a potentially serious validity concern. Such concern should not, however, discourage the use of observational studies. Measured confounders can be addressed in several ways through the design or the analysis of the data, while unmeasured confounders can be addressed by proxy measures, external adjustment, or design measures. The problem of unmeasured or unknown confounders can, in some instances, also be addressed by pseudorandomized designs, such as instrumental variable, Mendelian randomization, and regression discontinuity designs.

Acknowledgments

This study was funded by the Program for Clinical Research Infrastructure (PROCRIN) established by the Lundbeck Foundation and the Novo Nordisk Foundation and administered by the Danish Regions.

Disclosure

The authors report no conflicts of interest in this work.

References

1. Rothman KJ. Modern Epidemiology. 1st ed. Boston, MA: Little, Brown, and Company; 1986.
2. Senn S. Seven myths of randomisation in clinical trials. Stat Med. 2013;32(9):1439–1450.
3. Britton A, McKee M, Black N, McPherson K, Sanderson C, Bain C. Threats to applicability of randomised trials: exclusions and selective participation. J Health Serv Res Policy. 1999;4(2):112–121.
4. Booth CM, Tannock IF. Randomised controlled trials and population-based observational research: partners in the evolution of medical evidence. Br J Cancer. 2014;110(3):551–555.
5. Reyes C, Pottegard A, Schwarz P, et al. Real-life and RCT participants: alendronate users versus FITs’ trial eligibility criterion. Calcif Tissue Int. 2016;99(3):243–249.
6. Frank L. Epidemiology. When an entire country is a cohort. Science. 2000;287(5462):2398–2399.
7. Hulley SB, Cummings SR, Browner WS, Grady D, Hearst N, Newman TB. Designing Clinical Research. 2nd ed. Philadelphia: Lippincott Williams & Wilkins; 2001.
8. VanderWeele TJ, Hernan MA, Robins JM. Causal directed acyclic graphs and the direction of unmeasured confounding bias. Epidemiology. 2008;19(5):720–728.
9. Fleischer NL, Diez Roux AV. Using directed acyclic graphs to guide analyses of neighbourhood health effects: an introduction. J Epidemiol Community Health. 2008;62(9):842–846.
10. Schmidt M, Jacobsen JB, Lash TL, Botker HE, Sorensen HT. 25 year trends in first time hospitalisation for acute myocardial infarction, subsequent short and long term mortality, and the prognostic impact of sex and comorbidity: a Danish nationwide cohort study. BMJ. 2012;344:e356.
11. Gaist D, Andersen L, Hallas J, Sorensen HT, Schroder HD, Friis S. Use of statins and risk of glioma: a nationwide case-control study in Denmark. Br J Cancer. 2013;108(3):715–720.
12. Schmidt M, Schmidt SA, Sandegaard JL, Ehrenstein V, Pedersen L, Sorensen HT. The Danish National Patient Registry: a review of content, data quality, and research potential. Clin Epidemiol. 2015;7:449–490.
13. Sogaard M, Heide-Jorgensen U, Norgaard M, Johnsen SP, Thomsen RW. Evidence for the low recording of weight status and lifestyle risk factors in the Danish National Registry of Patients, 1999-2012. BMC Public Health. 2015;15:1320.
14. Tottenborg SS, Thomsen RW, Nielsen H, Johnsen SP, Frausing HE, Lange P. Improving quality of care among COPD outpatients in Denmark 2008-2011. Clin Respir J. 2013;7(4):319–327.
15. Sturmer T, Glynn RJ, Rothman KJ, Avorn J, Schneeweiss S. Adjustments for unmeasured confounders in pharmacoepidemiologic database studies using external information. Med Care. 2007;45(10 Suppl 2):S158–S165.
16. Svensson E, Horvath-Puho E, Thomsen RW, et al. Vagotomy and subsequent risk of Parkinson’s disease. Ann Neurol. 2015;78(4):522–529.
17. Brenner H, Rothenbacher D, Bode G, Adler G. Relation of smoking and alcohol and coffee consumption to active Helicobacter pylori infection: cross sectional study. BMJ. 1997;315(7121):1489–1492.
18. Hernan MA, Takkouche B, Caamano-Isorna F, Gestal-Otero JJ. A meta-analysis of coffee drinking, cigarette smoking, and the risk of Parkinson’s disease. Ann Neurol. 2002;52(3):276–284.
19. Lash TL, Fox MP, Fink AK. Sensitivity Analysis for Unmeasured Confounding. Applying Quantitative Bias Analysis to Epidemiological Data. Oxford, UK: Springer Verlag; 2009.
20. Powers KM, Kay DM, Factor SA, et al. Combined effects of smoking, coffee, and NSAIDs on Parkinson’s disease risk. Mov Disord. 2008;23(1):88–95.
21. Kjøller MJK, Kamper-Jørgensen F. The Public Health Report, Denmark 2007. Copenhagen: 2007. Available from: http://www.si-folkesundhed.dk/Udgivelser/B%C3%B8ger%20og%20rapporter/2008/2897%20Folkesundhedsrapporten%202007.aspx?lang=en. Accessed February 15, 2017.
22. Yoshida K, Solomon DH, Kim SC. Active-comparator design and new-user design in observational studies. Nat Rev Rheumatol. 2015;11(7):437–441.
23. Schneeweiss S, Rassen JA, Glynn RJ, Avorn J, Mogun H, Brookhart MA. High-dimensional propensity score adjustment in studies of treatment effects using health care claims data. Epidemiology. 2009;20(4):512–522.
24. Bosco JL, Silliman RA, Thwin SS, et al. A most stubborn bias: no adjustment method fully resolves confounding by indication in observational studies. J Clin Epidemiol. 2010;63(1):64–74.
25. Freemantle N, Marston L, Walters K, Wood J, Reynolds MR, Petersen I. Making inferences on treatment effects from real world data: propensity scores, confounding by indication, and other perils for the unwary in observational research. BMJ. 2013;347:f6409.
26. Toh S, Garcia Rodriguez LA, Hernan MA. Confounding adjustment via a semi-automated high-dimensional propensity score algorithm: an application to electronic medical records. Pharmacoepidemiol Drug Saf. 2011;20(8):849–857.
27. Hallas J, Pottegard A. Performance of the high-dimensional propensity score in a Nordic healthcare model. Basic Clin Pharmacol Toxicol. 2017;120(3):312–317.
28. Garbe E, Kloss S, Suling M, Pigeot I, Schneeweiss S. High-dimensional versus conventional propensity scores in a comparative effectiveness study of coxibs and reduced upper gastrointestinal complications. Eur J Clin Pharmacol. 2013;69(3):549–557.
29. Williamson E, Morley R, Lucas A, Carpenter J. Propensity scores: from naive enthusiasm to intuitive understanding. Stat Methods Med Res. 2012;21(3):273–293.
30. Patorno E, Glynn RJ, Hernandez-Diaz S, Liu J, Schneeweiss S. Studies with many covariates and few outcomes: selecting covariates and implementing propensity-score-based confounding adjustments. Epidemiology. 2014;25(2):268–278.
31. Maclure M. The case-crossover design: a method for studying transient effects on the risk of acute events. Am J Epidemiol. 1991;133(2):144–153.
32. Farrington CP. Relative incidence estimation from case series for vaccine safety evaluation. Biometrics. 1995;51(1):228–235.
33. Petersen I, Douglas I, Whitaker H. Self controlled case series methods: an alternative to standard epidemiological study designs. BMJ. 2016;354:i4515.
34. Whitaker HJ, Farrington CP, Spiessens B, Musonda P. Tutorial in biostatistics: the self-controlled case series method. Stat Med. 2006;25(10):1768–1797.
35. Suissa S. The case-time-control design. Epidemiology. 1995;6(3):248–253.
36. Lund JL, Ostgard LS, Prandoni P, Sorensen HT, de Nully BP. Incidence, determinants and the transient impact of cancer treatments on venous thromboembolism risk among lymphoma patients in Denmark. Thromb Res. 2015;136(5):917–923.
37. Schneeweiss S, Patrick AR, Sturmer T, et al. Increasing levels of restriction in pharmacoepidemiologic database studies of elderly and comparison with randomized trial results. Med Care. 2007;45(10 Suppl 2):S131–S142.
38. Thomsen RW, Pedersen L, Moller N, Kahlert J, Beck-Nielsen H, Sorensen HT. Incretin-based therapy and risk of acute pancreatitis: a nationwide population-based case-control study. Diabetes Care. 2015;38(6):1089–1098.
39. Greenland S. An introduction to instrumental variables for epidemiologists. Int J Epidemiol. 2000;29(4):722–729.
40. Brookhart MA, Rassen JA, Schneeweiss S. Instrumental variable methods in comparative safety and effectiveness research. Pharmacoepidemiol Drug Saf. 2010;19(6):537–554.
41. Boef AG, le Cessie S, Dekkers OM, et al. Physician’s prescribing preference as an instrumental variable: exploring assumptions using survey data. Epidemiology. 2016;27(2):276–283.
42. Boef AG, van PJ, Arbous MS, et al. Physician’s preference-based instrumental variable analysis: is it valid and useful in a moderate-sized study? Epidemiology. 2014;25(6):923–927.
43. Smith GD, Ebrahim S. Mendelian randomization: prospects, potentials, and limitations. Int J Epidemiol. 2004;33(1):30–42.
44. Buckley DI, Fu R, Freeman M, Rogers K, Helfand M. C-reactive protein as a risk factor for coronary heart disease: a systematic review and meta-analyses for the U.S. Preventive Services Task Force. Ann Intern Med. 2009;151(7):483–495.
45. Zacho J, Tybjaerg-Hansen A, Jensen JS, Grande P, Sillesen H, Nordestgaard BG. Genetically elevated C-reactive protein and ischemic vascular disease. N Engl J Med. 2008;359(18):1897–1908.
46. Bor J, Moscoe E, Mutevedzi P, Newell ML, Barnighausen T. Regression discontinuity designs in epidemiology: causal inference without randomized trials. Epidemiology. 2014;25(5):729–737.
47. O’Keeffe AG, Geneletti S, Baio G, Sharples LD, Nazareth I, Petersen I. Regression discontinuity designs: an approach to the evaluation of treatment efficacy in primary care using observational data. BMJ. 2014;349:g5293.
48. Vandenbroucke JP, le Cessie S. Commentary: regression discontinuity design: let’s give it a try to evaluate medical and public health interventions. Epidemiology. 2014;25(5):738–741.
49. Lipsitch M, Tchetgen TE, Cohen T. Negative controls: a tool for detecting confounding and bias in observational studies. Epidemiology. 2010;21(3):383–388.
50. Jackson LA, Jackson ML, Nelson JC, Neuzil KM, Weiss NS. Evidence of bias in estimates of influenza vaccine effectiveness in seniors. Int J Epidemiol. 2006;35(2):337–344.
51. Johannesdottir SA, Horvath-Puho E, Dekkers OM, et al. Use of glucocorticoids and risk of venous thromboembolism: a nationwide population-based case-control study. JAMA Intern Med. 2013;173(9):743–752.
52. Lawlor DA, Tilling K, Davey Smith G. Triangulation in aetiological epidemiology. Int J Epidemiol. Epub 2017 Jan 20.
53. Slaughter JL, Reagan PB, Newman TB, Klebanoff MA. Comparative effectiveness of nonsteroidal anti-inflammatory drug treatment vs no treatment for patent ductus arteriosus in preterm infants. JAMA Pediatr. Epub 2017 Jan 3.
54. Isong IA, Richmond T, Kawachi I, Avendano M. Childcare attendance and obesity risk. Pediatrics. 2016;138(5):e20161539.
55. Valley TS, Walkey AJ, Lindenauer PK, Wiener RS, Cooke CR. Association between noninvasive ventilation and mortality among older patients with pneumonia. Crit Care Med. Epub 2016 Oct 5.
56. Carroll R, Metcalfe C, Steeg S, et al. Psychosocial assessment of self-harm patients and risk of repeat presentation: an instrumental variable analysis using time of hospital presentation. PLoS One. 2016;11(2):e0149713.
57. Boef AG, Souverein PC, Vandenbroucke JP, et al. Instrumental variable analysis as a complementary analysis in studies of adverse effects: venous thromboembolism and second-generation versus third-generation oral contraceptives. Pharmacoepidemiol Drug Saf. 2016;25(3):317–324.
58. Brooke BS, Goodney PP, Kraiss LW, Gottlieb DJ, Samore MH, Finlayson SR. Readmission destination and risk of mortality after major surgery: an observational cohort study. Lancet. 2015;386(9996):884–895.
