
Impacts of innovation in dental care delivery and payment in Medicaid managed care for children and adolescents

Abstract

Background

We evaluated a dental care organization's 14-county quality improvement program in care delivery and payment for child and adolescent Medicaid managed care beneficiaries after 2 years of implementation.

Methods

Counties were randomly assigned to either the intervention (PREDICT) or control group. Using Medicaid administrative data, we estimated PREDICT intervention effects (formally, “average marginal effects”) on dental care utilization and costs to Medicaid with difference-in-differences regression models, controlling for patient and county characteristics.

Results

Average marginal effects of PREDICT on expected use and expected cost of services per patient (child or adolescent) per quarter were small and insignificant for most service categories. There were statistically significant effects of PREDICT (p < .05), though still small, for certain types of service:

  1. Expected number of diagnostic services per patient-quarter increased by 0.009 units;

  2. Expected number of sealants per patient-quarter increased by 0.003 units, and expected cost by $0.06;

  3. Total expected cost per patient-quarter for all services increased by $0.64.

These consistent positive effects of PREDICT on diagnostic and certain preventive services (i.e., sealants) were not accompanied by increases in more costly service types (i.e., restorations) or extractions.

Conclusion

The major hypothesis that primary dental care (selected preventive services and diagnostic services in general) would increase significantly over time in PREDICT counties relative to controls was supported. There were small but statistically significant increases in differential use of diagnostic services and sealants. Total cost per beneficiary rose modestly, but restorative and extraction costs did not. The findings suggest favorable developments within PREDICT counties in enhancing preventive and diagnostic procedures, while holding the line on expensive restorative and extraction procedures.


Background

Overview

Access to dental care has improved for children from low-income families in the United States in recent decades [1]. These changes have occurred because of Medicaid expansion, growth of Federally Qualified Health Centers, and development of private and not-for-profit dental care organizations [2]. Access nevertheless remains uneven with rural areas having lower rates of care than areas with greater population density [3].

Delivery system changes, in which many Medicaid recipients are now served by dental care organizations (DCOs), have created an opportunity to test hub and spoke models [4]. In such models, programs in schools and community facilities provide basic care delivered by expanded practice dental hygienists (EPDHs) or therapists and refer complex care to hubs staffed by general dentists and specialists.

In parallel with delivery changes, protocols have been developed to lower disease levels. Examples include using caries risk status to determine the amount and intensity of preventive and restorative care, including fissure sealants and application of topical fluorides and silver diamine fluoride, to prevent or arrest developing dental caries lesions [5, 6]. Hub and spoke models potentially could implement these evolving care protocols efficiently [4, 5].

In this paper, we examine the quality improvement initiative of one DCO that developed a hub and spoke model in rural areas where it had substantial market penetration. This study analyzes intervention effects on utilization and cost of dental services for children and adolescents. This intervention, The Population-centered Risk- and Evidence-based Dental Interprofessional Care Team (PREDICT), was part of the Robert Wood Johnson Foundation Finding Answers: Solving Disparities through Payment and Delivery System Reform program [7].

Methods

Setting

The DCO, Advantage Dental Services (ADS), was a dentist-owned, limited liability corporation. It delivered services to almost 300,000 Medicaid enrollees, primarily in rural Oregon state, U.S.A., through more than 50 staff-model clinics and over 1100 contracted primary care and specialist dentists. The quality improvement (QI) project and evaluation included children and adolescents (age ≤ 18) assigned by the Oregon Health Authority (OHA) to the DCO.

Oregon is in the vanguard of oral health care transformation in the U.S.A. Since 2012 the state’s Medicaid purchasing authority has paid each coordinated care organization (CCO) a global budget capitation per member per month (PMPM). In turn, each CCO makes PMPM payments to providers of oral health services (POHs) [8]. ADS is one of those contracted POHs.

The PREDICT innovation exemplifies value-based care delivery and payment (VBP) models that are being designed to improve access, enhance quality, and lower cost in oral health care [9, 10].

The hub and spoke model

PREDICT was organized to improve access and reduce dental caries for clients in 14 counties. The study population was Medicaid-enrolled children and adolescents ages 0–18. The QI project was financed by research and development funds from ADS’ global budget.

The delivery system was implemented in January 2016 in six randomly chosen test counties. In PREDICT, salaried EPDHs provided screening, risk assessment and preventive care, including non-restorative treatment of carious lesions, in community settings [11]. Care provided by EPDHs was determined by clinical algorithms [6]. Primary care dentists (PCDs) contracted with ADS had access to the electronic dental record (EDR). Regional Manager Community Liaisons were responsible for agreements permitting EPDHs to provide services in community settings. Case managers, who were centralized, served as patient navigators to arrange referrals for services at hub practices.

In the eight control counties, ADS did not change its existing delivery system or payment incentives. Care was primarily delivered by ADS-contracted practices (PCDs) and staff dentists in ADS-owned clinics, but some pre-existing school-based services for screening, application of fluoride varnish, sealants, and dental health education were maintained.

Compensation model

ADS had a bonus system for all 14 counties focused on primary care owner dentists to encourage increasing access. Also, owner dentists shared yearly company profits. PCDs were paid a capitation fee per member per month and dentists employed by company owned practices were salaried.

For employees in PREDICT counties, new QI metrics focused on improving access for children and gave bonuses based on quarterly performance. Dentists and employees in PREDICT counties were subject to financial incentives based on Medicaid members’ access to dental services, preventive treatment (topical fluoride) for children and adolescents, and care within 60 days to high-risk children with urgent need [9].

Data sources

ADS enrollment files were sent to OHA by secure transfer protocol, and OHA added eligibility and claims data. ADS stripped each member's OHA identifiers, replaced them with a masked identifier, and sent the de-identified dataset to the study team. County-level variables for dentist/population ratio [12], percent medically uninsured [13], and population density [14] were derived from state reports.

Dependent variables measuring utilization and cost

Utilization and cost were measured per person per quarter to capture seasonal variation and to increase the granularity of analysis of utilization and cost over time. The four-year observation period was 2014–2017; 2014–2015 was the baseline (pre-intervention) period, and 2016–2017 was the intervention period.

The following services are the study’s dependent measures: any service (CDT D0000-D9999), preventive (D1000–1999), fluoride varnish (D1206), preventive silver diamine fluoride without fluoride varnish (D1208), sealant (D1351), caries arrest (D1354); diagnostic (D0001–0999); restorative (D2000–2999); and extractions (D7111, D7140).
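For concreteness, the sketch below shows one way such CDT-code groupings could be constructed in Stata from claim-level data. The variable names (cdt_code and the derived category and indicator variables) are hypothetical illustrations, not the study's actual programming.

```stata
* Sketch only: assumes each claim line carries a string CDT code, e.g., "D1351".
gen cdt_num = real(substr(cdt_code, 2, 4))                   // "D1351" -> 1351
gen service_cat = "other"
replace service_cat = "diagnostic"  if inrange(cdt_num, 1, 999)      // D0001-D0999
replace service_cat = "preventive"  if inrange(cdt_num, 1000, 1999)  // D1000-D1999
replace service_cat = "restorative" if inrange(cdt_num, 2000, 2999)  // D2000-D2999
replace service_cat = "extraction"  if inlist(cdt_code, "D7111", "D7140")
* Single-code measures are flagged separately
gen byte fl_varnish    = cdt_code == "D1206"
gen byte sdf_no_varn   = cdt_code == "D1208"
gen byte sealant       = cdt_code == "D1351"
gen byte caries_arrest = cdt_code == "D1354"
```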

Specific service utilization was measured as a count variable per quarter. Each distinct, billable service or procedure received a count of 1 in the utilization measure. Costs per quarter were expressed as shadow costs, i.e., what a given service would have cost Medicaid if the allowed fee were the actual transaction price. Shadow costs were imputed from the OHA-allowed fee schedule in 2018 dollars. Services not in the schedule were assigned the average allowed fee for their service category (e.g., preventive, restorative). Since shadow costs were fixed over time, changes in shadow cost reflect changes in the volume of use of each service and in the mix of services within category (both total units of service and their unit costliness), not general dental price inflation.
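The following is a minimal sketch of how shadow costs per person-quarter could be assembled, building on the category variable sketched above and assuming a claim-level file plus a hypothetical fee-schedule lookup file (fee_schedule.dta) holding the 2018 OHA-allowed fee for each CDT code; file and variable names are illustrative.

```stata
* Sketch only: attach the 2018 OHA-allowed fee to each claim line
merge m:1 cdt_code using fee_schedule.dta, keep(master match) nogenerate

* Codes missing from the schedule receive their category's average allowed fee
bysort service_cat: egen cat_avg_fee = mean(allowed_fee_2018)
replace allowed_fee_2018 = cat_avg_fee if missing(allowed_fee_2018)

* One record per person-quarter-category: service count and shadow cost
gen byte one = 1
collapse (sum) n_services = one (sum) shadow_cost = allowed_fee_2018, ///
    by(masked_id year quarter service_cat)
```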

Independent variables

To adjust for potential differences between PREDICT and control counties in personal and area-level determinants of utilization and cost, the following covariates were included in all statistical models: age (0–5, 6–12, and 13–18), gender, self-reported race/ethnicity (White, Hispanic non-White, and Other), days of coverage during the quarter (0–29, 30–59, 60–90), and county characteristics (dentist/population ratio, % medically uninsured, population density per square mile).
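As a small illustration (hypothetical variable names), the age and coverage groupings might be constructed as follows:

```stata
* Sketch only: categorize age in years and days of dental coverage in the quarter
egen age_grp = cut(age), at(0 6 13 19) icodes            // 0 = ages 0-5, 1 = 6-12, 2 = 13-18
egen cov_grp = cut(days_covered), at(0 30 60 91) icodes  // 0 = 0-29, 1 = 30-59, 2 = 60-90 days
```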

Analytic plan

The evaluation plan was to estimate the effect of PREDICT on utilization and cost of services. The principal hypothesis was that PREDICT might increase primary dental care costs, but could reduce total care costs. The underlying thesis was that by increasing access to services in general (expressed as increased use of any dental services and, in particular, preventive and diagnostic services), PREDICT could decrease utilization of expensive restorative and extraction services. Accordingly, the model estimates PREDICT effects on utilization and costs of specific services.

The basic statistical model is:

Equation (1): Utilization and Cost = f (gender, age, race/ethnicity, length of dental benefit coverage, county characteristics, indicator for residing in a PREDICT (intervention) county, a time indicator for the pre- vs. post-intervention period, an interaction term for residing in a PREDICT county*post-intervention time period, random error). The interaction term coefficient estimates the effect of the PREDICT intervention.
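Written out in regression form (our notation, not the authors’), Eq. (1) corresponds to the standard DID specification:

```latex
Y_{ict} = \beta_0 + \beta_1\,\mathrm{PREDICT}_c + \beta_2\,\mathrm{Post}_t
        + \beta_3\,(\mathrm{PREDICT}_c \times \mathrm{Post}_t)
        + X_{ict}'\gamma + Z_c'\delta + \varepsilon_{ict},
```

where Y_{ict} is utilization or cost for enrollee i in county c and quarter t, X_{ict} collects the individual covariates, Z_c the county characteristics, and the interaction coefficient β3 is the DID estimate of the PREDICT effect.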

This analysis employs a difference-in-differences (DID) approach [15,16,17]. Stata Release 15 was used for all analyses, and the option “vce(cluster)” was used to calculate robust standard errors adjusted for within-county correlation and heteroskedasticity [18]. The intervention effect is measured as the change in utilization and cost per patient per quarter from pre- to post-intervention in PREDICT versus control counties. Because the number of counties was small and spillover effects between adjacent PREDICT and control counties were possible, DID was applied in individual enrollee-level analyses, including available county-level characteristics, to eliminate omitted-variable bias from unobserved, time-invariant differences in enrollee characteristics between PREDICT and control counties.

Figure 1 illustrates the logic of the DID approach in estimating intervention effects. Intervention effects are estimated by subtracting change over time in the outcome of interest in the control group from its change over time in the intervention group. The control group is chosen to reflect what would have happened in the intervention group if it were not subject to the intervention (the “unobserved counterfactual” in Fig. 1).
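In its simplest, unadjusted form, the estimator is the difference between the two groups’ changes over time:

```latex
\widehat{\mathrm{DID}}
  = \bigl(\bar{Y}^{\mathrm{PREDICT}}_{\mathrm{post}} - \bar{Y}^{\mathrm{PREDICT}}_{\mathrm{pre}}\bigr)
  - \bigl(\bar{Y}^{\mathrm{control}}_{\mathrm{post}} - \bar{Y}^{\mathrm{control}}_{\mathrm{pre}}\bigr),
```

which, after covariate adjustment, is what the interaction coefficient in Eq. (1) estimates.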

Fig. 1

Estimating Intervention Effects in Difference-in-Differences. Source: Columbia Public Health, Difference-in-difference estimation: the parallel trends assumption. URL: https://www.publichealth.columbia.edu/research/population-health-methods/difference-difference-estimation

Parallel trend testing

To produce unbiased estimates of PREDICT effects, DID requires that baseline trends in utilization and cost be parallel between the PREDICT and control groups. This test is required to rule out confounding by omitted factors that vary over time, differ between the intervention and control groups, and affect the dependent variable [17]. The detailed results of those tests are in Additional file 1 (Figures FA1, FA2, and FA3). We perform a more clear-cut test for baseline parallel trends, which uses a linear term for quarter time (valued from 1 to 8) and includes season dummies and all other covariates in Eq. (1). The test for parallel baseline linear trends (the null hypothesis) is based on the interaction between the intervention group dummy and the linear time variable. If this interaction is statistically significant (p < .05), the null hypothesis of parallel trends is rejected. Columns (1) and (5) of Table 5 display the results of this test for the nine expected utilization and expected cost variables, respectively.
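A sketch of this baseline-trend test is shown below, with hypothetical variable names and a linear outcome model for clarity; the study’s actual tests were run within the covariate-adjusted framework described in the next section.

```stata
* Sketch only: restrict to the 8 baseline quarters (2014-2015); qtr_time = 1..8.
* A significant PREDICT-group-by-trend interaction rejects parallel baseline trends.
regress n_services i.predict_cty##c.qtr_time i.season i.age_grp i.female ///
    i.race i.cov_grp dentist_ratio pct_uninsured pop_density ///
    if year <= 2015, vce(cluster county)
test 1.predict_cty#c.qtr_time
```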

Regression model functional forms

Estimates of PREDICT effects on outcomes were based on zero-inflated negative binomial regression (ZINB) for count variables of dental service utilization, which accounts for the large number of observations with zero utilization [19], and two-part regression models (TPM) for dental service cost variables [20]. ZINB and TPM produce more robust and efficient estimates of expected utilization counts and expected costs, respectively, by combining estimates of the probability of any utilization or cost with estimates of the level of utilization or cost, given any utilization or cost.

ZINB analyses deployed logistic regression for the probability-of-utilization portion and a negative binomial model for the count portion (level of utilization, given positive utilization). TPM regressions estimated logistic models for the probability of any cost and a generalized linear model (GLM) for the level of cost, given positive cost. GLM was based on a log link and gamma distribution. Specification tests revealed ZINB models of expected utilization to be superior to the alternative of zero-inflated Poisson. Deb and Norton, in their review paper on health care utilization and expenditures, have summarized the superiority of these models over ordinary least squares and other techniques [21].
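A hedged sketch of the two model families in Stata follows (hypothetical variable names; twopm is the user-written command of Belotti et al. [20], installed with ssc install twopm; the authors’ exact option choices may differ).

```stata
* Count outcomes: zero-inflated negative binomial with cluster-robust SEs.
* A constant-only inflation equation is shown for brevity.
zinb n_services i.predict_cty##i.post i.season i.age_grp i.female i.race ///
    i.cov_grp dentist_ratio pct_uninsured pop_density, ///
    inflate(_cons) vce(cluster county)

* Cost outcomes: two-part model (logit for any cost; GLM with gamma family and
* log link for the level of cost, given positive cost).
twopm shadow_cost i.predict_cty##i.post i.season i.age_grp i.female i.race ///
    i.cov_grp dentist_ratio pct_uninsured pop_density, ///
    firstpart(logit) secondpart(glm, family(gamma) link(log)) vce(cluster county)
```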

Regression model specifications

Regression analyses used four specifications to identify PREDICT effects. To estimate the average PREDICT effect over the entire intervention period, the first specification (the “All” model) of Eq. (1) uses a single PREDICT*Post-Intervention Period interaction term. The effect of PREDICT is estimated as the average marginal effect, following the model specified in Eq. (1), computed as the average, over the entire study sample, of the difference between individual-level estimates for the PREDICT and control groups. This model matches the DID specification used by Card and Krueger in their classic study of minimum wage effects [22]. Those results are presented, for each dental service type, in columns (2) and (6) of Table 5 in the Results section.
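Continuing the sketch above, one possible way (not necessarily the authors’ exact call) to recover this average marginal effect after the zinb fit is with Stata’s margins command; because the model is nonlinear, the DID quantity of interest is the difference between the marginal effect of the PREDICT indicator in the post period and in the baseline period (cf. Puhani [15]).

```stata
* Sketch only: average marginal effect of the PREDICT indicator in each period;
* the DID effect is the post-period effect minus the baseline-period effect.
margins post, dydx(predict_cty)
```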

The second regression specification (“R_Full”) adds a vector of seasonality dummy variables to the Eq. (1) model to adjust for seasonal variation in dental utilization over the calendar year. Including the seasonality dummies ensures that the estimate of PREDICT’s average effect is not distorted by independent quarterly (seasonal) variation. Those “R_Full” model estimates are presented in columns (3) and (7) of Table 5.

The third regression specification (“R_Reduce”) is included as a general check on the sensitivity of our average marginal effect estimates to the inclusion of covariates. This model excludes all covariates except the seasonality dummies. If randomization of counties to the PREDICT intervention had produced a perfectly balanced set of individual and county characteristics between the PREDICT and control counties (and thus eliminated any confounding), one would expect the PREDICT average marginal effect estimates to be identical for the R_Full and R_Reduce specifications. The R_Reduce model estimates are presented in columns (4) and (8) of Table 5.

In a fourth regression specification, PREDICT effects in each post-intervention quarter are estimated, using eight unique interaction terms of PREDICT*Intervention Quarter. Those interactions isolate PREDICT differential effects over time relative to the baseline period (DID). This specification allows one to disaggregate the impact of PREDICT over time. The estimated average marginal effects of PREDICT on dental services utilization and cost by quarter from this specification are displayed in Figs. 2, 3, and 4 in the Results section.

Fig. 2

Average Marginal Effect on Any Dental, Diagnostic, and Preventive Services (Post-vs. baseline)

Fig. 3

Average Marginal Effect on Fluoride Varnish, Topical Fluoride, and Sealants (Post- vs. baseline)

Fig. 4

Average Marginal Effects on Caries Arrest, Restorative, and Extractions (Post-intervention vs. baseline)

Results

Study results are presented in two parts. The first displays unadjusted descriptive data for children and adolescents (age ≤ 18), distinguishing between PREDICT and control counties. The second presents findings of regression analyses of impacts of PREDICT on utilization and costs.

Descriptive results

Beneficiary demographics and county-level characteristics

Table 1 shows that distributions of age, gender, length of Medicaid dental coverage, and county-level percent medically uninsured are quite similar between PREDICT and control counties at baseline and remain so during the intervention. While statistically significant, differences in length of coverage and percent uninsured are quite small and not likely to be practically meaningful. In contrast, race/ethnicity, county-level dentists per 100,000 population, and population density do differ between PREDICT and control counties. The percentage of white youth is roughly 10 percentage points greater in PREDICT counties. The dentist-to-population ratio is approximately 15 percentage points higher in the PREDICT counties, and population density in PREDICT counties is more than double that of control counties (18 percentage points higher). Clearly, PREDICT counties are less diverse in their race/ethnicity, denser in population, and greater in dentist supply relative to population. These differences, potentially important for dental outcomes, are included as covariates in the regression models and are relatively stable over time. Thus, validity of our DID estimates of PREDICT effects is not likely compromised by these differences.

Table 1 Individual sociodemographic characteristics and county-level characteristics

Annual utilization

Tables 2 and 3 indicate that, across all types of services (use of any dental service and each of eight specific service categories), the percentage of users and the count of use (if any) per year are similar between PREDICT and control group members. This similarity holds for the baseline years and for the majority of service types in the intervention years, except for the count of use of topical fluoride in 2016 and the percentage of users of any dental, preventive, and diagnostic services in 2017.

Table 2 Utilization among all covered children and adolescents (age ≤ 18) by group and year (2014–2015)
Table 3 Utilization among all covered children and adolescents (age ≤ 18) by group and year (2016–2017)

Annual average costs of services

Annual cost patterns by service category are essentially the mirror image of annual utilization and are described in Table 4. Effectively, costs “monetize” per-person utilization by multiplying the count of each specific service by its constant shadow cost per unit.

Table 4 Costs of dental services for all covered children (age ≤ 18) with at least one use per year

Regression results

In this section, only utilization and cost measures significantly affected by PREDICT (p < .05) are discussed, based on ZINB regressions for expected quarterly utilization and two-part models (TPM) for expected quarterly cost. Detailed results, including estimates of the effects of PREDICT and of covariates for personal and area characteristics, are in Additional file 1.

Regression estimates of overall PREDICT effects on specific services

Table 5 summarizes PREDICT average marginal effects on expected use and cost of each dental service type over the entire 8-quarter intervention period. Because the “R_Full” model in Table 5 (columns 3 and 7) takes into account all covariates in Eq. (1), plus the seasonality dummies, we focus on the results of that model. These results complement the next section’s presentation of PREDICT effects by quarter in Figs. 2, 3, and 4.

Table 5 Average marginal effect of expected use and cost & covariate-adjusted baseline trend significance tests

Three statistically significant (p < .05) average marginal effects of PREDICT per quarter per child were observed in the R_Full model (our preferred specification): (1) an increase of $0.64 in total expected cost for all services received; (2) an increase of 0.009 units in expected use of diagnostic services; (3) an increase of 0.003 units and $0.06 in expected use and expected cost of sealants, respectively.

Regression estimates of PREDICT effects by quarter on specific services

Figure 2 (Panels A through F) displays estimated average marginal effects (AME) by quarter of PREDICT on expected utilization and expected cost of dental services overall, diagnostic services, and preventive services. The AMEs estimate the combined effect of PREDICT on each dependent measure, integrating the impact on the probability of any utilization (or cost) with the impact on the level of utilization (or cost), given positive utilization (or cost). Except for a slight decline (< $0.10; p < .05) in expected quarterly diagnostic service costs in intervention quarter 2, none of these quarterly effects was statistically significant. In contrast, use of diagnostic services over the entire intervention period was significantly higher in the PREDICT group (Table 5), which is consistent with the mostly positive point estimates by quarter in Fig. 2.

DID regression model results for expected use of any services

Table 6 explores the determinants of two utilization measures: (1) probability of use of any service and (2) number of services given any use (Panel A). Use of any service was an important component of incentive metrics used in the compensation model. This sub-analysis offers a more detailed examination of determinants of overall access to dental services (any use).

Table 6 Use and cost of any dental services: difference-in-differences analyses

Panel A reveals that, relative to ages 0–5 (i.e., pre-school: the reference category), children ages 6–12 were the most likely to receive any care (24.4 percentage points more than pre-school), followed by teenagers. Relative to White children and adolescents, Hispanic non-White children and adolescents were more likely to access some care (by 2.7 percentage points), and other non-White racial and ethnic groups were less likely (by 10.4 percentage points).

More days of coverage within the quarter increased the probability of utilization. PREDICT effects on the probability of utilization were positive in all but two intervention quarters, but generally small and statistically insignificant, except for quarter 1 (3.0 percentage point increase) and quarter 5 (4.7 percentage point increase). Population density had a statistically significant, but very small, impact on the level of utilization of any services. Neither percent of population medically uninsured nor dentist-to-population ratio significantly affected children’s utilization or total costs.

Figure 3 (Panels A through F) illustrates average marginal effects of PREDICT by quarter on expected use and cost of fluoride varnish, preventive silver diamine fluoride without varnish, and sealants. The only statistically significant PREDICT effect is a small increase in intervention quarter 5 in expected utilization of sealants (0.01 sealant per child) and expected sealant cost ($2 per child). These quarterly impacts are consistent with results in Table 5 over the entire intervention period: a small average increase in expected sealant use of 0.003 sealants and in expected sealant cost of $0.064 per child per quarter.

Figure 4 depicts PREDICT effects for caries arrest, restorative services, and extractions. Only one average marginal effect is statistically significant: a decline of approximately $2 per person in expected cost of caries arrest services in quarter 8. Given the close relationship between expected cost and expected use, the failure of the ZINB count model for caries arrest to converge casts some doubt on the validity of this estimate.

Discussion

The principal hypothesis of this study – that primary dental care (preventive and diagnostic) would increase significantly over time in PREDICT counties relative to controls – was somewhat supported in our findings (see Table 5). There were small but statistically significant increases in differential use of diagnostic services, topical fluoride, and sealants. However, there was no decline in total dental costs. Instead, children and adolescents in PREDICT counties experienced a small but statistically significant rise in total costs. While there was no clear tradeoff between primary care and more expensive procedures, neither did restorative and extraction procedures increase significantly.

The incentive payment dollars earned by participating providers in the PREDICT (test) counties were intentionally not included in the costs measured in this study. By using Medicaid allowed payments for each service as our “cost” metric, the authors chose a “shadow cost” reflecting the mutually acceptable amount that Medicaid was willing to pay for each service and that providers would accept to cover their cost of delivering that service. The study was not intended to assess the cost-effectiveness of the incentive program. Such a study would have included Medicaid’s direct cost of the incentive payments, as well as the costs to Medicaid and participating providers alike of implementing the incentive program. The authors explicitly acknowledge that such a study (not this one) might demonstrate that the direct cost of incentive payments plus any program implementation costs exceeded any dental service cost savings realized as a result of PREDICT.

Limitations

The relatively brief observation period (2 years) for identifying clinically and economically meaningful, statistically significant effects of PREDICT might account for the modest impacts described in this paper. Changes in care delivery and provider incentives constituted major innovations relative to prevailing arrangements, but previous research has established that the full impacts of major innovations generally require several years to appear [23, 24]. Moreover, the hub and spoke model was not fully implemented, because not all school sites were covered and hub practices continued to see a significant number of children and adolescents who potentially could have been seen at community sites.

The small number of counties in the randomization might mean that imbalances in provider, patient, and area characteristics remain between PREDICT and control counties. To confound our estimates of intervention effects, however, these potentially omitted variables would have to be correlated with our utilization and cost variables. Our tests of parallel baseline trends suggest that such potential confounding is not present in this study. Furthermore, the cluster-randomized design does rule out systematic selection bias. The authors therefore argue that our regression covariates have captured salient differences between the intervention and control groups and have effectively eliminated potential omitted-variable bias in the estimates.

Spillover effects on care models and compensation between PREDICT and control counties cannot be ruled out, especially given the presence of ADS in both sets of counties and inherent sharing of best practices among providers and practice administrators in a DCO. Such spillover effects likely reduced observed differences.

Summary

On the whole, our findings, combined with the foregoing caveats, imply that these estimates of certain modest PREDICT effects do not definitively demonstrate causation or substantial change attributable to the intervention. That said, our formal statistical tests suggest that the modest PREDICT effects observed in this study are not tainted by bias and are plausibly attributable to this intervention. Our findings illustrate the challenge in bending the cost curve and achieving improved outcomes.

Only a few categories of utilization and cost for children and adolescents enrolled in Medicaid were significantly influenced by PREDICT. However, a close look at this study’s findings suggests favorable developments within PREDICT counties in enhancing preventive and diagnostic procedures, while simultaneously holding the line on expensive restorative and extraction procedures. In that sense, efforts by this DCO may be bearing fruit, even if certain impacts were not unambiguously attributable to PREDICT per se. This study adds significantly to a sparse literature on dental care QI and innovations of this type.

Availability of data and materials

The data that support the findings of this study are available from Dr. Milgrom upon reasonable request.

References

  1. Dye BA, Mitnik GL, Iafolla TJ, Vargas CM. Trends in dental caries in children and adolescents according to poverty status in the United States from 1999 through 2004 and from 2011 through 2014. JADA. 2017;148(8):550–65.e7.


  2. Bailit HL, Milgrom P. Delivery of oral health care in the United States. In: Mascarenhas AK, Okunseri C, Dye BA, editors. Burt and Eklund’s Dentistry, Dental Practice, and the Community. 7th ed. St. Louis: Elsevier.

  3. Vargas CM, Ronzio CR, Hayes KL. Oral health status of children and adolescents by rural residence, United States. J Rural Health. 2003;19(3):260–8.


  4. Bailit H, Beazoglou T, Drozdowski M. Financial feasibility of a model school-based program in different states. Public Health Rep. 2008;123(6):761–7.


  5. Compton R. Business barriers and opportunities for transforming to preventive care to treat early childhood caries. Pediatr Dent. 2015;37(3):288–93.


  6. Hansen R, Shirtcliff RM, Ludwig S, Dysert J, Allen G, Milgrom P. Changes in silver diamine fluoride use and dental care costs: a longitudinal study. Pediatr Dent. 2019;41(1):35–44.


  7. DeMeester RH, Xu LJ, Nocon RS, et al. Solving disparities through payment and delivery system reform: a program to achieve health equity. Health Aff (Millwood). 2017;36(6):1133–9. https://doi.org/10.1377/hlthaff.2016.0979.


  8. Conrad DA, Milgrom P, Shirtcliff RM, et al. Pay-for-performance incentive program in a large dental group practice. JADA. 2018;149(5):348–52.


  9. Riley W, Doherty M, Love K. A framework for oral health care value-based payment approaches. JADA. 2019;150(3):178–85.


  10. Vujicic M. The dental care system is stuck and here is what to do about it. JADA. 2018;149(3):167–9.


  11. Elgreatly A, Kolker JL, Guzman-Armstrong S, Qian F, Warren JJ. Management of initial carious lesions: Iowa survey. JADA. 2019;150(9):755–65.


  12. Oregon Health Authority. Oral Health in Oregon. A metrics report. 2017. https://perma.cc/F3A8-XZPR. Archived September 4, 2019.


  13. State Health Access Data Assistance Center. Uninsurance Rates for Oregon for 2012–2016. URL: https://perma.cc/4YPB-FPYK. Archived September 4, 2019.

  14. USA.com. Oregon population density county rank. URL: https://perma.cc/8HCF-WJGQ. Archived September 4, 2019.

  15. Puhani P. The treatment effect, the cross difference, and the interaction term in nonlinear “difference-in-difference” models. Econ Lett. 2012;115(1):85–7. https://doi.org/10.1016/j.econlet.2011.11.025.


  16. Angrist J, Pischke JS. Differences-in-differences. In: Mostly harmless econometrics: an empiricist’s companion. Princeton, NJ: Princeton University Press; 2009. p. 227–42.


  17. Columbia Public Health, Difference-in-difference estimation: the parallel trends assumption. URL: https://www.publichealth.columbia.edu/research/population-health-methods/difference-difference-estimation. Accessed 7 May 2021.

  18. Stata Manual on Function of VCE. URL: https://www.stata.com/manuals/xtvce_options.pdf. Accessed 5 Mar 2021.

  19. Stata.com. Zero-inflated negative binomial regression: 1–9. https://www.stata.com/manuals/rzinb.pdf. Accessed 7 May 2021.

  20. Belotti F, Deb P, Manning WG, Norton EC. twopm: two-part models. Stata J. 2015;15(1):3–20. URL: https://www.stata-journal.com/sj15-1.html. Accessed 7 May 2021.


  21. Deb P, Norton EC. Modeling health care expenditures and use. Ann Rev Public Health. 2018;39:489–505.


  22. Card D, Krueger AB. Minimum wages and unemployment: a case study of the fast-food industry in New Jersey and Pennsylvania. AER. 1994;84(4):772–93.


  23. Rogers E. Diffusion of innovations. 5th ed. New York: The Free Press; 2003.


  24. Hwang J, Christensen CM. Disruptive innovation in health care delivery: a framework for business-model innovation. Health Aff. 2008;27(5):1329–35.



Acknowledgements

The authors honor the memory of Howard L. Bailit, DDS, PhD, who recently passed away and who contributed mightily to the early design and evaluation of this quality improvement project. Charles Spiekerman, PhD, who also recently passed away, made outstanding contributions to this evaluation as statistician and research design expert, as well as co-authoring the statistical results in the initial versions of this paper. Gary Allen DMD, Melissa Mitchell, and Jeanne Dysert, all Advantage staff members, contributed to the development and implementation of PREDICT.

Funding

The quality improvement project was financed internally by research and development funds from the company’s global budget. The evaluation was supported by grant number 72558 from the Robert Wood Johnson Foundation as part of the Finding Answers: Solving Disparities through Payment and Delivery System Reform program and by an unrestricted gift from R. Mike Shirtcliff.

Author information

Authors and Affiliations

Authors

Contributions

DAC, PM, JCC, SL, and RMS conceived the evaluation. DAC, JCC, and YD processed and analyzed the deidentified data. PM was the principal investigator of the evaluation. YD performed the primary statistical analyses. DAC and PM drafted the first version of the manuscript, and all authors reviewed and contributed to the final version submitted. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Peter Milgrom.

Ethics declarations

Ethics approval and consent to participate

This deidentified analysis was determined not to be human subjects research by the Human Subjects Division at the University of Washington, Seattle (Study No. 00006095), as defined by U.S. federal and state regulations. All methods were performed in accordance with relevant guidelines and regulations and participants consented/assented to participate in the care program under Oregon USA state law.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1: Appendix 1.

Parallel Trend Tests. To assess the validity of our DID estimates of PREDICT effects, we briefly summarize the results of parallel trend tests for specific services. Our parallel trend tests for each service type examine whether the regression-adjusted differences in baseline values of utilization and cost between the PREDICT and control group are statistically significant, examining each of the eight baseline quarters in 2014 and 2015. Appendix 2. Estimated Difference-in-Difference (DID) Regression Models.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Conrad, D.A., Milgrom, P., Du, Y. et al. Impacts of innovation in dental care delivery and payment in Medicaid managed care for children and adolescents. BMC Health Serv Res 21, 565 (2021). https://doi.org/10.1186/s12913-021-06549-3
