Tracking implementation strategies in the randomized rollout of a Veterans Affairs national opioid risk management initiative

Abstract

Background

In 2018, the Department of Veterans Affairs (VA) issued Notice 2018-08 requiring facilities to complete “case reviews” for Veterans identified in the Stratification Tool for Opioid Risk Mitigation (STORM) dashboard as high risk for adverse outcomes among patients prescribed opioids. Half of the facilities were randomly assigned to a Notice version including additional oversight. We evaluated implementation strategies used, whether strategies differed by randomization arm, and which strategies were associated with case review completion rates.

Methods

Facility points of contact completed a survey assessing their facility’s use of 68 implementation strategies based on the Expert Recommendations for Implementing Change taxonomy. We collected respondent demographic information, facility-level characteristics, and case review completion rates (percentage of high-risk patients who received a case review). We used Kruskal-Wallis tests and negative binomial regression to assess strategy use and factors associated with case reviews.

Results

Contacts at 89 of 140 facilities completed the survey (64%) and reported using a median of 23 (IQR 16–31) strategies. The median case review completion rate was 71% (IQR 48–95%). Neither the number nor the types of strategies nor completion rates differed by randomization arm. The most common strategies were using the STORM dashboard (97%), working with local opinion leaders (80%), and recruiting local partners (80%). Characteristics associated with case review completion rates included respondents being ≤ 35 years old (incidence rate ratio, IRR 1.35, 95% CI 1.09–1.67) and having < 5 years in their primary role (IRR 1.23, 95% CI 1.01–1.51), and facilities having more prior academic detailing around pain and opioid safety (IRR 1.40, 95% CI 1.12–1.75). Controlling for these characteristics, implementation strategies associated with higher completion rates included (1) monitoring and adjusting practices (adjusted IRR (AIRR) 1.40, 95% CI 1.11–1.77), (2) identifying adaptations while maintaining core components (AIRR 1.28, 95% CI 1.03–1.60), (3) conducting initial training (AIRR 1.16, 95% CI 1.02–1.50), and (4) regularly sharing lessons learned (AIRR 1.32, 95% CI 1.09–1.59).

Conclusions

In this national evaluation of strategies used to implement case reviews of patients at high risk of opioid-related adverse events, point of contact age and tenure in the current role, prior pain-related academic detailing at the facility, and four specific implementation strategies were associated with case review completion rates, while randomization to additional centralized oversight was not.

Trial registration

This project is registered at the ISRCTN Registry with number ISRCTN16012111. The trial was first registered on May 3, 2017.

Background

Concerns about opioid misuse and overdose have led to substantial efforts to decrease high-risk opioid prescribing and improve opioid safety [1,2,3,4,5,6,7]. The Veterans Health Administration (VHA) has been at the forefront of opioid risk assessment and implementation of risk-mitigation strategies, such as the VA Opioid Safety Initiative, which has resulted in a rapid decrease in opioid prescribing, primarily by decreasing initiation of long-term use, among Veterans who use VHA [4,5,6,7,8]. As we increase our understanding of the factors associated with adverse opioid-related events among patients prescribed opioids, we can identify subgroups of patients who may benefit from further assessment and intervention.

As a part of a concerted national effort to identify patients prescribed opioids who are at high risk for adverse events, VA developed the Stratification Tool for Opioid Risk Mitigation (STORM), which uses a predictive model to incorporate electronic health record (EHR)-derived risk factors into a summary score that predicts risk of overdose- or suicide-related health care events or death in patients prescribed opioid analgesics [6, 9]. The VA Office of Mental Health and Suicide Prevention (OMHSP) then developed a suite of web-based reports to convey relevant information to support effective risk management strategies for Veterans at high risk of adverse events, subsequently referred to in this paper as the “STORM dashboard.” In April 2018, the VA issued Notice 2018-08 (updated the next year to Notice 2019-15) [10] requiring VA Medical Centers (“facilities”) to complete case reviews for Veterans prescribed opioids whom the STORM risk model identified as being at very high risk for adverse outcomes. Case reviews involved using the STORM dashboard, chart review, or other data-based procedures to evaluate each patient’s risk level and to determine whether additional risk mitigation strategies (e.g., referral to a pain specialist, prescription of naloxone) were indicated. To examine the impact of mandated centralized oversight of implementation completion, half of the facilities were randomly assigned to receive a version of Notice 2018-08 with language indicating that additional oversight, including required local action plans and receipt of external support, would follow if facility case review completion rates (the percentage of very high-risk patients receiving a case review) were < 97% 6 months after the notice [9]. Our evaluation team was tasked with understanding how facilities responded to these policy notices in terms of implementation activities and case review completion rates [11].

We focused our evaluation on facility-level implementation strategies, i.e., specific methods or techniques used to enhance the adoption and sustainment of an evidence-based intervention or practice [12]. Without such strategies, the adoption and successful implementation of new treatments or processes often fail. The Expert Recommendations for Implementing Change (ERIC) group recently specified, defined, and clustered 73 implementation strategies [13,14,15]. We previously developed a tool to assess ERIC-defined implementation strategies in the context of VA’s national effort to expand the use of newer, more effective hepatitis C (HCV) medications [16, 17]. In these prior efforts, we found that the number of strategies used was associated with more treatment starts and that specific strategies were associated with improved uptake of the newer evidence-based medications. However, it was unclear whether this survey could inform other implementation efforts.

To understand how facilities responded to the policy notice in this national randomized program evaluation, we aimed to (1) assess implementation strategies that facilities used to implement case reviews, (2) determine whether those strategies differed for facilities receiving policy notices including additional oversight from the VA national office providing implementation support, and (3) examine the association between implementation strategies, respondent characteristics, facility characteristics, and case review completion rates.

Methods

Regulatory approval

The VA Pittsburgh Healthcare System Institutional Review Board approved this project as research. The protocol for this implementation evaluation was published previously [11].

Survey development

Facilities were randomized to receive one of two versions of a policy notice that set the same goal but differed in the oversight facilities would receive. The survey was designed to identify how facilities responded to these notices and to characterize the implementation strategies they used. To measure the implementation strategies that facilities chose, the evaluation team adapted our prior implementation strategy survey to this evaluation [16, 17]. After review, four financial strategies were deemed irrelevant in VA and omitted from the survey (“capitated payments,” “change liability laws,” “place innovation on fee for service lists/formularies,” and “alter patient/consumer fees”). Furthermore, “develop and organize quality monitoring systems” was omitted because it was inherent to the STORM dashboard. Relevant examples were added to the survey as parenthetical statements where needed for clarity. The remaining 68 items were reviewed by partners at OMHSP, and minor edits were made to improve the clarity of the survey (Additional file 1). The strategies were arranged in the survey in 9 previously defined implementation clusters [14], and respondents were asked (1) whether their facility used each strategy to promote case review completion (yes/no) and (2) whether the strategy was used in response to the policy notice or was otherwise in place at the facility. The latter question was included because opioid safety was an ongoing national VA priority before and during the study period, and some facilities may have instituted practices to improve opioid safety prior to, or independent of, Notice 2018-08. The survey also collected the following respondent demographics: sex, gender, age, race/ethnicity, degree, professional role, hours spent in patient care per week, and opioid prescribing privileges (yes/no).
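
To make the paired response structure concrete, here is a minimal Python sketch (we use Python for all illustrative examples in this rendering; the strategy names and cluster labels below are invented, not the actual survey export) of how the yes/no responses could be tallied into per-cluster counts:

```python
from collections import defaultdict

# Illustrative responses for one facility: each item records whether the
# strategy was used and whether it was used in response to the notice.
responses = [
    # (strategy, cluster, used, due_to_notice)
    ("Use the STORM dashboard", "evaluative and iterative strategies", True, True),
    ("Inform local opinion leaders", "develop stakeholder interrelationships", True, False),
    ("Conduct an initial training session", "train and educate stakeholders", False, False),
]

used_by_cluster = defaultdict(int)
notice_by_cluster = defaultdict(int)
for strategy, cluster, used, due_to_notice in responses:
    used_by_cluster[cluster] += int(used)
    notice_by_cluster[cluster] += int(used and due_to_notice)

print(dict(used_by_cluster))
print(dict(notice_by_cluster))
```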

Participant recruitment

OMHSP provided the email address of a point of contact responsible for implementing the policy notice at each facility. Of 140 facilities, 136 had a designated point of contact. We emailed initial invitations and the electronic survey to primary points of contact at the end of Fiscal Year 2019, 6 months after the policy notice was released. We sent up to two weekly reminder emails, followed by phone calls and instant messages through the VA communication system as necessary. If a primary point of contact did not respond and a secondary point of contact was on record, we contacted the secondary point of contact (n = 10). If a point of contact suggested a colleague who would be better suited to complete the survey, we contacted that colleague (n = 5).

Case review completion rates

Case review completion rates for each facility were calculated as the number of case reviews completed for very high-risk patients (numerator) divided by the number of patients in the very high-risk category (denominator) between July and September 2018 (the fourth quarter of the VA fiscal year). These data were available on a dashboard used by OMHSP to report a wide range of quality improvement metrics. A link to this dashboard and the case review metrics were prominently posted on the STORM dashboards. Our partners granted us access to the data to contribute to quality improvement efforts in VA.
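
As a concrete illustration of this calculation (with invented counts, since the dashboard data are internal to VA), the facility-level rate reduces to a simple ratio:

```python
# Hypothetical Q4 FY2018 counts for three facilities.
completed_reviews = {"Facility A": 12, "Facility B": 19, "Facility C": 7}
very_high_risk_patients = {"Facility A": 18, "Facility B": 20, "Facility C": 15}

# Completion rate = completed case reviews / very high-risk patients.
completion_rate_pct = {
    facility: round(100 * completed_reviews[facility] / very_high_risk_patients[facility], 1)
    for facility in completed_reviews
}
print(completion_rate_pct)  # {'Facility A': 66.7, 'Facility B': 95.0, 'Facility C': 46.7}
```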

Facility-level covariates

The randomization arm was included as the primary independent variable. Sites were coded as receiving a policy notice with versus without language requiring additional oversight if the target case review completion rate was not achieved. We examined several facility characteristics described in detail in our published evaluation protocol [11]. In brief, we examined organizational factors following Glasgow et al.’s analytic framework, which includes two broad dimensions: (1) facility structure variables (e.g., rurality) and (2) staffing/culture variables (e.g., psychological safety). We focused on data available across VA facilities and those that stakeholders defined as most relevant to the implementation effort [18]. The “total number of outpatient visits per year” included all visits at the facility and was categorized into quartiles. Primary care panel size was defined as the number of primary care doctors per 100 patients. Facility complexity is a long-standing VA variable describing the nature of the services provided at VA facilities. The score ranges from 1 to 3, with level 1 being the most complex. Complexity is scored using national data on workload (patient load and acuity), research dollars, the availability of complex clinical programs (e.g., ICU care, transplant, neurosurgery), and location (i.e., rurality) [19]. As in prior work, we classified facilities as level 1 (higher complexity) vs. other [19]. Rurality was classified as yes/no using the VA definition, which is based on the Rural-Urban Commuting Areas System and defines urban as “census tracts with at least 30% of the population residing in an urbanized area” [20, 21]. Because a facility’s case review completion rate could be higher if fewer case reviews were required, we included each facility’s number of patients in the “very high-risk” category as a covariate, defined as the lowest quartile of required case reviews vs. other quartiles at baseline. “Workplace performance” was derived from employee ratings of 6 items from the 2018 VA All-Employee Survey that assessed resource availability, training, goals, and innovation [22]. This continuous score ranges from 1 to 5, with higher scores being more favorable.

VA’s Academic Detailing Service provides targeted one-on-one training and problem-solving, normative feedback, and educational materials for providers around high-priority practice improvement and prescribing challenges [6, 23,24,25]. We included the number of academic detailing events on relevant topics that occurred at each facility in the 6 months prior to the release of the policy notice, using Academic Detailing Service data provided by VA’s Pharmacy Benefits Management Services. Specifically, this included the number of academic detailing events that covered the STORM dashboard, pain dashboard, opioid use disorder, or other pain management topics. For analyses, we included a binary variable indicating whether facilities had any academic detailing events specifically around the STORM and/or other opioid safety dashboards (yes/no). We also categorized the overall number of pain-related academic detailing events into quartiles for analyses.
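
A minimal pandas sketch of this coding, using invented event counts (the paper’s binary flag was specific to STORM/opioid-safety-dashboard detailing; here we approximate it with a simple any-event indicator):

```python
import pandas as pd

# Hypothetical per-facility counts of pain-related academic detailing
# events in the 6 months before the policy notice.
events = pd.Series([0, 2, 5, 1, 9, 3, 0, 7, 4, 6], name="detailing_events")

# Binary indicator of any detailing (illustrative stand-in for the
# dashboard-specific yes/no variable described above).
any_detailing = events > 0

# Quartile coding of the overall pain-related event count.
quartile = pd.qcut(events, q=4, labels=["Q1", "Q2", "Q3", "Q4"])
print(pd.DataFrame({"events": events, "any": any_detailing, "quartile": quartile}))
```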

Analyses

We compared facility characteristics between responding and non-responding facilities using t tests for continuous variables and chi-square tests for categorical variables, and we summarized the self-reported characteristics of responding points of contact using means and proportions, as appropriate. We computed a cross-tabulation of each of the 68 implementation strategies overall and by randomization arm (policy notice type). We then computed the proportion of strategies used within each implementation strategy cluster [14] and the total number of strategies used, overall and by randomization arm. We repeated this process to describe the number and types of strategies implemented as a direct response to the policy notice. We then assessed differences in the number and types of strategies used by randomization arm using Kruskal-Wallis tests.
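
The published analyses were run in SAS and Stata; as a rough illustration of the by-arm comparison, a Kruskal-Wallis test on invented strategy counts might look like this in Python:

```python
from scipy.stats import kruskal

# Hypothetical numbers of strategies endorsed per facility, by arm.
oversight_arm = [25, 31, 18, 22, 27, 30, 24]
non_oversight_arm = [23, 16, 24, 21, 19, 28, 20]

h_stat, p_value = kruskal(oversight_arm, non_oversight_arm)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.3f}")
```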

We used negative binomial regression to model the case review completion rates, with results reported as incidence rate ratios (IRRs). We first assessed the unadjusted associations between case review completion rates and the respondent and facility characteristics. Characteristics associated with the case review completion rates at the p < 0.15 level were included in subsequent adjusted analyses. We then modeled the adjusted and unadjusted associations of each individual implementation strategy, and of the total number of strategies used, with the case review completion rates. All analyses were conducted using SAS 9.4 (SAS Institute, Cary, NC) and Stata 14 (StataCorp, College Station, TX).
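
As a sketch of this modeling approach (the exact specification is our assumption, not the authors’ published code: we treat completed reviews as the count outcome with required reviews entering as a log offset, so that exponentiated coefficients are IRRs; data are invented):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical facility-level data.
df = pd.DataFrame({
    "completed":      [12, 18, 7, 25, 14, 30, 9, 16],   # case reviews done
    "required":       [18, 20, 15, 28, 19, 31, 17, 22], # very high-risk patients
    "poc_age_le_35":  [1, 0, 1, 0, 1, 1, 0, 0],         # respondent age <= 35
    "detailing_high": [0, 1, 0, 1, 1, 1, 0, 0],         # high prior detailing
})

# Negative binomial GLM with a log offset, so coefficients describe rates.
model = smf.glm(
    "completed ~ poc_age_le_35 + detailing_high",
    data=df,
    family=sm.families.NegativeBinomial(),
    offset=np.log(df["required"]),
).fit()

# Exponentiated coefficients and confidence limits give IRRs with 95% CIs.
print(np.exp(model.params))
print(np.exp(model.conf_int()))
```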

Results

Respondents

STORM points of contact from 101 facilities opened the survey, and those from 92 facilities completed it. Of these 92 facilities, case review completion rates were available for 89, which were included in the subsequent analyses. Responding facilities (n = 92) did not significantly differ from non-responding facilities in terms of facility-level characteristics (Additional file 2), although there was a non-significant trend towards responding facilities being more likely to be level 1 complexity (72% vs. 54%, p = 0.07). Most responding points of contact were non-Hispanic and White (71%). Clinicians and pharmacists made up 83% of the sample, and 42% of points of contact reported having opioid prescribing privileges (Table 1).

Table 1 Characteristics of participants and facilities responding to ERIC survey

Implementation strategy use

Included facilities (n = 89) endorsed a median of 23 (IQR 16–31) strategies and a median of 18 (IQR 11–25) strategies used specifically because of the policy notice. The most commonly used strategies were using the STORM dashboard (97%), informing local opinion leaders of the need to complete case reviews (80%), and recruiting and cultivating relationships with local partners (80%) (Additional file 3). Overall, facilities reported that most implementation strategies (80%) were attributed to the policy notice. Figure 1 illustrates the use of individual strategies, grouped by cluster and shown in order from first to last on the survey, for individual facilities. The most commonly used implementation clusters were “adapt and tailor to the context,” “develop stakeholder interrelationships,” and “evaluative and iterative strategies.” Ninety-four to 98% of facilities reported using at least one strategy in these clusters. The least commonly used cluster of strategies was “engage consumers” (i.e., patients); only 13% of facilities used any strategies from this cluster (Fig. 2).

Fig. 1 Strategy heat map. The figure shows the use of individual strategies, grouped by cluster and ordered from first to last on the survey, for individual facilities. Each facility is a row, and rows are sorted so that the facilities using the most strategies appear at the bottom of the graphic. A box is black if the facility reported using the strategy and white if it did not
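
A heat map in this style can be reproduced with a few lines of matplotlib; the endorsement matrix below is simulated rather than the study data:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=0)
# Simulated 0/1 endorsement matrix: 89 facilities x 68 strategies.
used = rng.random((89, 68)) < 0.35

# Sort rows so facilities endorsing the most strategies sit at the bottom.
used = used[np.argsort(used.sum(axis=1)), :]

plt.imshow(used, cmap="gray_r", aspect="auto", interpolation="nearest")
plt.xlabel("Strategy (survey order, grouped by cluster)")
plt.ylabel("Facility (sorted by total strategies used)")
plt.title("Simulated strategy heat map")
plt.show()
```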

Fig. 2 Implementation strategy use by cluster. The black bars show the proportion of all facilities that used at least one strategy in a cluster. The white bars show the proportion of facilities that reported using at least one strategy in the cluster because of the policy notice

There were no significant differences in the total number of implementation strategies used between the oversight and non-oversight randomization arms (median 25 vs. 23, respectively, p = 0.80). Although the proportion of facilities that used at least one strategy in a given implementation cluster did not vary by arm, facilities randomized to the oversight arm used slightly more strategies from the “adapt and tailor” cluster than facilities in the non-oversight arm (median of 4 vs. 3 strategies in the cluster, p = 0.049) (Table 2).

Table 2 Number of strategies used by cluster, overall and by randomization arm

Case review completion rates

The median number of case reviews requested of the facilities at baseline was 18 (IQR 9–28). The median case review completion rate was 71% (IQR 48–95%). Overall, 18 facilities (20%) met the 97% target completion rate established a priori by OMHSP. Case review completion rates did not significantly differ by randomization arm (78% in the non-oversight arm vs. 71% in the additional oversight arm, p = 0.26). However, the non-oversight randomization arm was significantly more likely to meet the 97% target (30% vs. 11%, p = 0.04).

In univariate analyses (Table 3), respondent characteristics associated with higher case review completion rates included younger age (incidence rate ratio, IRR 1.35, 95% CI 1.09–1.67 for age ≤ 35 vs. older) and having less than 5 years in their current primary role (IRR 1.23; 95% CI 1.01–1.51). Only one facility characteristic, increased exposure to pain-related academic detailing prior to baseline, was associated with higher case review completion rates (IRR 1.40, 95% CI 1.12–1.75).

Table 3 Factors associated with case review completion rate

Controlling for these characteristics, the implementation strategies associated with higher case review completion rates (Table 4) were (1) “regular monitoring and adjusting practices as needed” (adjusted IRR, AIRR 1.40, 95% CI 1.11–1.77), (2) “identifying ways that the process of completing case reviews of very high-risk patients can be adapted to meet local needs while still maintaining the core components of the review process” (AIRR 1.28, 95% CI 1.03–1.60), and two education strategies: (3) “conducting an initial training session” (AIRR 1.16, 95% CI 1.02–1.50) and (4) “creating or participating in groups that meet regularly to discuss and share lessons learned” (AIRR 1.32, 95% CI 1.09–1.59).

Table 4 Associations between implementation strategies and case review completion rates

Discussion

We found that facility contacts used diverse implementation strategies in response to a policy notice requiring them to complete case reviews for patients prescribed opioids who were estimated to be at high risk for adverse events. Randomization to receiving additional centralized oversight if a facility failed to meet an a priori target did not significantly affect the implementation strategies that facilities chose. However, we did identify several respondent and facility characteristics and implementation strategies that were associated with improved case review implementation in this national opioid safety effort.

While facilities used a median of 23 implementation strategies, only a few key strategies were associated with case review completion rates. Education and adaptation/tailoring emerged as important implementation strategies in the adjusted models. Specific strategies associated with increased case review completion rates included regularly monitoring and adjusting practices as needed and adapting the process to local needs while retaining fidelity to critical components of the implementation effort. The ability to adapt/tailor efforts to local needs is generally considered important in other implementation efforts [26, 27], and leadership supported and encouraged adaptation in the STORM implementation effort. Understanding which implementation strategies work when and in what context is the “holy grail” of implementation science and can enhance the efficiency, cost-effectiveness, and effectiveness of implementation more broadly.

In addition to the specific survey-defined strategies that were actively used during implementation, we found that pre-implementation academic detailing was one of the strongest predictors of case review completion. Academic detailing is, in and of itself, an evidence-based, multi-component implementation strategy that includes needs assessments, education, and focused training and has contributed to successful implementation efforts across a number of domains [24, 28,29,30,31]. In VA, academic detailing is typically conducted by clinical pharmacists, who deliver face-to-face, 1-on-1 training. Our findings support the notion that academic detailing is effective as a pre-implementation, or “preparation” strategy, as we found that academic detailing measured prior to implementation was associated with the outcome. Engaging in academic detailing may also reflect other site-level contextual factors such as engagement, enthusiasm, or leadership support.

While relatively few implementation strategies were important in predicting case review completion rates, two respondent characteristics were significantly associated with this outcome. These included younger age of the respondent and fewer years in one’s current VA role. This finding is consistent with other studies that have also found that younger and more recently trained clinicians are more likely to be early adopters of innovations [32]. Alternatively, this association could be the result of an unmeasured confounder. Future work should assess how team member characteristics relate to use of specific implementation strategies and implementation success.

Notably, we found that randomization to policy notices requiring additional oversight did not positively influence facility case review completion rates. The two notices differed in that the “additional oversight” notices included language that a site would be required to receive additional oversight and support if it did not reach the 97% case review completion target. One possible explanation for why this requirement did not affect implementation or case review completion is that we measured implementation strategies and completion rates too soon after the policy notice was released, and all sites needed more time to stand up effective processes to complete the case reviews. A second possibility is that the differences between the policy notices were too minor to make an impact. A third possibility is that the requirement of “additional oversight” was in fact detrimental to implementation success. Though this result was possible by chance alone, the facilities randomized to the “additional oversight” policy notice were less likely to reach the 97% threshold of case review completion than those with the standard policy notice. If points of contact perceived “additional oversight” as a threat or negative consequence, the effect may have been detrimental, in contrast to a hypothetical notice that included a positive incentive or reward; this would be consistent with the well-established psychological finding that positive reinforcement increases intrinsic motivation more than negative consequences do [33]. Another complicating factor is that VA has an additional layer of regional management between health care systems and the national office that would provide the additional oversight per the notice. Although regional managers were made aware that the notice required oversight and action planning only at a randomized subset of sites, some chose to implement their own oversight and action planning requirements globally across sites in their region, per their standard practices. Variable regional oversight practices may have minimized the effects of centralized national oversight.

We found several key similarities and differences when we compared our results from this survey to previously published results from a similar survey conducted in the context of VA’s national HCV elimination program. In both implementation efforts, respondents endorsed a similarly high number of strategies, with a median number of 23 vs. 24 implementation strategies in the STORM and HCV efforts, respectively. We also identified strategies associated with more successful implementation that were common to both efforts, including training/education and tailoring to the context [16, 17]. Tailoring the survey for the STORM implementation effort allowed us to reduce the number of strategies that we assessed (from 73 to 68). That we found useful information from both assessments of stakeholder-reported implementation strategies in vastly different implementation efforts speaks to the value of this approach to assessing implementation strategies. An overarching goal in the field of implementation science is to systematize how investigators and healthcare systems choose implementation strategies to address implementation barriers. Moving forward, we hope to develop data around which implementation strategies function best in which settings to address which implementation goals. Then, we can test the application of these strategies using randomized experiments or other large, naturalistic, pragmatic operational initiatives like STORM.

There were also key differences between the national opioid case review and HCV treatment implementation efforts. First, few facilities reported engaging patients in efforts to implement case reviews for very high-risk patients, since this effort was focused on a provider activity. This is in contrast to the HCV effort, where patient-facing strategies were universally used and associated with increased treatment [16, 17]. In the HCV effort, the characteristics of the individual respondents were not associated with the outcomes of interest, while point of contact demographic characteristics appeared to be important to opioid-related case review completion. These key differences may be explained by the differences between the implementation “asks” in the two efforts. The case review effort could be completed by a single team, since the median facility had 18 very high-risk patients. In contrast, in HCV treatment implementation, facilities were asked to treat hundreds of patients, which may have required coordinated implementation efforts across a range of stakeholders. This demonstrates the importance of understanding the context and the “ask,” or complexity, of an implementation effort when determining which strategies to use and how to interpret findings. Stemming from Rogers’ work on diffusion of innovations and incorporated into leading implementation frameworks like the Consolidated Framework for Implementation Research, there is good evidence that the complexity of the innovation affects implementation success [34, 35]. Complexity also likely affects the choice of implementation strategies. Measuring and evaluating the linkages between complexity, innovation implementation, and implementation strategies could inform how to choose implementation strategies based on the complexity of the innovation so as to improve implementation outcomes.

We acknowledge several notable limitations of this study. First, implementation strategies were reported by a single individual from each facility and may not have reflected the full scope of what was being done at the facility level. However, we have previously found high interrater reliability between multiple respondents from the same facility in a similar study using a similar survey [17]. An additional limitation is the potential for contamination across the randomization arms and unblinding to the process of randomization. While facilities were not made aware that two different policy notices were assigned randomly, providers could have communicated and become aware of the randomization process, which may have altered the implementation strategies that were chosen. While the survey provides information about whether facilities used a wide range of strategies, it does not address other key elements of the strategies (e.g., intensity, mechanism, fidelity to the strategy), so a key next step is to collect data that better elucidate how the strategies are used. Another limitation is that respondents may not have been able to distinguish among the strategies employed at a particular facility, to appreciate how the strategies were defined, or to judge whether the strategies and clusters were useful. We tried to mitigate this limitation by engaging stakeholders in survey development, adding examples relevant to the clinical domain, and clustering the strategies to help respondents reach the end of the survey. Nevertheless, future studies could examine whether such respondent issues occurred and whether our strategies to overcome this limitation were effective. Finally, we conducted multiple statistical tests using a relatively small number of facilities, which allows us to generate hypotheses but not draw definitive conclusions from the findings. Despite these limitations, this was a national, randomized program evaluation with an excellent response rate, and our findings add to a growing body of literature assessing a wide variety of implementation strategies across large-scale implementation efforts.

Conclusions

In conclusion, collecting implementation strategy data in this national randomized program evaluation allowed us to track and compare implementation activities across randomization arms and assess their associations with a meaningful implementation metric: case review completion rates. These findings add to the growing body of literature addressing the measurement and interpretation of implementation strategy data. We found that facilities with more pre-implementation academic detailing, facilities whose points of contact were younger and newer to their current role, and facilities using education, adaptation, and tailoring strategies were more successful in implementing case reviews for very high-risk patients.

Availability of data and materials

All data generated or analyzed during this study are included in this published article.

Abbreviations

ERIC: Expert Recommendations for Implementing Change

FY18: Fiscal Year 2018

HCV: Hepatitis C virus

IRR: Incidence rate ratio

OMHSP: Office of Mental Health and Suicide Prevention

STORM: Stratification Tool for Opioid Risk Mitigation

VA: Veterans Affairs

VHA: Veterans Health Administration

References

  1. Rudd RA, Aleshire N, Zibbell JE, Gladden RM. Increases in drug and opioid overdose deaths—United States, 2000–2014. Am J Transplant. 2016;16(4):1323–7.

  2. Von Korff M, Scher AI, Helmick C, et al. United States national pain strategy for population research: concepts, definitions, and pilot data. J Pain. 2016;17(10):1068–80.

  3. Department of Health and Human Services. National Pain Strategy: a comprehensive population health strategy for pain. https://iprcc.nih.gov/docs/HHSNational_Pain_Strategy.pdf. Accessed 30 Jan 2019.

  4. Gellad WF, Thorpe JM, Zhao X, et al. Impact of dual use of Department of Veterans Affairs and Medicare Part D drug benefits on potentially unsafe opioid use. Am J Public Health. 2018;108(2):248–55.

  5. Lin LA, Bohnert ASB, Kerns RD, Clay MA, Ganoczy D, Ilgen MA. Impact of the opioid safety initiative on opioid-related prescribing in veterans. Pain. 2017;158(5):833–9.

  6. Oliva EM, Bowe T, Tavakoli S, et al. Development and applications of the Veterans Health Administration's Stratification Tool for Opioid Risk Mitigation (STORM) to improve opioid safety and prevent overdose and suicide. Psychol Serv. 2017;14(1):34–49.

  7. Gellad WF, Good CB, Shulkin DJ. Addressing the opioid epidemic in the United States: lessons from the Department of Veterans Affairs. JAMA Intern Med. 2017;177(5):611–2.

  8. Hadlandsmyth K, Mosher H, Vander Weg MW, Lund BC. Decline in prescription opioids attributable to decreases in long-term use: a retrospective study in the Veterans Health Administration 2010–2016. J Gen Intern Med. 2018;33(6):818–24.

  9. Minegishi T, Frakt AB, Garrido MM, et al. Randomized program evaluation of the Veterans Health Administration Stratification Tool for Opioid Risk Mitigation (STORM): a research and clinical operations partnership to examine effectiveness. Subst Abus. 2019;40(1):14–9.

  10. VHA Notice 2019-15: Conduct of data-based case reviews of opioid-exposed patients with risk factors. https://www.va.gov/vhapublications/ViewPublication.asp?pub_ID=8420. Accessed 30 Dec 2019.

  11. Chinman M, Gellad WF, McCarthy S, et al. Protocol for evaluating the nationwide implementation of the VA Stratification Tool for Opioid Risk Management (STORM). Implement Sci. 2019;14(1):5.

  12. Proctor EK, Powell BJ, McMillen JC. Implementation strategies: recommendations for specifying and reporting. Implement Sci. 2013;8:139.

  13. Waltz TJ, Powell BJ, Chinman MJ, et al. Expert Recommendations for Implementing Change (ERIC): protocol for a mixed methods study. Implement Sci. 2014;9:39.

  14. Waltz TJ, Powell BJ, Matthieu MM, et al. Use of concept mapping to characterize relationships among implementation strategies and assess their feasibility and importance: results from the Expert Recommendations for Implementing Change (ERIC) study. Implement Sci. 2015;10:109.

  15. Powell BJ, Waltz TJ, Chinman MJ, et al. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci. 2015;10:21.

  16. Rogal SS, Yakovchenko V, Waltz TJ, et al. Longitudinal assessment of the association between implementation strategy use and the uptake of hepatitis C treatment: year 2. Implement Sci. 2019;14(1):36.

  17. Rogal SS, Yakovchenko V, Waltz TJ, et al. The association between implementation strategy use and the uptake of hepatitis C treatment in a national sample. Implement Sci. 2017;12(1):60.

  18. Glasgow JM, Yano EM, Kaboli PJ. Impacts of organizational context on quality improvement. Am J Med Qual. 2013;28(3):196–205.

  19. Veterans Health Administration. VHA Facility Complexity Model 2015. http://opes.vssc.med.va.gov/FacilityComplexityLevels/Pages/default.aspx. Published 2015. Accessed 30 Dec 2016.

  20. Phibbs CS, Cowgill EH, Fan AY. Guide to the PSSG enrollee file. Menlo Park, CA: VA Palo Alto Health Economics Resource Center; 2015.

  21. Office of Rural Health webpage. U.S. Department of Veterans Affairs. https://www.ruralhealth.va.gov/aboutus/ruralvets.asp. Published 2020. Accessed 22 Apr 2020.

  22. VA All Employee Survey. National Center for Organization Development. https://www.va.gov/NCOD/VAworkforcesurveys.asp. Published 2018. Accessed 6 Nov 2019.

  23. Harris AH, Bowe T, Hagedorn H, et al. Multifaceted academic detailing program to increase pharmacotherapy for alcohol use disorder: interrupted time series evaluation of effectiveness. Addict Sci Clin Pract. 2016;11(1):15.

  24. Lubelchek RJ, Hotton AL, Taussig D, Amarathithada D, Gonzalez M. Scaling up routine HIV testing at specialty clinics: assessing the effectiveness of an academic detailing approach. J Acquir Immune Defic Syndr. 2013;64(Suppl 1):S14–9.

  25. Midboe AM, Wu J, Erhardt T, et al. Academic detailing to improve opioid safety: implementation lessons from a qualitative evaluation. Pain Med. 2018;19(suppl_1):S46–53.

  26. Baker R, Camosso-Stefinovic J, Gillies C, et al. Tailored interventions to address determinants of practice. Cochrane Database Syst Rev. 2015;4:CD005470.

  27. Bosch M, van der Weijden T, Wensing M, Grol R. Tailoring quality improvement interventions to identified barriers: a multiple case analysis. J Eval Clin Pract. 2007;13(2):161–8.

  28. Chhina HK, Bhole VM, Goldsmith C, Hall W, Kaczorowski J, Lacaille D. Effectiveness of academic detailing to optimize medication prescribing behaviour of family physicians. J Pharm Pharm Sci. 2013;16(4):511–29.

  29. Goetz MB, Hoang T, Bowman C, et al. A system-wide intervention to improve HIV testing in the Veterans Health Administration. J Gen Intern Med. 2008;23(8):1200–7.

  30. Bounthavong M, Lau MK, Popish SJ, et al. Impact of academic detailing on benzodiazepine use among veterans with posttraumatic stress disorder. Subst Abus. 2020;41(1):101–9.

  31. Bounthavong M, Devine EB, Christopher MLD, Harvey MA, Veenstra DL, Basu A. Implementation evaluation of academic detailing on naloxone prescribing trends at the United States Veterans Health Administration. Health Serv Res. 2019;54(5):1055–64.

  32. Velez MR, Auseon AJ. Defining characteristics of early adopters of multimodality cardiovascular imaging. Echocardiography. 2014;31(7):802–8.

  33. Woolley K, Fishbach A. It's about time: earlier rewards increase intrinsic motivation. J Pers Soc Psychol. 2018;114(6):877–90.

  34. Damschroder LJ, Lowery JC. Evaluation of a large-scale weight management program using the consolidated framework for implementation research (CFIR). Implement Sci. 2013;8:51.

  35. Valente TW, Rogers EM. The origins and development of the diffusion of innovations paradigm as an example of scientific growth. Sci Commun. 1995;16(3):242–73.

Acknowledgements

This work would not have been possible without the cooperation and support of our partners in the Office of Mental Health and Suicide Prevention and in the HSR&D-funded Partnered Evidence-Based Policy Resource Center. The contents of this paper are solely the responsibility of the authors and do not represent the views of the Department of Veterans Affairs or the US Government.

Funding

This work was funded by a grant from the Department of Veterans Affairs Health Services Research and Development Service: The STORM Implementation Program Evaluation (SDR 16-193).

Author information

Contributions

HZ, MM, LH, WG, GM, EO, EL, JH, and JT helped with the acquisition of data. HZ, MM, and SR analyzed the data. SR drafted the manuscript. All authors worked on study conception and design, interpretation of data, and critical review and approval of the final manuscript.

Corresponding author

Correspondence to Shari S. Rogal.

Ethics declarations

Ethics approval and consent to participate

The VA Pittsburgh Healthcare System Institutional Review Board approved this research study.

Consent for publication

Not applicable

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Additional file 1. Implementation strategy survey.

Additional file 2. Characteristics of participating vs. non-participating facilities.

Additional file 3. Use of implementation strategies: overall and due to the policy notice, by randomization arm (n = 89).

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Rogal, S.S., Chinman, M., Gellad, W.F. et al. Tracking implementation strategies in the randomized rollout of a Veterans Affairs national opioid risk management initiative. Implementation Science 15, 48 (2020). https://doi.org/10.1186/s13012-020-01005-y
