
Outcomes for implementation science: an enhanced systematic review of instruments using evidence-based rating criteria

Abstract

Background

High-quality measurement is critical to advancing knowledge in any field. New fields, such as implementation science, are often beset with measurement gaps and poor quality instruments, a weakness that can be more easily addressed in light of systematic review findings. Although several reviews of quantitative instruments used in implementation science have been published, no studies have focused on instruments that measure implementation outcomes. Proctor and colleagues established a core set of implementation outcomes including acceptability, adoption, appropriateness, cost, feasibility, fidelity, penetration, and sustainability (Adm Policy Ment Health Ment Health Serv Res 36:24–34, 2009). The Society for Implementation Research Collaboration (SIRC) Instrument Review Project employed an enhanced systematic review methodology (Implement Sci 2: 2015) to identify quantitative instruments of implementation outcomes relevant to mental or behavioral health settings.

Methods

Full details of the enhanced systematic review methodology are available (Implement Sci 2: 2015). To increase the feasibility of the review, and consistent with the scope of SIRC, only instruments that were applicable to mental or behavioral health were included. The review, synthesis, and evaluation included the following: (1) a search protocol for the literature review of constructs; (2) the literature review of instruments using Web of Science and PsycINFO; and (3) data extraction and instrument quality ratings to inform knowledge synthesis. Our evidence-based assessment rating criteria quantified fundamental psychometric properties as well as a crude measure of usability. Two independent raters applied the evidence-based assessment rating criteria to each instrument to generate a quality profile.

Results

We identified 104 instruments across the eight constructs: nearly half (n = 50) assessed acceptability and 19 assessed adoption, with all other implementation outcomes yielding fewer than 10 instruments each. Only one instrument demonstrated at least minimal evidence for psychometric strength on all six of the evidence-based assessment criteria. The majority of instruments had no information regarding responsiveness or predictive validity.

Conclusions

Implementation outcomes instrumentation is underdeveloped with respect to both the sheer number of available instruments and the psychometric quality of existing instruments. Until psychometric strength is established, the field will struggle to identify which implementation strategies work best, for which organizations, and under what conditions.


Background

Many new scientific fields, like implementation science, are beset with instrumentation issues such as an oversupply of single-use or adapted instruments that are incommensurable; reliance on instruments with uncertain reliability and validity; and a scarcity of instruments for theoretically important constructs [1]. Systematic instrument reviews can help emerging fields address these issues by applying evidence-based psychometric standards to identify promising instruments in need of further testing and to identify key constructs in need of instrument development. While several recent reviews in implementation science have been published [2–4], none have focused on implementation outcomes instruments. This is a significant limitation because implementation outcomes are perhaps the most critical factor in implementation science: they define what we seek to explain in research and what we seek to improve in practice.

Proctor and colleagues [5] articulated the following core set of seven implementation outcomes: acceptability, feasibility, uptake, penetration, cost, fidelity, and sustainability. Their conceptual model was updated in 2011 to include appropriateness and to rename uptake as adoption [6]. The identification and concrete operationalization of implementation outcomes, separate from service and client outcomes, has clearly shaped the field, earning a total of 479 citations in 5 years and potentially spurring a spike in associated instrument development, with 34.1 % of extant implementation outcome instruments developed since 2009 [7]. Yet the quality of existing implementation outcome instruments remains unclear and is perhaps one of the most critical gaps in the literature. Establishing the psychometric properties of implementation outcome instruments is a necessary step to ensuring that predictors, moderators, and mediators of implementation are identified and that the comparative effectiveness of implementation strategies is measurable [8].

The primary objective of this review is to assess the reliability, validity, and usability of 104 instruments of implementation outcomes for mental health identified through an enhanced systematic review of the literature performed by and consistent with the goals of the Society for Implementation Research Collaboration (SIRC) Instrument Review Project (IRP) team [7]. Consistent with the mission of SIRC, the review of instruments focused on those directly applicable to mental healthcare and behavioral healthcare settings. In this study, we used a modified version of the evidence-based assessment (EBA) rating criteria developed by Hunsley and Mash [9]. The primary modifications to the established EBA included increasing the number of anchors (from 3 to 5) to promote variability of the ratings and excluding some criteria that are not broadly applicable to the majority of instrument types (e.g., test-retest). The EBA rating criteria were also informed by the work of Terwee and colleagues [10]. The final version of the EBA criteria also incorporated feedback from implementation scientists identified through SIRC; more details on the EBA development process can be found in the project's methodology paper [7]. Using this revised version of the EBA rating criteria, we assessed the psychometric properties of both published and unpublished instruments for the eight implementation outcomes outlined above [6]. Results highlight instruments that merit consideration for widespread use in addition to areas for recommended development and testing.

Methods

The enhanced systematic review methodology for the SIRC Instrument Review Project is described in detail elsewhere [7], and a summary is depicted in Fig. 1. Described below is the methodology as applied to the assessment of constructs in the Implementation Outcomes Framework (IOF).

Fig. 1: Rating process methodology

Identification of instruments and related published material

For each implementation outcome construct, we performed systematic literature searches of PsycINFO and ISI Web of Science, two widely used bibliographic databases. Our search strings included the construct name (e.g., adoption) and synonymous terms (e.g., uptake, utilization; taken from Proctor and colleagues [6]), as well as terms for implementation (e.g., dissemination, quality improvement), innovation (e.g., evidence-based treatment), and instrument (e.g., measure) (see Additional file 1 for search string examples). We limited our search to articles that were written in English, peer-reviewed, published between 1985 and 2012, and measured implementation outcomes quantitatively.
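As a rough illustration of how such Boolean search strings can be assembled, the Python sketch below combines abbreviated term lists for the adoption construct; the term lists are illustrative assumptions, and the project's full search strings are given in Additional file 1.

```python
# Hypothetical sketch: assembling a Boolean search string for the "adoption"
# construct. Term lists are abbreviated examples; the project's actual
# search strings are provided in Additional file 1.
construct_terms = ["adoption", "uptake", "utilization"]
implementation_terms = ["implementation", "dissemination", "quality improvement"]
innovation_terms = ["evidence-based treatment", "evidence-based practice"]
instrument_terms = ["measure", "instrument", "scale", "survey"]
setting_terms = ["mental health"]  # scope term added per the funder's priorities


def or_block(terms):
    """Join a term list into a parenthesized OR block, quoting multi-word phrases."""
    quoted = [f'"{t}"' if " " in t else t for t in terms]
    return "(" + " OR ".join(quoted) + ")"


search_string = " AND ".join(
    or_block(terms)
    for terms in (construct_terms, implementation_terms,
                  innovation_terms, instrument_terms, setting_terms)
)
print(search_string)
# (adoption OR uptake OR utilization) AND (implementation OR ...) AND ...
```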

To keep the project manageable and aligned with the priorities of the project's funding agency, we added the term "mental health" to our search strings. Importantly, the search for fidelity instruments was limited to instruments that assessed implementation interventions or that could be applied to any evidence-based practice. This design choice was made because fidelity is one of the only implementation outcomes that has been subjected to extensive reviews, typically focused on specific evidence-based practices (e.g., fidelity to Cognitive Behavior Therapy) [11]. Moreover, because fidelity instruments are typically developed to evaluate specific interventions, their cross-study relevance is limited and thus not a priority for the goals of the SIRC IRP.

A trained research specialist reviewed the titles and abstracts to exclude duplicates and irrelevant articles. Surviving articles were subjected to full-text review, with special attention paid to the method section. The reference section was also scrutinized for articles that might yield additional instruments. A second trained research specialist replicated the construct-focused search and review to ensure consistency and completeness.

To increase the comprehensiveness and exhaustiveness of the search, we employed a respondent-driven, non-probabilistic sampling approach to identify key informants who could help us identify additional instruments in the peer-reviewed literature, the grey literature, or in the later stages of development and testing. This approach, which leverages the informational power of social networks, can augment traditional search methods in situations in which the searched-for items (i.e., instruments) are not clearly and consistently indexed with standard terms in bibliographic databases. We also searched websites and electronic newsletters for additional instruments via search engines such as Google Scholar.

Subsequently, for each instrument, a trained research specialist searched the bibliographic databases for all related published materials, again performing a title and abstract review to ensure the article's relevance to the project (see Table 1). Inclusion criteria required that articles provided original data about the instrument. All retained published material pertinent to an instrument was then compiled into an instrument packet (i.e., a single PDF). If no related published materials were found, efforts were made to contact the instrument's author(s). Additional instruments identified in the instrument-focused search and reviews were subjected to the abovementioned strategies for inclusion in the repository. A second research specialist replicated the instrument-focused search and review process to ensure consistency and completeness.

Table 1 Literature search strategies

Abstraction of relevant evidence-based assessment information

To facilitate efficiency and consistency in the evidence-based assessment of instruments' psychometric and pragmatic properties, a team of trained research specialists electronically highlighted and tagged, with searchable key phrases, the information within each packet pertinent to six evidence-based assessment (EBA) rating criteria: reliability, structural validity, predictive validity, norms, responsiveness, and usability. The definitions and details of these EBA rating criteria can be found in Additional file 2; the process by which they were developed is described elsewhere [7]. Research specialists were trained in the EBA criteria: they first highlighted and tagged pilot packets, which were checked by a project lead, and they received additional support from project leads as needed. Once the project leads ascertained that the research specialists were performing effectively, the research specialists were permitted to highlight and tag packets independently. Specialists who were ready to double-check the work of others were first given a test (i.e., a complicated packet with errors of omission and commission) to assess their skill. Upon achieving 90 % accuracy on the test, research specialists were permitted to double-highlight packets and monitor the work of less experienced specialists.

Analysis and presentation

Each packet was then sent to two independent raters, who were either implementation science experts or advanced research specialists working under the guidance of one of the lead authors. Following the guidelines in Additional file 2, each criterion received a rating of "none" (0), "minimal/emerging" (1), "adequate" (2), "good" (3), or "excellent" (4). Raters used the conservative "worst score counts" methodology [12]: if an instrument exhibited a "minimal" level of reliability in one study and a "good" level of reliability in another, the rater assigned a "minimal" rating for this criterion. When the raters differed by one point, their ratings were averaged unless a clear mistake or misunderstanding had occurred. When the raters differed by more than one point, a third expert rated the instrument and adjudicated the discrepancy with the two raters.
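To make these aggregation rules concrete, the sketch below applies them in Python; the function names, example ratings, and helper structure are illustrative assumptions rather than the project's actual tooling.

```python
# Illustrative sketch of the rating-aggregation rules described above.
# Names and example data are hypothetical; they do not reproduce the
# project's actual tooling.

def worst_score_counts(ratings_across_studies):
    """'Worst score counts': a criterion is rated at the lowest level
    observed across the studies reporting on the instrument."""
    return min(ratings_across_studies)


def reconcile(rater_a, rater_b):
    """Combine two independent raters' scores for one criterion.

    Differences of one point are averaged; larger differences are
    flagged for adjudication by a third expert rater.
    """
    gap = abs(rater_a - rater_b)
    if gap == 0:
        return rater_a, False
    if gap <= 1:
        return (rater_a + rater_b) / 2, False
    return None, True  # needs third-rater adjudication


# Example: reliability reported as "minimal" (1) in one study and "good" (3)
# in another yields a criterion rating of 1 under worst-score-counts.
rater_a = worst_score_counts([1, 3])   # -> 1
rater_b = worst_score_counts([2, 3])   # -> 2
score, needs_adjudication = reconcile(rater_a, rater_b)
print(score, needs_adjudication)       # 1.5 False
```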

Simple statistics (e.g., frequencies) were calculated to describe the availability of information about the psychometric and pragmatic properties of the instruments identified in our review, both overall and by IOF construct. The same procedures were employed to describe the psychometric and pragmatic quality (i.e., the EBA ratings) of the instruments. A total score for each instrument was calculated by summing its EBA ratings. Finally, bar charts were created to facilitate head-to-head comparisons of instruments by IOF construct. These bar charts allow for a visual determination of overall instrument quality, as indicated by the total length of the bars, while the shading allows for a within-criterion comparison of the strength of each instrument with respect to specific criteria.
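As an illustration of how the total scores and stacked head-to-head bar charts could be produced, the following Python sketch sums hypothetical EBA ratings and plots them with matplotlib; the instrument names and ratings are invented placeholders, not data from this review.

```python
# Illustrative sketch of the total-score calculation and head-to-head chart.
# The instruments and ratings below are invented placeholders.
import matplotlib.pyplot as plt

criteria = ["Reliability", "Structural validity", "Predictive validity",
            "Norms", "Responsiveness", "Usability"]
ratings = {                       # EBA ratings (0-4) per instrument
    "Instrument A": [3, 4, 0, 4, 0, 3],
    "Instrument B": [4, 2, 3, 4, 0, 3],
    "Instrument C": [1, 0, 0, 2, 0, 4],
}

# Total score = sum of the six EBA criterion ratings (maximum possible = 24).
totals = {name: sum(vals) for name, vals in ratings.items()}

# Horizontal stacked bars: total bar length shows overall quality, and the
# segments allow within-criterion comparison across instruments.
names = sorted(ratings, key=totals.get)          # order by total score
left = [0] * len(names)
fig, ax = plt.subplots()
for i, criterion in enumerate(criteria):
    widths = [ratings[name][i] for name in names]
    ax.barh(names, widths, left=left, label=criterion)
    left = [l + w for l, w in zip(left, widths)]
ax.set_xlabel("Summed EBA rating (0-24)")
ax.legend(loc="lower right", fontsize="small")
plt.tight_layout()
plt.show()
```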

Results

Instrument search results

Traditional systematic review methods did not prove useful for identifying articles with implementation outcomes instruments. While searches of electronic bibliographic databases yielded 534 unique (non-duplicate) articles (see Fig. 2), only 16 articles containing relevant instruments were retained following our review process (see the construct-specific PRISMA flowcharts in Figs. 3, 4, 5, 6, 7, 8, 9, and 10). By contrast, respondent-driven sampling emails and targeted newsletter reviews proved far more useful for instrument identification, with a total of 88 unique (non-duplicate) instruments obtained using these methods.

Fig. 2: PRISMA enhanced systematic review flowchart for all constructs

Fig. 3: Acceptability PRISMA enhanced systematic review flowchart

Fig. 4: Adoption PRISMA enhanced systematic review flowchart

Fig. 5: Appropriateness PRISMA enhanced systematic review flowchart

Fig. 6: Cost PRISMA enhanced systematic review flowchart

Fig. 7: Feasibility PRISMA enhanced systematic review flowchart

Fig. 8: Fidelity PRISMA enhanced systematic review flowchart

Fig. 9: Penetration PRISMA enhanced systematic review flowchart

Fig. 10: Sustainability PRISMA enhanced systematic review flowchart

Evidence-based assessment rating

Overall, the availability of information on the psychometric and pragmatic properties of the 104 instruments was limited and variable. Only one instrument identified in our review, the Levels of Institutionalization Scales for Health Promotion Programs [13], had information available for all six EBA rating criteria. Of the remaining 103 instruments, 4 % were missing information for only one rating criterion, 20 % for two rating criteria, 29 % for three rating criteria, and 46 % for four or more rating criteria. Put differently, fewer than half of the identified instruments provided information on at least four of the six EBA criteria. In terms of individual rating criteria, information on reliability was not available for 51 % of instruments, for structural validity 74 %, for predictive validity 82 %, for norms 28 %, and for responsiveness 96 % (Table 2). All instruments had information available for usability, as indicated by a simple item count indicating instrument length.
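These availability figures amount to simple tallies of zero ratings per instrument and per criterion; the short Python sketch below shows that bookkeeping with two hypothetical instruments (the ratings are invented for illustration).

```python
# Sketch of how availability of psychometric information can be tallied:
# a criterion with no information receives a rating of 0, so "missing"
# counts are simply counts of zeros. Example ratings are hypothetical.
from collections import Counter

criteria = ["reliability", "structural_validity", "predictive_validity",
            "norms", "responsiveness", "usability"]
instruments = {
    "Example instrument 1": [2, 0, 0, 3, 0, 4],
    "Example instrument 2": [0, 0, 0, 2, 0, 3],
}

# Per instrument: how many of the six criteria lack any information?
missing_per_instrument = {
    name: sum(1 for r in ratings if r == 0)
    for name, ratings in instruments.items()
}

# Per criterion: what percentage of instruments lack information?
n = len(instruments)
pct_missing = {
    crit: 100 * sum(1 for r in instruments.values() if r[i] == 0) / n
    for i, crit in enumerate(criteria)
}
print(Counter(missing_per_instrument.values()))  # distribution of missing counts
print(pct_missing)
```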

Table 2 Number and percentage of instruments with a rating of 1 or more for each construct

Overall, the psychometric and pragmatic qualities of the instruments identified in our review were modest. The total scores for the six EBA rating criteria ranged from 2 to 19.5 (Additional file 3); the highest possible total score was 24, with a median total score of 8 and a modal total score of 7. In terms of individual rating criteria, the percentage of instruments rated "good" or "excellent" was 47 % for reliability, 17 % for structural validity, 9 % for predictive validity, 53 % for norms, 2 % for responsiveness, and 89 % for usability. Further summary statistics can be found in Tables 2, 3, and 4. Graphs depicting the results by construct can be found in Fig. 11 and Additional file 4: Figures S12–S19. Figure 11 depicts the Evidence-Based Assessment Rating Profile (i.e., head-to-head comparison graph) for the adoption construct as an example. All information collected through the review and rating process is available to members of SIRC on our website (see Note 1).

Table 3 Summary statistics of all instrument ratings, including scores of ā€œ0ā€
Table 4 Summary statistics of all instrument ratings, non-zero ratings only
Fig. 11: Adoption head-to-head comparison graph

Acceptability (N = 50) is the perception among implementation stakeholders that a given treatment, service, practice, or innovation is agreeable, palatable, or satisfactory [6]. Nearly half of the instruments identified in our review measured the acceptability of the intervention (N = 41) or of the implementation process (N = 9) (see Additional file 3: Tables S5 and S6). Information on reliability was available for 72 %, structural validity for 26 %, predictive validity for 22 %, norms for 92 %, responsiveness for 2 %, and usability for 100 %. Among those instruments for which psychometric information was available (i.e., those with non-zero ratings), the median rating was "4" (excellent) for reliability, "4" (excellent) for structural validity, "2" (adequate) for predictive validity, "4" (excellent) for norms, and "3" (good) for usability. Only one instrument received a non-zero score for responsiveness, with a score of "4" (excellent). The Pre-Referral Intervention Team Inventory [14] had the highest overall rating among intervention acceptability instruments (total score = 18.5), with "4" (excellent) ratings for reliability, structural validity, and norms, and "3" (good) ratings for predictive validity and usability. The Practitioner's Attitudes toward Treatment Manuals [15] had the highest overall rating among implementation process acceptability instruments (total score = 17), with "4" (excellent) ratings for reliability, structural validity, and norms; a "3" (good) rating for usability; and a "2" (adequate) rating for predictive validity.

Adoption (N = 19) is the intention, initial decision, or action to try or employ an innovation or evidence-based practice [6]. About 20 % of the instruments identified in our review measured adoption (see Additional file 3: Table S7). Information on reliability was available for 42 %, structural validity for 36 %, predictive validity for 26 %, norms for 53 %, responsiveness for 0 %, and usability for 100 %. Among those instruments with accessible psychometric information, the median rating was "4" (excellent) for reliability, "2" (adequate) for structural validity, "3" (good) for predictive validity, "4" (excellent) for norms, and "3" (good) for usability. All adoption instruments received a responsiveness score of "0" (no evidence). The Adoption of Information Technology Innovation scale [16] had the highest overall rating (total score = 14), with "4" (excellent) ratings for structural validity and norms and "3" (good) ratings for reliability and usability; however, no information about predictive validity was available. The Research Utilization Questionnaire [17] was also notable for its high overall rating (total score = 13.5).

Appropriateness (N = 7) is the perceived fit, relevance, or compatibility of the innovation or evidence-based practice for a given practice setting, provider, or consumer, and/or the perceived fit of the innovation to address a particular issue or problem [6]. Seven percent of the instruments identified in our review measured appropriateness (see Additional file 3: Table S8). Information on reliability was available for 29 %, structural validity for 29 %, predictive validity for 14 %, norms for 43 %, responsiveness for 14 %, and usability for 100 %. Among those instruments for which psychometric information was available, the median rating was "3.5" (good to excellent) for reliability, "0.5" (none to minimal) for structural validity, "3" (good) for norms, and "3" (good) for usability. Only one instrument received a non-zero score for predictive validity, with a score of "0.5" (none to minimal), and one instrument received a non-zero score for responsiveness, with a score of "4" (excellent). The Parenting Strategies Questionnaire [18] had the highest overall rating (total score = 14), with "4" (excellent) ratings for reliability, responsiveness, and usability and a "2" (adequate) rating for norms; however, no information was available about structural or predictive validity.

Cost (N = 8) is the financial impact of an implementation effort [6]. Eight percent of the instruments identified in our review measured cost (see Additional file 3: Table S9). Cost is not typically treated as a latent construct; consequently, information was not available for reliability or structural validity. Information was also not available for predictive validity or responsiveness for any of the identified cost instruments. Information on norms and usability was available for 75 % of cost instruments. The two instruments with the highest overall ratings were the Drug Abuse Treatment Cost Analysis Program [19] (total score = 8) and the Utilization and Cost Questionnaire [20] (total score = 8).

Feasibility (N = 8) is the extent to which a new treatment, or an innovation, can be successfully used or carried out within a given agency or setting [6]. Eight percent of the instruments identified in our review measured feasibility (see Additional file 3: Table S10). Information on reliability was available for 12 %, structural validity for 12 %, predictive validity for 0 %, norms for 50 %, responsiveness for 0 %, and usability for 100 %. Among those instruments for which psychometric information was available, the median rating was "2.5" (adequate to good) for norms and "3" (good) for usability. All instruments received scores of "0" (no evidence) for predictive validity and responsiveness. There was one non-zero score for reliability ("3", good) and one non-zero score for structural validity ("4", excellent). The Measure of Disseminability [21] had the highest overall rating (total score = 10), with a "4" (excellent) rating for structural validity and "3" (good) ratings for reliability and usability; however, no information was available about predictive validity, norms, or responsiveness.

Fidelity (N = 0) is the degree to which an intervention was implemented as prescribed in the original protocol or as intended by the program developers [6]. No fidelity instruments were identified in our review that included either assessments of implementation interventions (e.g., instruments that measure the frequency and structure of evidence-based practice training) or instruments that could be applied to any evidence-based practice. Fidelity instruments for specific clinical interventions were not considered for inclusion in the repository at this time.

Penetration (N = 4) is the integration of a practice within a service setting and its subsystems [6]. Four percent of the instruments identified in our review measured penetration. Information on reliability was available for 25 %, structural validity for 25 %, predictive validity for 25 %, norms for 100 %, responsiveness for 0 %, and usability for 100 %. All instruments but one received scores of "0" (no evidence) for internal consistency, structural validity, predictive validity, and responsiveness (see Additional file 3: Table S11), meaning that summary statistics could only be calculated for norms and usability. Information on norms and usability was available for all four instruments, with a median rating of "3" (good) for norms and "4" (excellent) for usability. The Levels of Institutionalization Scale [13] had the highest overall rating (total score = 19.5), with "4" (excellent) ratings for reliability, structural validity, and norms; "3" (good) ratings for predictive validity and usability; and a rating between "1" (emerging) and "2" (adequate) for responsiveness.

Sustainability (N = 8) is the extent to which a newly implemented treatment is maintained or institutionalized within a service setting's ongoing, stable operations [6]. Eight percent of the instruments identified in our review measured sustainability (see Additional file 3: Table S12). Information on reliability was available for 38 %, structural validity for 38 %, predictive validity for 13 %, norms for 25 %, responsiveness for 13 %, and usability for 100 %. Among those instruments for which psychometric information was available, the median rating was "3" (good) for reliability, "2" (adequate) for structural validity, "4" (excellent) for norms, and "3" (good) for usability. One instrument received a non-zero score for predictive validity ("1", minimal/emerging) and one instrument received a non-zero score for responsiveness ("1", minimal/emerging). The School-Wide Universal Behavior Sustainability Index-School Teams scale [22] had the highest overall rating (total score = 16), with "4" (excellent) ratings for reliability, structural validity, and norms; a "3" (good) rating for usability; and a "1" (emerging) rating for predictive validity. This instrument yielded no information relevant to responsiveness.

Discussion

The state of instrumentation for implementation outcomes

The findings from this review indicate that there is an uneven distribution of instruments across implementation outcomes for mental and behavioral health. One hypothesis is that the number and quality of instruments hinge upon the history and degree of theory and published research available for a particular construct. Indeed, there was a significant positive correlation between the volume of published literature available for a particular outcome and the instrument quality rating (r = 0.439, p < .001; see Fig. 12). For instance, there is a longstanding focus on treatment acceptability in both the theoretical and empirical literature, so it is unsurprising that acceptability (of the intervention) is the most densely populated implementation outcome with respect to instrumentation. By contrast, sustainability is a relatively new construct, at least with respect to evidence-based practices, and accordingly few sustainability instruments exist.
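A correlation of this kind can be reproduced with a few lines of analysis code; the sketch below is illustrative only, using placeholder values rather than the study data, and assumes SciPy is available.

```python
# Illustrative sketch: Pearson correlation between the volume of published
# material available for an instrument (packet size, in pages) and its total
# EBA rating score. The values below are placeholders, not the study data.
from scipy.stats import pearsonr

packet_pages = [12, 45, 8, 60, 22, 15, 90, 30]     # proxy for literature volume
total_eba_scores = [6, 12, 4, 16, 9, 7, 19, 10]    # summed EBA ratings (0-24)

r, p_value = pearsonr(packet_pages, total_eba_scores)
print(f"r = {r:.3f}, p = {p_value:.4f}")
```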

Fig. 12: Packet size and total EBA rating score

In addition, some implementation outcomes lend themselves to unique forms of quantitative instruments that fell outside the scope of this review. For instance, fidelity instruments tend to be specific to an intervention and thus are not relevant to the field broadly speaking. That is not to say that psychometrically strong fidelity instruments do not exist, or that they cannot be applied across multiple studies or contexts. In fact, fidelity may be the most densely populated implementation outcome, perhaps with the highest quality instruments, given the intensity of focus on intervention fidelity in efficacy and effectiveness research, both of which have a much longer history than implementation research. Other outcomes, such as cost and penetration, do not reflect latent constructs, and so these outcomes are best measured through formula-based instruments that cannot be adequately assessed using our EBA criteria. Finally, there remains conceptual ambiguity and overlap among implementation outcomes (e.g., acceptability and appropriateness), which has resulted in some instruments including items that arguably measure different constructs.

In order to advance the field and address the issue of uneven distribution of instruments, we recommend increased emphasis on the underdeveloped outcomes where few instruments exist, notably feasibility, appropriateness, and sustainability. Moreover, as noted by Martinez and colleagues [1], careful domain delineation (developing a nomological network) is critical to properly define the construct and limit ambiguity in instrument development.

The importance, but lack, of psychometric property information

Limited psychometric information was available across the 104 instruments reviewed, with 46 % of identified instruments missing information on four or more of the psychometric properties under investigation. This finding is not unique to implementation outcome instruments; previous reviews of implementation science-related instruments similarly found that 48.4 % [3] lacked criterion validity and 49.2 % [4] lacked evidence of any psychometric validation. A reporting problem could partly explain this finding. For instance, in our study, internal consistency was the most frequently reported psychometric statistic, and indeed many peer-reviewed journals require internal consistency reporting, whereas reporting of other psychometric properties is rarely required. More likely, however, this finding reflects the fact that few instruments have been subjected to rigorous instrument development and testing processes. There are at least three reasons why this may be the case.

One, instruments are often generated "in-house" and are used just once to suit the specific, immediate needs of a project (of which instrument development is rarely the focus). Two, researchers rely on previously reported psychometric information rather than re-evaluating instruments in their own samples. Three, researchers do not have the requisite expertise to employ systematic development and testing procedures, such as factor analysis. Consistent with recommendations made by Martinez and colleagues [1], these findings highlight the importance of adopting clear and consistent reporting standards and taking systematic approaches to instrument development, without which the quality of implementation outcomes instrumentation will likely remain poor.

Relatively low psychometric quality of instruments

Between 98 % (responsiveness) and 47 % (norms) of instruments demonstrated less than "adequate" ratings on the psychometric properties included in the EBA criteria. The properties with the poorest ratings were responsiveness, predictive validity, and structural validity. Low quality ratings on these criteria are likely due to the need for large samples, longitudinal designs, and more sophisticated analytic skills. These challenges may be difficult to overcome. However, the field can work together to establish the psychometric properties of existing instruments. Thus, we recommend that instruments with promising psychometric properties on some criteria be prioritized for further psychometric testing rather than focusing solely on new instrument development.

Can instruments be both psychometrically strong and pragmatic?

Finally, usability is a crude metric for characterizing the "pragmatic" or practical properties of an instrument [23]. In this initial review of IOF instrumentation, we viewed instrument length as relevant to designing and evaluating implementation initiatives outside of the research enterprise, given similar literature on provider-reported barriers to utilizing EBA instruments for client outcomes [9, 24]. However, the fact that the majority of instruments consist of between 11 and 49 items (a rating reflecting "good") may actually reflect an infeasible instrument length in a practical implementation context. Our team has prioritized developing a stakeholder-driven operationalization of the "pragmatic" construct as it pertains to implementation science instrumentation within the context of an NIMH-funded R01 award. Subsequently, all instruments included in this report will be re-assessed for their pragmatic qualities to determine whether instruments can be both pragmatic and psychometrically strong, a necessary balance to advance both the science and practice of implementation.

Implications of searching for instruments only in traditional databases

Also important to highlight is the disparity between the number of instruments identified via traditional literature databases and via the grey literature, with the former producing only 14 % of the instruments identified. The "enhanced" nature of the search process proved critical to uncovering the implementation outcomes instrumentation landscape. Although it is difficult to confirm the primary reason for this observation, at least two possibilities may explain why traditional database searches yielded so few instruments. First, many of the instruments identified may be best described as "in development" or "single use." That is, these instruments were not developed via gold standard test development procedures and were not intended for use beyond one study. Acknowledging that the instruments were not of high quality was one of the most common reasons instrument authors declined to provide their instrument for our website. The generally poor state of implementation outcome instrumentation further substantiates this hypothesis. Second, although some literature databases allow searches to be filtered for "measures," our library scientist indicated that article tagging according to these parameters is likely to be unreliable, given that it is a much more challenging exercise than tagging for explicitly listed keywords. Moreover, reviews that rely on titles and abstracts alone are inappropriate for instrument reviews, given that instrument names or references are likely to be embedded in the article text rather than stated explicitly in the title or abstract. To avoid developing redundant instruments and to increase the opportunities for the field to collectively establish the psychometrics of newly developed instruments, we recommend that implementation scientists upload or share their instruments with our evolving SIRC repository or the Grid-Enabled Measures Project [25].

Limitations

Several limitations are worth noting. First, the methodology employed (described elsewhere [7]) may be difficult to replicate. The respondent-driven sampling, grey literature, and newsletter reviews generated the majority (85 %) of the instruments found in this study. We found this component of the search method critical for obtaining instruments still in development and for circumventing the need for consistent keyword tagging in databases. However, other listservs beyond our prior knowledge networks may have provided access to additional instruments that went overlooked. Second, our review did not distinguish between nomothetic instrumentation (i.e., instruments in which interpretation of results is based on comparisons to aggregated data for the instrument) and idiographic instrumentation (i.e., individually selected or tailored instruments of variables or functionality that maximize applicability to individuals or contexts) [26]. Nomothetic measurement approaches provide the benefit of cross-study comparison; however, the preponderance of single-use instruments may reflect a valid argument for employing idiographic methods to optimally investigate implementation efforts. The need for nomothetic versus idiographic instrumentation approaches is an empirical question. Third, our review focused on mental health-relevant implementation instrumentation. This may limit the applicability of the results to implementation scientists or stakeholders from fields outside of mental health or behavioral healthcare. However, implementation science is often described as transdisciplinary in nature, such that the outcomes relevant to implementation in mental health and behavioral healthcare are likely to remain applicable regardless of discipline. Fourth, although our EBA rating criteria are intended to be broadly applicable and to highlight primary dimensions reflective of the strength of instruments, we did not include a comprehensive array of psychometric properties. For instance, we did not include other forms of reliability, such as test-retest and inter-rater reliability, or other forms of validity, such as convergent and divergent validity. The decision to exclude these properties was made during our pilot testing of the EBA criteria [7] in order to keep the rating process manageable (i.e., brief and focused) and to prioritize the fundamental psychometric properties necessary for quality instrumentation.

Future directions

Future research should consider (1) increasing the availability of instruments with promising psychometric properties to further establish their quality and (2) populating the underdeveloped constructs with instruments using guidance from a recent publication [1]. Ultimately, this work may focus attention on the most important areas of implementation outcome instrument development. Indeed, this enhanced systematic review led to an NIMH-funded R01 in which we seek to advance implementation science through instrument development and evaluation. We will prioritize instrument development for the acceptability, appropriateness, and feasibility constructs given their relevance in the field as predictors of adoption [27]. In addition, we will further refine the EBA rating criteria by developing the usability criterion into a more comprehensive, stakeholder-informed set of pragmatic criteria drawing upon Glasgow and Riley's work [23].

Conclusions

There is a clear need for coordination of instrument development focused on implementation outcomes, as highlighted by our results and similar findings from a related project, the Grid-Enabled Measures project led by the National Cancer Institute [25]. Although constructs such as acceptability appear saturated with instruments, the majority of implementation outcomes are underdeveloped, yielding few instruments, many without evidence of psychometric strength. Without high-quality instruments, it will be difficult to determine the predictors, moderators, and mediators of implementation success. Careful attention must be paid to systematic development and testing procedures, in addition to the necessary articulation of instrument reporting standards [1].

Notes

  1. Anyone can register to be a SIRC member at societyforimplementationresearchcollaboration.org and thus have access to the repository.

Abbreviations

CFIR: Consolidated Framework for Implementation Research
DIS: Dissemination and Implementation Science
EBA: evidence-based assessment
GEM: Grid-Enabled Measures Project
IOF: Implementation Outcomes Framework
IRP: Instrument Review Project
NIH: National Institutes of Health
NIMH: National Institute of Mental Health
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
RA: research assistant
SIRC: Society for Implementation Research Collaboration (formerly the Seattle Implementation Research Conferences)
SIRC IRP: SIRC Instrument Review Project

References

1. Martinez RG, Lewis CC, Weiner BJ. Instrumentation issues in implementation science. Implement Sci. 2014;9:118.
2. Weiner B, Amick H, Lee S-Y. Conceptualization and measurement of organizational readiness for change: a review of the literature in health services research and other fields. Med Care Res Rev. 2008.
3. Chaudoir SR, Dugan AG, Barr CH. Measuring factors affecting implementation of health innovations: a systematic review of structural, organizational, provider, patient, and innovation level measures. Implement Sci. 2013;8:22.
4. Chor KHB, Wisdom JP, Olin S-CS, Hoagwood KE, Horwitz SM. Measures for predictors of innovation adoption. Adm Policy Ment Health Ment Health Serv Res. 2015;42:545–73.
5. Proctor EK, Landsverk J, Aarons G, Chambers D, Glisson C, Mittman B. Implementation research in mental health services: an emerging science with conceptual, methodological, and training challenges. Adm Policy Ment Health Ment Health Serv Res. 2009;36:24–34.
6. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health Ment Health Serv Res. 2011;38:65–76.
7. Lewis CC, Stanick CF, Martinez RG, Weiner BJ, Kim M, Barwick M, et al. The Society for Implementation Research Collaboration Instrument Review Project: a methodology to promote rigorous evaluation. Implement Sci. 2015;10:2.
8. Grimshaw J, Eccles M, Thomas R, MacLennan G, Ramsay C, Fraser C, et al. Toward evidence-based quality improvement. J Gen Intern Med. 2006;21:S14–20.
9. Hunsley J, Mash EJ. Evidence-based assessment. Annu Rev Clin Psychol. 2007;3:29–51.
10. Terwee CB, Bot SD, de Boer MR, van der Windt DA, Knol DL, Dekker J, et al. Quality criteria were proposed for measurement properties of health status questionnaires. J Clin Epidemiol. 2007;60:34–42.
11. Muse K, McManus F. A systematic review of methods for assessing competence in cognitive-behavioural therapy. Clin Psychol Rev. 2013;33:484–99.
12. Terwee CB, Mokkink LB, Knol DL, Ostelo RW, Bouter LM, de Vet HC. Rating the methodological quality in systematic reviews of studies on measurement properties: a scoring system for the COSMIN checklist. Qual Life Res. 2012;21:651–7.
13. Goodman RM, McLeroy KR, Steckler AB, Hoyle RH. Development of level of institutionalization scales for health promotion programs. Health Educ Behav. 1993;20:161–78.
14. Yetter G. Assessing the acceptability of problem-solving procedures by school teams: preliminary development of the pre-referral intervention team inventory. J Educ Psychol Consult. 2010;20:139–68.
15. Addis ME, Krasnow AD. A national survey of practicing psychologists' attitudes toward psychotherapy treatment manuals. J Consult Clin Psychol. 2000;68:331.
16. Moore GC, Benbasat I. Development of an instrument to measure the perceptions of adopting an information technology innovation. Inf Syst Res. 1991;2:192–222.
17. Champion VL, Leach A. Variables related to research utilization in nursing: an empirical investigation. J Adv Nurs. 1989;14:705–10.
18. Whittingham K, Sofronoff K, Sheffield JK. Stepping Stones Triple P: a pilot study to evaluate acceptability of the program by parents of a child diagnosed with an Autism Spectrum Disorder. Res Dev Disabil. 2006;27:364–80.
19. French MT, Bradley CJ, Calingaert B, Dennis ML, Karuntzos GT. Cost analysis of training and employment services in methadone treatment. Eval Program Plann. 1994;17:107–20.
20. Kashner TM, Rush AJ, Altshuler KZ. Measuring costs of guideline-driven mental health care: the Texas Medication Algorithm Project. J Ment Health Policy Econ. 1999;2:111–21.
21. Trent LR. Development of a Measure of Disseminability (MOD). University of Mississippi; 2010.
22. McIntosh K, MacKay LD, Hume AE, Doolittle J, Vincent CG, Horner RH, Ervin RA. Development and initial validation of a measure to assess factors related to sustainability of school-wide positive behavior support. J Posit Behav Interv. 2011;13(4):208–18.
23. Glasgow RE, Riley WT. Pragmatic measures: what they are and why we need them. Am J Prev Med. 2013;45:237–43.
24. Jensen-Doss A, Hawley KM. Understanding barriers to evidence-based assessment: clinician attitudes toward standardized assessment tools. J Clin Child Adolesc Psychol. 2010;39:885–96.
25. Rabin BA, Purcell P, Naveed S, Moser RP, Henton MD, Proctor EK, et al. Advancing the application, quality and harmonization of implementation science measures. Implement Sci. 2012;7:119.
26. Haynes SN, Mumma GH, Pinson C. Idiographic assessment: conceptual and psychometric foundations of individualized behavioral assessment. Clin Psychol Rev. 2009;29:179–91.
27. Wisdom JP, Chor KHB, Hoagwood KE, Horwitz SM. Innovation adoption: a review of theories and constructs. Adm Policy Ment Health Ment Health Serv Res. 2014;41:480–502.


Acknowledgements

The preparation of this manuscript was supported, in kind, through the National Institutes of Health R13 award entitled "Development and Dissemination of Rigorous Methods for Training and Implementation of Evidence-Based Behavioral Health Treatments," granted to PI: KA Comtois from 2010 to 2015. Dr. Bryan J. Weiner's time on the project was supported by the following funding: NIH CTSA at UNC UL1TR00083. Research reported in this publication was also supported by the National Institute of Mental Health of the National Institutes of Health under Award Number R01MH106510. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Author information


Corresponding author

Correspondence to Cara C. Lewis.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

CCL, CFS, and BJW are project Co-PIs who conceptualized this enhanced systematic review methodology, developed the specific search terms and parameters, and oversee all project work. CCL and CS managed the undergraduate research assistants working at Indiana University and University of Montana, respectively, to conduct the systematic reviews and extract the data. BJW leads a method core group at UNC on the instrument quality rating process, where MK served as the rating expert and trainer. SF oversaw data collection and quality management. BJW drafted the introduction. RGM and MK drafted the method section. SF conducted all analyses and drafted the results section. CS and CCL drafted the discussion section of the manuscript. All authors read and approved the final manuscript.

Additional files

Additional file 1: Search Strings.

Additional file 2: Evidence Based Assessment Criteria Guidelines.

Additional file 3: Implementation Outcome Rating Scores (Tables S5–S12).

Additional file 4: Construct Head-to-Head Ratings Comparison Graphs (Figures S12–S19).

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Lewis, C.C., Fischer, S., Weiner, B.J. et al. Outcomes for implementation science: an enhanced systematic review of instruments using evidence-based rating criteria. Implementation Sci 10, 155 (2015). https://0-doi-org.brum.beds.ac.uk/10.1186/s13012-015-0342-x

