Measures of evidence-informed decision-making competence attributes: a psychometric systematic review

Abstract

Background

The current state of evidence regarding measures that assess evidence-informed decision-making (EIDM) competence attributes (i.e., knowledge, skills, attitudes/beliefs, behaviours) among nurses is unknown. This systematic review provides a narrative synthesis of the psychometric properties and general characteristics of EIDM competence attribute measures in nursing.

Methods

The search strategy included online databases, hand searches, grey literature, and content experts. To align with the Cochrane Handbook for Systematic Reviews of Interventions, psychometric outcome data (i.e., acceptability, reliability, validity) were extracted in duplicate, while all remaining data (i.e., study and measure characteristics) were extracted by one team member and checked by a second member for accuracy. Acceptability data were defined as measure completion time and overall rate of missing data. The Standards for Educational and Psychological Testing was used as the guiding framework to define reliability and validity, with validity identified as a unified concept comprising four sources of evidence: content, response process, internal structure, and relationships to other variables. A narrative synthesis of measure and study characteristics and psychometric outcomes is presented across measures and settings.

Results

A total of 5883 citations were screened, with 103 studies and 35 unique measures included in the review. Measures were used or tested in acute care (n = 31 measures), primary care (n = 9 measures), public health (n = 4 measures), home health (n = 4 measures), and long-term care (n = 1 measure). Over half of the measures assessed a single competence attribute (n = 19; 54.3%). Three measures (8.6%) assessed all four competence attributes of knowledge, skills, attitudes/beliefs, and behaviours. Regarding acceptability, overall missing data ranged from 1.6 to 25.6% across 10 measures, and completion times ranged from 5 to 25 min (n = 4 measures). Internal consistency reliability was commonly reported (21 measures), with Cronbach’s alphas ranging from 0.45 to 0.99. Two measures reported four sources of validity evidence, and over half (n = 19; 54%) reported only one source of validity evidence.

Conclusions

This review highlights a gap in the testing and use of competence attribute measures related to evidence-informed decision making in community-based and long-term care settings. Further conceptual and psychometric development of measures is needed, as most measures assess only a single competence attribute and lack evidence of reliability and established sources of validity.

Registration

PROSPERO #CRD42018088754.

Background

Nurses play an important role in ensuring optimal health outcomes by engaging in evidence-informed decision making (EIDM). EIDM, used synonymously with the term evidence-based practice (EBP) [1], involves “the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients” [2] (p. 71). The word ‘informed’ in EIDM denotes that research alone is insufficient for clinical decision making and cannot take precedence over other factors [3]. Evidence, in this regard, is defined as credible knowledge from different sources including research, professional/clinical experience, patient experiences/preferences, and local data and information [4, 5]. There are numerous examples of improved patient outcomes following implementation of best practice guidelines, such as reductions in length of hospital stay [6] and in adverse patient events related to falls and pressure ulcers in long-term care settings [7].

Despite knowledge of such benefits, competency gaps and low implementation rates in EIDM persist among nurses across diverse practice settings [8,9,10]. A barrier to EIDM implementation has been the lack of clarity and understanding about what nurses should be accountable for with respect to EIDM, as well as how it can best be measured [11, 12]. As such, considerable effort has been devoted to the development of EIDM competence measures as a strategy to support EIDM implementation in nursing practice [12].

EIDM competence attributes of knowledge, skills, attitudes/beliefs, and behaviours have been well defined in the literature. EIDM knowledge is an understanding of the primary concepts and principles of EIDM and hierarchy of evidence [13,14,15,16,17]. Skills in EIDM refer to the application of knowledge required to complete EIDM tasks (e.g., developing a comprehensive strategy to search for research evidence) [13,14,15,16,17]. Attitudes and beliefs related to EIDM include perceptions, beliefs, and values ascribed to EIDM (e.g., belief that EIDM improves patient outcomes) [13, 15]. EIDM behaviours are defined by the performance of EIDM steps in real-life clinical practice (e.g., identifying a clinical problem to be addressed) [13, 15, 17].

Measures assessing EIDM competence attributes have multiple uses in nursing practice and research. Such measures can be integrated into performance appraisals [18] to monitor progressive changes in overall EIDM competence or specific domains. At an organizational level, EIDM competence standards can support human resource management by establishing clear EIDM role expectations for prospective, newly hired, or employed nurses [18, 19]. With respect to nursing research, considerable attention has been given to the development and testing of interventions to increase EIDM knowledge, attitudes, skills, and behaviours among nurses [20,21,22]. EIDM competence instruments that produce valid and reliable scores can help to identify which interventions are effective in developing EIDM competence areas.

Previous systematic reviews have focused on EIDM competence attribute measures used among allied health care professionals [13, 16, 23] as well as nurses and midwives [14]. However, these reviews have several limitations. A conceptual limitation is that many included research utilization measures despite stating a focus on EIDM [13, 14, 23]. Research utilization, while considered a component of EIDM, is conceptually distinct from it: research utilization refers to the use of scientific research evidence in health care practice [24], whereas EIDM encompasses the application of multiple forms of evidence such as clinical experience, patient preferences, and local context or setting [5]. Conceptual clarity is of critical importance in a psychometric systematic review, as it can affect findings of reported validity evidence. Reviews by Glegg and Holsti [16] and Leung et al. [14] were also limited in focus, as they included measures that assessed only some, but not all four, of the attributes that comprise competence, potentially resulting in the exclusion of existing EIDM measures. Methodologically, across all reviews, psychometric assessment was limited: validity evidence was either not assessed [16] or assessed only by reviewing data formally reported as content, construct, or criterion validity [13, 14, 23], neglecting other critical data that could support the validity evidence of a measure. In addition, none of the reviews extracted or reported data on specific practice settings. This is an essential component of psychometric assessment, as Streiner et al. [25] identify that reliability and validity are contingent not solely on scale properties, but also on the sample with whom, and the specific situation in which, measures are tested. Consideration of setting is important when determining the applicability of a measure for a specific population due to differences in role and environment. Most importantly, none of these reviews focused solely on nurses. A systematic review unique to nursing is imperative given the diversity of needs, reception to, and expectations of EIDM across health care professional groups [16]. These differences may be reflected across measures used to assess discipline-specific EIDM competence.

The current review aimed to address the limitations of existing reviews by: including measures that address a holistic conceptualization of EIDM, which includes the use of multiple forms of evidence in nursing practice; focusing on the four EIDM competence attributes of knowledge, skills, attitudes/beliefs, and behaviours; utilizing a modern understanding of validity evidence in which sources based on test content, response process, internal structure, and relationships to other variables were assessed according to the Standards for Educational and Psychological Testing [26]; extracting data on and presenting findings within the context of practice setting; and targeting the unique population of nurses.

The objectives of this systematic review were to: 1) identify existing measures of EIDM competence attributes of knowledge, skills, attitudes/beliefs, and/or behaviours used among nurses in any healthcare setting; and 2) determine the psychometric properties of test scores for these existing measures.

Methods

The protocol for this systematic review was registered (PROSPERO #CRD42018088754) and published a priori [27], and the review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines.

Search strategy

A comprehensive search strategy, consisting of online databases, hand searches, grey literature, and content experts, was developed in consultation with a Health Sciences Librarian. Searches were limited to 1990 through December 2017, as the term evidence-based medicine was first introduced and defined in 1990 [28]. Search strategy sources are summarized in Table 1. A detailed search strategy is provided in Additional file 1.

Table 1 Search strategy

Inclusion and exclusion criteria

Studies were included if they met the following criteria: the study sample consisted entirely or partly of nurses; conducted in any healthcare setting; reported findings from the use or psychometric testing of measures that assess EIDM knowledge, skills, attitudes/beliefs, and/or behaviours; quantitative or mixed-methods design; and English language. Studies were excluded if the sample consisted solely of other healthcare professionals or undergraduate nursing students, or if data specific to nurses were not reported separately. Studies testing or using measures assessing research utilization were also excluded [5, 24].

Study selection

Titles and abstracts of initial references and full-text records were screened independently by two team members (EB and TB) for inclusion/exclusion. All disagreements were resolved by consensus between the two screeners.

Data extraction

Data extraction was piloted using a standard form completed independently by two team members (EB and TB) on five randomly selected references. Data extracted pertaining to study and measure characteristics included: study design, sample size, professional designation of sample, healthcare setting, study country, funding, name of measure, format, purpose of measure, item development process, number of items, theoretical framework used, conceptual definition of competence established, EIDM attributes measured, EIDM domains/steps covered, and marking key or scale for self-report measures. Data extraction on these characteristics was performed by one team member (EB) and checked for accuracy by a second team member (TB/TD).

Data extraction of primary outcomes included the psychometric outcomes of acceptability, reliability, and validity evidence. Data extracted relating to acceptability consisted of completion time and missing data reported for each measure. Missing data were extracted from reports of incomplete surveys or calculated based on the number of complete surveys included in the analysis. Reliability data extracted for measure scores included internal consistency, inter-rater, and test-retest reliability coefficients. Sources of validity evidence were extracted following guidelines from the Standards for Educational and Psychological Testing [26]. Data were extracted on four sources of validity evidence: test content, response process, internal structure, and relationships to other variables. Test content refers to the relationship between the content of the items and the construct under measure, which includes analyzing the adequacy and relevance of items [26]. Validity evidence of response process involves understanding the thought processes participants use when responding to items and their consistency with the construct of focus [26]. Internal structure is defined as the degree to which test items are related to one another and coincide with the construct for which test scores are being interpreted [26]. The last source of validity evidence, relationships to other variables, is the relationship of test scores to external variables, from which the degree to which these relationships align with the construct under measure can be determined [26].
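To illustrate the acceptability calculations described above, a minimal sketch follows; the function names and inputs are hypothetical illustrations, not the review's actual extraction form.

```python
def missing_data_rate(surveys_distributed: int, surveys_complete: int) -> float:
    """Percentage of missing data, derived from the number of complete
    surveys included in an analysis (hypothetical helper)."""
    if surveys_distributed <= 0:
        raise ValueError("surveys_distributed must be positive")
    return 100 * (surveys_distributed - surveys_complete) / surveys_distributed

def completion_time_for_measure(combined_minutes: float, total_items: int,
                                items_in_measure: int) -> float:
    """Apportion a combined completion time reported for multiple measures
    using the time to complete each item (assumes uniform time per item)."""
    return combined_minutes / total_items * items_in_measure

# Example: 250 surveys distributed and 230 complete gives 8.0% missing data,
# below the > 10% limit for excessive missing data [138].
print(f"{missing_data_rate(250, 230):.1f}%")

# Example: 20 min reported for 60 items across several measures; a 24-item
# measure is then apportioned 8.0 min.
print(completion_time_for_measure(20, 60, 24))
```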

To determine if study findings supported validity evidence based on relationships to other variables, a review of the literature was conducted and guiding tables on variable relationships were established (see Additional file 2). Data on psychometric outcomes were extracted by two independent reviewers (EB and TB/TD). All disagreements were resolved by consensus between those who extracted the data. Measures were grouped according to the number of sources of validity evidence reported in the study(ies) associated with each measure. When multiple studies were reported for a measure, group classification was determined based on the number of sources indicated by 50% or more of the associated studies [29].
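A minimal sketch of this classification rule is given below; the data structure and the fallback when no source count reaches the 50% threshold are illustrative assumptions, not the review's documented procedure.

```python
from collections import Counter

def classify_measure(sources_per_study: list[int]) -> int:
    """Assign a measure to a validity-evidence group based on the number of
    validity sources indicated by 50% or more of its associated studies [29]."""
    counts = Counter(sources_per_study)
    threshold = len(sources_per_study) / 2
    eligible = [n for n, c in counts.items() if c >= threshold]
    # The fallback when no count has a majority is an assumption here:
    # take the most conservative (lowest) number of sources reported.
    return max(eligible) if eligible else min(sources_per_study)

# Example: a measure with three associated studies reporting 1, 1, and 3
# sources is classified under one source of validity evidence, since two
# of the three studies (>= 50%) reported one source.
print(classify_measure([1, 1, 3]))  # -> 1
```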

Quality assessment was not conducted due to the varying and inconsistent criteria for appraising studies involving psychometric measures [27]. Instead, consistent with previous reviews [17, 29], a thorough assessment of reliability and validity evidence for measure scores was conducted in accordance with the Standards for Educational and Psychological Testing [26].

Data synthesis

A narrative synthesis of results is presented. Study statistics as they relate to setting and population are summarized. Measures are also categorized according to the number of EIDM attributes addressed. Acceptability, defined as completion time and overall missing data, is summarized across measures and settings. Reliability data are summarized for each measure across settings. Similar to previous psychometric systematic reviews [17, 29], measures are categorized into distinct groups based on the number of validity evidence sources reported for each measure (e.g., Group 1 = 4 sources of validity evidence). This aligns with the Standards for Educational and Psychological Testing [26], which identifies that the strength of a validity argument for scores on a measure is cumulative and contingent on the number of validity evidence sources established. As psychometric properties are based on the context in which a measure is used or tested, healthcare settings are integrated into the presentation of results.

Results

Review statistics

In total, 5883 references were screened for eligibility at the title and abstract level. Of the 336 screened at full-text, 109 articles were included in the final review. Six pairs of articles (n = 12) were linked (i.e., associated with the same parent study) and the remainder of the articles were unique studies. Therefore, the review included 103 studies (see Additional file 3) and 35 unique measures (see Fig. 1 for PRISMA details).

Fig. 1 PRISMA details

Study characteristics

Of the 103 studies, over half were conducted in the United States (n = 57; 55.3%). Twenty studies (19.4%) were conducted in Europe and 19 (18.4%) in Asia. Two studies each were conducted in Africa, Australia, and Canada, and one in New Zealand. Publication years spanned 2004–2017. One additional measure was identified after contacting content experts; its associated study was published in 2018.

Settings

The 35 included measures were used or tested most often in acute care (n = 31 measures), followed by primary care (n = 9 measures). Measures were used less often in public health (n = 4 measures), home health (n = 4 measures), and long-term care (n = 1 measure). An overview of measures with identified settings is presented in Table 2.

Table 2 Description of EIDM competence attribute measures across settings and populations (35 measures)

Population

Measures were primarily used or tested among registered nurses (n = 26 measures; 74.3%), followed by advanced practice nurses (n = 7 measures; 20%) and licensed/registered practical nurses (n = 4 measures; 11.4%). A licensure group was not specified for 13 of the measures (37.1%). Associated population groups are presented for each measure in Table 2.

EIDM competence attributes addressed

Measures addressed a variety of EIDM competence attributes (see Table 2). Only three measures (8.6%) assessed all four EIDM competence attributes of knowledge, skills, attitudes/beliefs, and behaviours. These were the Evidence-Based Practice Questionnaire (EBPQ) [30], the School Nursing Evidence-based Practice Questionnaire [67], and a self-developed measure by Chiu et al. [68]. Seven measures (20%) assessed three of the four EIDM competence attributes, with differing foci [69,70,71,72,73,74,75]. These measures all assessed knowledge but varied in their assessment of attitudes/beliefs, skills, and behaviours. Six measures (17.1%) addressed two EIDM competence attributes [77, 78, 80,81,82,83]. Over half of the total measures (n = 19; 54.3%) assessed only a single EIDM attribute; among these, attitudes/beliefs were assessed most often (n = 6 measures) [31,32,33, 84, 134,135,136,137]. Overall, knowledge was the attribute addressed by the most measures (n = 19), followed closely by attitudes/beliefs (n = 17 measures), skills (n = 15 measures), and behaviours (n = 13 measures; see Table 2).

Psychometric outcomes

Acceptability

Missing data

Overall missing data, as the percentage of incomplete surveys, were reported for 10 measures (28.6%). Missing data ranged from 1.6% (EBP Beliefs Scale) to 25.6% (EBPQ) and differed across health care settings. For seven measures, missing data fell below the limit for excessive missing data of > 10% [138]. Reported missing data are summarized in Table 3.

Table 3 Acceptability findings: Missing data and completion time [related citations]

Completion time

Completion times were extracted where explicitly stated or, when a combined time to complete multiple measures was reported in a study, calculated using the time to complete each item. Completion time was reported for four measures, ranging from 5 min (EBP Beliefs Scale) to 25 min (EBPQ) [34, 82, 84, 85]. A summary of reported completion times is provided in Table 3.

Reliability

Across measures and studies reporting reliability evidence, internal consistency was the most commonly assessed. Inter-rater and test-retest reliability were also reported, although for only one measure each.

Internal consistency

Internal consistency of scores, reported as Cronbach’s alpha (α), was available for 21 measures (60%). Cronbach’s alpha values ranged widely across settings: acute care (0.45–0.99), primary care (0.57–0.98), public health (0.79–0.91), home health (0.63–0.87), and long-term care (0.79–0.96). Cronbach’s alphas are presented for individual measures and settings in Table 4.

Table 4 Reported Cronbach’s alphas for measures (n = 21) across settings [related citations]

Of the 21 measures for which internal consistency was reported, seven had findings from multiple studies across unique practice settings. Reported Cronbach’s alphas varied across and within settings for the same measure, as evidenced by wide alpha ranges (see Table 4). The two measures assessing EIDM attitudes with the lowest reported alphas were the Evidence-based Nursing Attitude Questionnaire (0.45) and the EBPQ (0.63 for the attitude subscale), both in acute care settings. The Modified Evidence-based Nursing Education Questionnaire also had a low reported alpha (0.57) in both acute and primary care settings. At the high end, the EBPQ had the highest overall reported alpha (0.99), also in an acute care setting.

All 21 measures met the minimum guideline of Cronbach’s alpha ≥ 0.80 [139] in at least one study (see Table 4).

Inter-rater and test-retest reliability

Test-retest reliability was assessed for only one measure, the Quick EBP Values, Implementation, Knowledge Survey [75]. Average item-level test-retest coefficients ranged from 0.51 to 0.70 [75], spanning below marginal to acceptable [140].

Inter-rater reliability was reported for scores on the Knowledge and Skills in Evidence-Based Nursing measure [82]. Intraclass correlations were reported for three sections of this measure and exceeded a guideline of ≥0.80 [140].

Sources of validity evidence

Group 1: measures reporting four sources of validity evidence

Two of the 35 measures (5.7%), used/tested across three studies, were assigned to Group 1 [67, 135, 136] (see Table 5). Common to these two measures was the use of exploratory factor analysis to assess internal structure. Validity evidence based on relationships to other variables differed between the two measures. For the School Nursing Evidence Based Practice Questionnaire, correlation and regression analyses supported validity evidence with significant associations between use of EBP and demographic variables (e.g., education; see Additional file 4). For the Evidence-Based Nursing Attitude Questionnaire, correlation and t-test analyses were used to establish relationships between EBP attitudes and variables related to EBP knowledge, EBP training, and education level (see Additional file 4). The measures also varied with respect to setting, with the former tested in a public health setting and the latter in acute care, primary care, and home healthcare settings.

Table 5 Group 1: Measures with four sources of validity evidence (n = 2)

Group 2: measures with three sources of validity evidence

Five measures (14%), used/tested across seven studies, were categorized in Group 2 [35, 71, 75, 76, 79, 82, 137] (see Table 6). Common to all these measures was reported validity evidence related to content and relationships to other variables. As in Group 1, the strength of variable relationships differed, with varied use of correlational, t-test, ANOVA, and regression analyses to report significant relationships between EBP competence attributes (i.e., knowledge, implementation, skills, attitudes) and demographic or organizational variables or education interventions (see Additional file 4). Internal structure validity evidence via exploratory factor analysis was reported for three measures [71, 75, 76, 137], while response process validity evidence was reported for two measures [35, 82]. All measures were tested or used in acute care.

Table 6 Group 2: Measures with three sources of validity evidence (n = 5)

Group 3: measures with two sources of validity evidence

Six measures (17%) were categorized in Group 3 [10, 69, 70, 73, 80, 120] (see Table 7). Content validity evidence, established using expert groups, was reported for all six measures. Validity evidence based on relationships to other variables was reported for five of the six measures, with correlational and ANOVA analyses used most often (n = 3 measures). Again, significant relationships were demonstrated between EBP knowledge, attitudes, skills, and individual characteristics or organizational factors (see Additional file 4). Acute care was the most common healthcare setting (n = 5 measures).

Table 7 Group 3: Measures with two sources of validity evidence (n = 6)

Group 4: measures with one source of validity evidence

Over half of the measures were categorized in Group 4 (n = 19; 54%; see Table 8). For all of these measures except one [122], validity evidence based on relationships to other variables was reported. With respect to the strength of these variable relationships, t-test (n = 12 measures), correlational (n = 11 measures), and ANOVA (n = 8 measures) analyses were primarily conducted; regression analyses were used less commonly (n = 6 measures). As in previous groups, significant relationships between EIDM competence attributes and demographic factors, organizational factors, and interventions were established (see Additional file 4).

Table 8 Group 4: Measures with one source of validity evidence (n = 19)

Group 5: measures with no sources of validity evidence

No sources of validity evidence were found for three measures [68, 72, 121].

See Additional file 4 for detailed information on validity evidence sources for each measure with supporting evidence.

Validity evidence and settings

Most of the measures (n = 29; 83%) had validity evidence reported in the context of acute care settings. For nine measures, validity evidence was reported across multiple settings. For three of these measures (EBP Implementation Scale, EBP Beliefs Scale, EBPQ), multiple sources of validity evidence (> 1) were more often reported in acute care settings, whereas only one source was commonly found in other practice settings. In contrast, one measure (Evidence-based Nursing Attitude Questionnaire) had four sources of validity evidence established in primary and home care settings but not in acute care. The same number of validity sources was established across varied healthcare settings for five additional measures (Developing Evidence-based Practice Questionnaire, modified Evidence-based Nursing Education Questionnaire, two unnamed self-developed measures, and the EBP Competency Tool).

Discussion

This review furthers our understanding about measures assessing EIDM competence attributes in nursing practice. Findings highlight limitations in the existing literature with respect to use or testing of measures across practice settings, the diversity in EIDM competence attributes addressed, and variability in the process and outcomes of psychometric assessment of existing measures.

Settings

This review contributes new insight, not addressed by previous systematic reviews, about the settings in which EIDM measures have been used or tested. It reveals a concentration of use and testing of EIDM measures in acute care (n = 31 measures; 89%) compared to other healthcare contexts (primary care, home health, public health, long-term care). This imbalance was also observed in an integrative review of 37 studies exploring the knowledge, skills, attitudes, and capabilities of nurses in EIDM [9], in which the majority of studies (n = 27) were conducted in hospitals, with fewer conducted in primary, community, and home healthcare, and none in long-term care. While a large body of evidence supports understanding of the psychometric rigor of EIDM measures in acute care, more attention and investment are required in community-based and long-term care contexts. Given current trends and priorities in healthcare, such as the reorientation toward home care [141], attention to disease prevention and management and health promotion [142], and a large aging population with growing projections of residence in long-term care facilities [143], it is of great importance to assess EIDM competence across all nursing practice settings to ensure efficient, safe, and patient-centred care.

EIDM competence attributes addressed

This review also adds to the current literature on nursing EIDM competence measures by using a broader conceptualization of competence; that is, the measures reviewed address four competence attributes of knowledge, skills, attitudes/beliefs, and behaviours. In comparison, Leung et al. [14] assessed measures focused on three attributes: knowledge, attitudes, and skills. In the current review, three measures [30, 67, 68] addressed all four EIDM attributes (i.e., knowledge, skills, attitudes/beliefs, behaviours). Measures that address all four attributes are of critical importance given the inextricable link between knowledge, skills, attitudes, and behaviours in comprising professional competence [144,145,146]. Professional competence cannot sufficiently develop if each attribute were to support it independently [147]. Knowledge without skill, or the ability to use knowledge, renders knowledge useless [148]. Similarly, performing a skill without understanding the reasoning behind it contributes to unsafe and incompetent practice [148, 149]. Lastly, possessing knowledge and skill without the experience of applying them in the real world is insufficient to qualify as competent [150].

However, despite these measures addressing all four competence attributes, based on the response scales used, they do not conceptually reflect an assessment of competence, defined as quality of ability or performance to an expected standard [150]; rather, they focus on mere completion or frequency of completing tasks. Quality and frequency of behaviours are distinct concepts and have been measured separately in nursing performance studies [19, 151]. The provision of a high standard of patient care includes nursing competence assessment, which is a critical component of quality improvement processes and workforce development and management [19, 152]. This conceptual limitation of existing EIDM measures highlights the need for a measure that aligns with the conceptual understanding of competence as an interrelation of knowledge, skills, attitudes/beliefs, and behaviours [144] and quality of ability [150].

Psychometric outcomes

Acceptability

Although acceptability, measured as the amount of missing data and completion time, has been identified as a critical aspect of psychometric assessment [153], discussion of acceptability in the included primary studies was limited compared to the emphasis on reliability and validity. In this review, only 10 measures (28.6%) had reported missing data, and only four measures (11.4%) had reported completion times. This limited discussion of acceptability is reinforced by findings from a systematic review of research utilization measures by Squires et al. [29], in which no studies reported acceptability data. Likewise, acceptability was not mentioned or discussed in systematic reviews of EIDM measures for nurses and midwives [14], medical practitioners [17], and allied health professionals [23]. Discussions of acceptability have typically occurred in the context of patient-reported outcome measures [153], but they also hold relevance for measures with healthcare professionals as end users [154, 155]. The time and ease of completing a measure are important considerations for nurses and managers who work in fast-paced clinical settings and can influence their decision to integrate these measures into practice.

Reliability

Findings from the current review reveal gaps in reliability testing of measures, in addition to variable findings across EIDM measures and healthcare contexts.

Internal consistency reported as Cronbach’s alpha was the most commonly assessed type of reliability in this review, a trend similarly found in EIDM-related psychometric reviews [14, 23]. Cronbach’s alpha is a commonly used statistic in psychometric research, perhaps due to its ease of calculation, as it can be computed with a one-time administration [156]. While Nunnally [157] identifies that the “coefficient alpha provides a good estimate of reliability in most cases” (p. 211), there are important considerations with its use. One is that interpreting Cronbach’s alpha requires an understanding that it must be re-evaluated in each new setting or population in which a measure is used [158]. In the current review, many of the studies associated with frequently used measures (EBP Implementation Scale, EBP Beliefs Scale) did not re-evaluate internal consistency when using the measure in a setting different from where it was originally tested, as evident from unreported data in multiple studies associated with the same measure but taking place across various healthcare settings. Other reviews have reported similar findings, whereby measures were not re-assessed in new contexts and either no data or only original internal consistency findings were reported [13, 16]. The importance of re-assessing and interpreting this reliability statistic in new contexts is further underscored by the current review’s finding that Cronbach’s alphas varied widely across unique practice settings for the same measure.
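For reference, the standard formula for Cronbach's alpha from classical test theory (a well-established definition, not specific to this review) shows why a one-time administration suffices: only the item-score variances and the total-score variance are required.

$$\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right)$$

where $k$ is the number of items, $\sigma^{2}_{Y_i}$ is the variance of scores on item $i$, and $\sigma^{2}_{X}$ is the variance of total scale scores, all computed from a single administration of the measure.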

Moreover, findings were heterogeneous among studies of the same measure taking place in the same type of setting. Within each setting, there were instances in which the same measure yielded varying Cronbach’s alphas, with values falling both below and above the minimum guideline of ≥ 0.80 [139]. For example, Mooney [86] reported a Cronbach’s alpha of 0.776 for the EBP Beliefs Scale when used in an acute care setting, while Underhill et al. [87] reported α = 0.95 for the same measure, also in acute care practice. Variability in internal consistency findings has been reported in other systematic reviews as well [16, 23], perhaps due to the use of measures in diverse populations, settings, and countries. This further indicates that even nuanced population differences within similar practice settings can affect internal consistency findings.

In addition, lower alphas were typically reported for EIDM attitude scales, such as the self-developed measure by Yip et al. [71] (α = 0.69), the EBNAQ [135, 136] (α = 0.45), and the EBPQ attitude subscale (α = 0.63) [30]. A possible explanation for these low alphas is the low number of items on EIDM attitude subscales compared to those for other EIDM competence attributes. As Streiner et al. [25] indicate, the length of a scale strongly affects internal consistency; as such, reliability could plausibly be improved through the addition of conceptually robust items. Further, in a literature review of uses of the EBPQ [159], the authors note that low alpha scores for the attitude subscale were consistently reported, leading to repeated item deletions or modifications and calls for further refinement of EIDM attitude items.

Overall, there was a lack of reliability assessment, as 40% of measures did not report reliability. This occurred for both newly developed and established measures. A lack of reliability testing has also been identified in reviews assessing EIDM measures among allied healthcare professionals [13, 16, 23] as early as 2010. The ongoing lack of attention to reliability assessment highlights a need for more rigorous and standardized reliability testing, not only in the original development of measures but also in their subsequent use in different healthcare environments.

Validity

Findings pertaining to validity evidence, when compared with the existing literature, show both alignment and contrast with respect to how validity evidence was assessed and the number and type of validity sources established across measures.

As noted, psychometric assessment in the current review was based on the contemporary understanding that the strength of a validity argument depends on the accumulation of different sources of validity evidence [26]. In this review, only one source of validity evidence was reported for over half of the measures (n = 19; 54%); very few measures had four (n = 2) or three (n = 5) validity evidence sources established. Employing a similar approach to validity evidence assessment, Squires et al. [29] reported similar findings in their review of research utilization measures: the majority of measures were categorized under level three of their hierarchy (i.e., one source of validity evidence), no measures were reported as having all four sources of validity evidence, and six measures were associated with three sources of validity evidence.

Since existing reviews did not present validity evidence in the context of practice settings, comparison of results is challenging. However, this review offers some insight into contextualizing validity evidence. Much of the validity evidence was presented in the context of acute care settings; in particular, for the three most widely used measures (EBP Implementation Scale, EBP Beliefs Scale, EBPQ), more sources of validity evidence were established by the original developers in acute care practice. Similar to the reliability findings, this brings to light a critical gap in nursing research with respect to the use of measures after their original development and the lack of validity evidence assessment in different settings and populations. It is a call to action for nursing researchers: a consistent level of rigor must be applied to comprehensively re-assess sources of validity evidence when a measure is used in a new practice setting. Doing so strengthens the cumulative body of validity evidence supporting continued use of a measure in varied nursing contexts.

Compared with the current review, previous EIDM psychometric systematic reviews [13, 14, 16] included traditional assessments of content, criterion, and construct validity and demonstrated variable findings. Buchanan et al. [13] reported no findings related to validity for 18 measures and a failure by authors to re-test validity when original measures were used in a new study setting. Glegg and Holsti [16] provided only a description of validity data and did not perform an assessment through scoring or ranking of this evidence. Leung et al. [14], meanwhile, used their self-developed Psychometric Grading Framework [160] to assess the validity of instruments in their review. These authors determined that most of the studies reported measures as having ‘weak’ or ‘very weak’ validity according to their matrix scoring, with only three studies reporting the tested measures as having adequate validity [14].

Included studies in this review also limited validity assessment to sources based on test content and relationships to other variables, focusing on construct validity. This appears to be a consistent theme across existing reviews as well [14, 23]. A new contribution of this review is an in-depth understanding of the strength of validity evidence based on relationships to other variables. Data extracted on the statistical analyses associated with this source of validity evidence showed relationships established primarily through correlational, t-test, or ANOVA analyses. In fewer instances, regression analyses were used to demonstrate strong relationships, highlighting a need in the psychometric evaluation of tools to validate more robust relationships between variables.

Findings from the current review and existing literature highlight limitations in assessing validity evidence and the psychometric rigor of existing EIDM measures. Variability in testing and results of validity evidence creates challenges and confusion for end users in research or nursing practice who look to this body of literature to determine appropriate and robust EIDM measures. Scholarly support for the use of a comprehensive and contemporary approach in psychometric development of tools can help to standardize assessments and produce findings representative of a unified understanding of validity evidence.

Considerations for tool selection in nursing practice or research

This systematic review can serve as a helpful resource for nursing administrators, frontline staff, and researchers who are interested in using a measure to assess a specific EIDM competence attribute. In selecting measures for nursing practice or research, the specific population and setting in which measures have been previously used or tested, in addition to the specific EIDM competence attributes they address, are all important considerations. The acceptability of measures is also critical to decision-making: tool completion time matters given the demands of busy clinical environments, as does whether high rates of missing data (> 10%) have been reported [138]. Acceptable reliability of a measure (α ≥ 0.80) [139] should also be given weight in tool selection, in addition to determining how comprehensively all four sources of validity evidence (content, internal structure, response process, relationships to other variables) have been established for a given measure [26].

Limitations

A limitation of this review relates to the absence of quality assessment of included primary studies. Because traditional quality assessment was not conducted, confidence in study findings may be affected, and results should be interpreted with caution. However, tools previously used to assess the quality of psychometric studies have several limitations [27], including development for use only with patient-reported outcome measures [14], a lowest-score ranking method that unbalances the overall quality score [161], and a lack of validity and reliability testing [27]. Most importantly, existing quality assessment tools employ a traditional approach of assessing construct, content, and criterion validity, rather than the contemporary perspective of viewing validity evidence as a unified concept [26] that guided the current review. Given this, and to align with other reviews using a similar contemporary approach [17, 29], assessment focused on categorizing measures according to the number of sources of validity evidence established for scores in related studies. A second limitation pertains to the exclusion of non-English literature: 14 articles in seven languages identified at full-text screening would have required translation and were excluded from the review. Given the large number of studies included in the final review, it is unlikely that this small number of non-English studies would have a critical impact on results. A third limitation is that, with the use of a classification system for assessing validity evidence, the number of studies for a particular measure could influence the strength of the validity argument [29]. A measure with one or a small number of studies may appear to have strong validity evidence [29] compared to measures with more associated studies. The implications are most relevant for more established measures, for which more sources of validity evidence may in fact have been established, but only in a small number of studies, and thus may not be reflected in the final categorization. An advantage of this synthesis process, however, is that it highlights the types of validity evidence that require further testing for a particular measure [29].

Conclusions

There is a diverse collection of measures that assess the EIDM competence attributes of knowledge, skills, attitudes/beliefs, and/or behaviours in nurses. Among these measures is a concentration on the assessment of single EIDM competence attributes. Review findings determined that three measures addressed all four EIDM attributes, although with some conceptual limitations, highlighting a need for a tool that comprehensively assesses EIDM competence. More rigorous and consistent psychometric testing is also needed for EIDM measures overall, but particularly in community-based and long-term care settings, in which the data are limited. A contemporary approach to psychometric assessment of EIDM measures may also provide more robust and comprehensive evidence of their psychometric rigor in the future.

Availability of data and materials

The data included in this review were retrieved from published studies and, as necessary, through supplementary documents provided by study authors upon request.

Abbreviations

EIDM: Evidence-informed decision-making

EBP: Evidence-based practice

EBPQ: Evidence-Based Practice Questionnaire

References

  1. Canadian Foundation for Healthcare Improvement. 2017. Available from: http://www.cfhi-fcass.ca/WhatWeDo/a-z-topics/evidence-informed-decision-making.

  2. Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn't. BMJ. 1996;312(7023):71–2.

  3. Culyer AJ, Lomas J. Deliberative processes and evidence-informed decision-making in health care: do they work and how might we know? Evidence Policy. 2006;2(3):357–71.

  4. Rycroft-Malone J. The PARIHS framework--a framework for guiding the implementation of evidence-based practice. J Nurs Care Qual. 2004;19(4):297–304.

  5. Rycroft-Malone J, Seers K, Titchen A, Harvey G, Kitson A, McCormack B. What counts as evidence in evidence-based practice? J Adv Nurs. 2004;47(1):81–90.

  6. Diehl H, Graverholt B, Espehaug B, Lund H. Implementing guidelines in nursing homes: a systematic review. BMC Health Serv Res. 2016;16(298):12.

  7. Wu Y, Brettle A, Zhou C, Ou J, Wang Y, Wang S. Do educational interventions aimed at nurses to support the implementation of evidence-based practice improve patient outcomes? A systematic review. Nurse Educ Today. 2018;70:109–14.

  8. Gerrish K, Cooke J. Factors influencing evidence-based practice among community nurses. J Community Nurs. 2013;27(4):98–101.

  9. Saunders H, Vehviläinen-Julkunen K. The state of readiness for evidence-based practice among nurses: an integrative review. Int J Nurs Stud. 2016;56:128–40.

  10. Melnyk BM, Gallagher-Ford L, Zellefrow C, Tucker S, Thomas B, Sinnott LT, et al. The first U.S. study on nurses’ evidence-based practice competencies indicates major deficits that threaten healthcare quality, safety, and patient outcomes. Worldviews Evid Based Nurs. 2018;15(1):16–25.

  11. Melnyk BM, Gallagher-Ford L, Long LE, Fineout-Overholt E. The establishment of evidence-based practice competencies for practicing registered nurses and advanced practice nurses in real-world clinical settings: proficiencies to improve healthcare quality, reliability, patient outcomes, and costs. Worldviews Evid Based Nurs. 2014;11(1):5–15.

  12. Saunders H, Vehviläinen-Julkunen K. Key considerations for selecting instruments when evaluating healthcare professionals’ evidence-based practice competencies: a discussion paper. J Adv Nurs. 2018;74(10):2301–11.

  13. Buchanan H, Siegfried N, Jelsma J. Survey instruments for knowledge, skills, attitudes and behaviour related to evidence-based practice in occupational therapy: a systematic review. Occup Ther Int. 2016;23(2):59–90.

  14. Leung K, Trevena L, Waters D. Systematic review of instruments for measuring nurses' knowledge, skills and attitudes for evidence-based practice. J Adv Nurs. 2014;70(10):2181–95.

  15. Tilson JK, Kaplan SL, Harris JL, Hutchinson A, Ilic D, Niederman R, et al. Sicily statement on classification and development of evidence-based practice learning assessment tools. BMC Med Educ. 2011;11:78.

  16. Glegg SMN, Holsti L. Measures of knowledge and skills for evidence-based practice: a systematic review. Can J Occup Ther. 2010;77(4):219–32.

  17. Shaneyfelt T, Baum KD, Bell D, Feldstein D, Houston TK, Kaatz S, et al. Instruments for evaluating education in evidence-based practice: a systematic review. JAMA. 2006;296(9):1116–27.

  18. Melnyk BM. Breaking down silos and making use of the evidence-based practice competencies in healthcare and academic programs: an urgent call to action. Worldviews Evid Based Nurs. 2018;15(1):3–4.

  19. Meretoja R, Isoaho H, Leino-Kilpi H. Nurse competence scale: development and psychometric testing. J Adv Nurs. 2004;47(2):124–33.

  20. Haggman-Laitila A, Mattila L-R, Melender H-L. Educational interventions on evidence-based nursing in clinical practice: a systematic review with qualitative analysis. Nurse Educ Today. 2016;43:50–9.

  21. Hines S, Ramsbotham J, Coyer F. The effectiveness of interventions for improving the research literacy of nurses: a systematic review. Worldviews Evid Based Nurs. 2015;12(5):265–72.

  22. Middlebrooks R Jr, Carter-Templeton H, Mund AR. Effect of evidence-based practice programs on individual barriers of workforce nurses: an integrative review. J Contin Educ Nurs. 2016;47(9):398–406.

  23. Fernandez-Dominguez JC, Sese-Abad A, Morales-Asencio JM, Oliva-Pascual-Vaca A, Salinas-Bueno I, de Pedro-Gomez JE. Validity and reliability of instruments aimed at measuring evidence-based practice in physical therapy: a systematic review of the literature. J Eval Clin Pract. 2014;20(6):767–78.

  24. Estabrooks CA. Will evidence-based nursing practice make practice perfect? Can J Nurs Res. 1998;30(4):273–94.

  25. Streiner D, Norman G, Cairney J. Health measurement scales: a practical guide to their development and use. 5th ed. Oxford: Oxford University Press; 2015.

  26. American Educational Research Association, American Psychological Association, National Council on Measurement in Education. The standards for educational and psychological testing. Washington, DC: American Educational Research Association; 2014.

  27. Belita E, Yost J, Squires JE, Ganann R, Burnett T, Dobbins M. Measures assessing attributes of evidence-informed decision-making (EIDM) competence among nurses: a systematic review protocol. Syst Rev. 2018;7(181):8.

  28. Guyatt GH. Evidence-based medicine. Ann Intern Med. 1991;114(SUPPL. 2):A-16.

  29. Squires JE, Estabrooks CA, O'Rourke HM, Gustavsson P, Newburn-Cook CV, Wallin L. A systematic review of the psychometric properties of self-report research utilization measures used in healthcare. Implement Sci. 2011;6:83.

  30. Upton D, Upton P. Development of an evidence-based practice questionnaire for nurses. J Adv Nurs. 2006;53(4):454–8.

  31. Duffy JR, Culp S, Yarberry C, Stroupe L, Sand-Jecklin K, Sparks CA. Nurses' research capacity and use of evidence in acute care: baseline findings from a partnership study. J Nurs Adm. 2015;45(3):158–64.

  32. Duffy JR, Culp S, Sand-Jecklin K, Stroupe L, Lucke-Wold N. Nurses' research capacity, use of evidence, and research productivity in acute care: year 1 findings from a partnership study. J Nurs Adm. 2016;46(1):12–7.

  33. Linton MJ, Prasun MA. Evidence-based practice: collaboration between education and nursing management. J Nurs Manag. 2013;21(1):5–16.

  34. Fehr ST. Examining the relationship between nursing informatics competency and evidence-based practice competency among acute care nurses: George Mason University; 2014.

  35. Adamu A, Naidoo JR. Exploring the perceptions of registered nurses towards evidence-based practice in a selected general hospital in Nigeria. Afr J Nurs Midwifery. 2015;17(1):33–46.

  36. AbuRuz ME, Hayeah HA, Al-Dweik G, Al-Akash H. Knowledge, attitudes, and practice about evidence-based practice: a Jordanian study. Health Sci J. 2017;11(2):1–8.

  37. Agnew D. A survey of nurses' knowledge, attitude and skills with evidence-based practice in the practice setting. Nurs Res. 2016;65(2):E100.

  38. Allen N, Lubejko BG, Thompson J, Turner BS. Evaluation of a web course to increase evidence-based practice knowledge among nurses. Clin J Oncol Nurs. 2015;19(5):623–7.

  39. Ammouri AA, Raddaha AA, Dsouza P, Geethakrishnan R, Noronha JA, Obeidat AA, et al. Evidence-based practice: knowledge, attitudes, practice and perceived barriers among nurses in Oman. Sultan Qaboos Univ Med J. 2014;14(4):e537–45.

  40. Brown CE, Ecoff L, Kim SC, Wickline MA, Rose B, Klimpel K, et al. Multi-institutional study of barriers to research utilisation and evidence-based practice among hospital nurses. J Clin Nurs. 2010;19(13–14):1944–51.

  41. Brown CE, Wickline MA, Ecoff L, Glaser D. Nursing practice, knowledge, attitudes and perceived barriers to evidence-based practice at an academic medical center. J Adv Nurs. 2009;65(2):371–81.

  42. Carlone JB, Igbirieh O. Measuring attitudes and knowledge of evidence-based practice in the Qatar nursing workforce: a quantitative cross-sectional analysis of barriers to empowerment. Avicenna. 2014;2014(1):5 (no pagination).

  43. Duff J, Butler M, Davies M, Williams R, Carlile J. Perioperative nurses' knowledge, practice, attitude, and perceived barriers to evidence use: a multisite, cross-sectional survey. ACORN J Perioperative Nurs Australia. 2014;27(4):28–35.

  44. Gonzalez-Torrente S, Pericas-Beltran J, Bennasar-Veny M, Adrover-Barcelo R, Morales-Asencio J, De Pedro-Gomez J. Perception of evidence-based practice and the professional environment of primary health care nurses in the Spanish context: a cross-sectional study. BMC Health Serv Res. 2012;12:227.

  45. Hagedorn Wonder A, McNelis AM, Spurlock DJ, Ironside PM, Lancaster S, Davis CR, et al. Comparison of Nurses' self-reported and objectively measured evidence-based practice knowledge. J Contin Educ Nurs. 2017;48(2):65–70.

  46. Hasheesh MOA, AbuRuz ME. Knowledge, attitude and practice of nurses towards evidence-based practice at Al-Medina. KSA Jordan Med J. 2017;51(2):47–56.

  47. Hwang JI, Park HA. Relationships between evidence-based practice, quality improvement and clinical error experience of nurses in Korean hospitals. J Nurs Manag. 2015;23(5):651–60.

  48. Kim SC, Brown CE, Ecoff L, Davidson JE, Gallo A-M, Klimpel K, et al. Regional evidence-based practice fellowship program: impact on evidence-based practice implementation and barriers. Clin Nurs Res. 2013;22(1):51–69.

  49. Koehn ML, Lehman K. Nurses' perceptions of evidence-based nursing practice. J Adv Nurs. 2008;62(2):209–15.

  50. Lovelace R, Noonen M, Bena JF, Tang AS, Angie M, Cwynar R, et al. Value of, attitudes toward, and implementation of evidence-based practices based on use of self-study learning modules. J Contin Educ Nurs. 2017;48(5):209–16.

  51. Moore L. Effectiveness of an online educational module in improving evidence-based practice skills of practicing registered nurses. Worldviews Evid Based Nurs. 2017;14(5):358–66.

  52. Pereira RP, Guerra AC, Cardoso MJ, dos Santos AT, de FM, Carneiro AC. Validation of the Portuguese version of the evidence-based practice questionnaire. Rev Lat Am Enfermagem. 2015;23(2):345–51.

  53. Perez-Campos M, Sanchez-Garcia I, Pancorbo-Hidalgo P. Knowledge, attitude and use of evidence-based practice among nurses active on the internet. Investig Educ Enferm. 2014;32(3):451–60.

  54. Phillips C. Relationships between duration of practice, educational level, and perception of barriers to implement evidence-based practice among critical care nurses. Int J Evid Based Healthc. 2015;13(4):224–32.

  55. Prior P, Wilkinson J, Neville S. Practice nurse use of evidence in clinical practice: a descriptive survey. Nurs Prax NZ. 2010;26(2):14–25.

  56. Ramos-Morcillo A, Fernandez-Salazar S, Ruzafa-Martinez M, Del-Pino-Casado R. Effectiveness of a brief, basic evidence-based practice course for clinical nurses. Worldviews Evid Based Nurs. 2015;12(4):199–207.

  57. Sese-Abad A, De Pedro-Gomez J, Bennasar-Veny M, Sastre P, Fernandez-Dominguez J, Morales-Asencio J. A multisample model validation of the evidence-based practice questionnaire. Res Nurs Health. 2014;37(5):437–46.

  58. Shafiei E, Baratimarnani A, Goharinezhad S, Kalhor R, Azmal M. Nurses' perceptions of evidence-based practice: a quantitative study at a teaching hospital in Iran. Med J Islam Repub Iran. 2014;28:135.

  59. Sim JY, Jang KS, Kim NY. Effects of education programs on evidence-based practice implementation for clinical nurses. J Contin Educ Nurs. 2016;47(8):363–71.

  60. Son YJ, Song Y, Park SY, Kim JI. A psychometric evaluation of the Korean version of the evidence-based practice questionnaire for nurses. Contemp Nurse. 2014;49:4–14.

  61. Stavor DC, Zedreck-Gonzalez J, Hoffmann RL. Improving the use of evidence-based practice and research utilization through the identification of barriers to implementation in a critical access hospital. J Nurs Adm. 2017;47(1):56–61.

  62. Toole BM, Stichler JF, Ecoff L, Kath L. Promoting nurses' knowledge in evidence-based practice: do educational methods matter? J Nurses Prof Dev. 2013;29(4):173–81.

  63. Wan LPA. Educational intervention effects on nurses' perceived ability to implement evidence-based practice. Ann Arbor: University of Phoenix; 2017.

  64. White-Williams C, Patrician P, Fazeli P, Degges MA, Graham S, Andison M, et al. Use, knowledge, and attitudes toward evidence-based practice among nursing staff. J Contin Educ Nurs. 2013;44(6):246–54; quiz 55–6.

  65. Williamson KM, Almaskari M, Lester Z, Maguire D. Utilization of evidence-based practice knowledge, attitude, and skill of clinical nurses in the planning of professional development programming. J Nurses Prof Dev. 2015;31(2):73–80.

  66. Xie HT, Zhou ZY, Xu CQ, Ong S, Govindasamy A. Nurses' attitudes towards research and evidence-based practice. Ann Acad Med Singapore. 2015;44:S240.

  67. Adams SL. Understanding the variables that influence translation of evidence-based practice into school nursing: University of Iowa; 2007.

  68. Chiu YW, Weng YH, Lo HL, Shih YH, Hsu CC, Kuo KN. Impact of a nationwide outreach program on the diffusion of evidence-based practice in Taiwan. Int J Qual Health Care. 2010;22(5):430–6.

  69. Bissett KM, Cvach M, White KM. Improving competence and confidence with evidence-based practice among nurses: outcomes of a quality improvement project. J Nurses Prof Dev. 2016;32(5):248–55.

  70. Seyyedrasooli A, Zamanzadeh V, Valizadeh L, Tadaion F. Individual potentials related to evidence-based nursing among nurses in teaching hospitals affiliated to Tabriz University of Medical Sciences, Tabriz, Iran. J Caring Sci. 2012;1(2):93–9.

  71. Yip WK, Mordiffi SZ, Majid MS, Ang EKN. Nurses' perspective towards evidence-based practice: a descriptive study. Ann Acad Med Singapore. 2010;39:S372.

  72. Chew ML, Sim KH, Sim YF, Yan CC. Attitudes, skills and knowledge of primary healthcare nurses on the use of evidence-based nursing (EBN) and barriers influencing the use of EBN in the primary healthcare setting. Ann Acad Med Singapore. 2015;1:S503.

  73. Melnyk BM, Fineout-Overholt E, Fischbeck Feinstein N, Li H, Small L, Wilcox L, et al. Nurses' perceived knowledge, beliefs, skills, and needs regarding evidence-based practice: implications for accelerating the paradigm shift. Worldviews Evid Based Nurs. 2004;1(3):185–93.

  74. Hellier S, Cline T. Factors that affect nurse practitioners' implementation of evidence-based practice. J Am Assoc Nurse Pract. 2016;28(11):612–21.

  75. Connor L, Paul F, McCabe M, Ziniel S. Measuring nurses' value, implementation, and knowledge of evidence-based practice: further psychometric testing of the quick-EBP-VIK survey. Worldviews Evid Based Nurs. 2017;14(1):10–21.

  76. Connor L. Pediatric nurses' knowledge, values, and implementation of evidence-based practice and use of two patient safety goals. Ann Arbor: University of Massachusetts Boston; 2017.

  77. Barako TD, Chege M, Wakasiaka S, Omondi L. Factors influencing application of evidence-based practice among nurses. Afr J Midwifery Women's Health. 2012;6(2):71–7.

  78. Majid MS, Foo S, Luyt B, Zhang X, Theng Y, Chang Y, et al. Adopting evidence-based practice in clinical decision making: nurses' perceptions, knowledge, and barriers. J Med Libr Assoc. 2011;99(3):8.

  79. Farokhzadian J, Khajouei R, Ahmadian L. Evaluating factors associated with implementing evidence-based practice in nursing. J Eval Clin Pract. 2015;21(6):1107–13.

  80. Saunders H, Stevens KR, Vehvilainen-Julkunen K. Nurses' readiness for evidence-based practice at Finnish university hospitals: a national survey. J Adv Nurs. 2016;72(8):1863–74.

  81. Gerrish K, Guillaume L, Kirshbaum M, McDonnell A, Tod A, Nolan M. Factors influencing the contribution of advanced practice nurses to promoting evidence-based practice among front-line nurses: findings from a cross-sectional survey. J Adv Nurs. 2011;67(5):1079–90.

  82. Gu MO, Ha Y, Kim J. Development and validation of an instrument to assess knowledge and skills of evidence-based nursing. J Clin Nurs. 2015;24(9–10):1380–93.

  83. Laibhen-Parkes N. Web-based evidence based practice educational intervention to improve EBP competence among BSN-prepared pediatric bedside nurses: a mixed methods pilot study: Mercer University; 2014.

  84. Melnyk BM, Fineout-Overholt E, Mays MZ. The evidence-based practice beliefs and implementation scales: psychometric properties of two new instruments. Worldviews Evid Based Nurs. 2008;5(4):208–16.

  85. Gallagher-Ford L. The influence of nurse leaders and nurse educators on registered nurses' evidence-based practice: Widener University School of Nursing; 2012.

  86. Mooney S. The effect of education on evidence-based practice and Nurses' beliefs/attitudes toward and intent to use evidence-based practice: Gardner-Webb University; 2012.

  87. Underhill M, Roper K, Siefert ML, Boucher J, Berry D. Evidence-based practice beliefs and implementation before and after an initiative to promote evidence-based nursing in an ambulatory oncology setting. Worldviews Evid Based Nurs. 2015;12(2):70–8.

  88. Baxley M. School nurse's implementation of evidence-based practice: a mixed method study. Ann Arbor: University of Phoenix; 2016.

  89. Bovino LR, Aquila A, Feinn R. Evidence-based nursing practice in a contemporary acute care hospital setting. Nurs Res. 2016;65(2):E50.

  90. Dropkin MJ. Review of "the state of evidence-based practice in US nurses". ORL Head Neck Nurs. 2013;31(2):14–6.

  91. Eaton LH, Meins AR, Mitchell PH, Voss J, Doorenbos AZ. Evidence-based practice beliefs and behaviors of nurses providing cancer pain management: a mixed-methods approach. Oncol Nurs Forum. 2015;42(2):165–73.

  92. Estrada N. Exploring perceptions of a learning organization by RNs and relationship to EBP beliefs and implementation in the acute care setting. Worldviews Evid Based Nurs. 2009;6(4):200–9.

  93. Estrada NA. Learning organizations and evidence-based practice by RNs: University of Arizona; 2007.

  94. Friesen MA, Brady JM, Milligan R, Christensen P. Findings from a pilot study: bringing evidence-based practice to the bedside. Worldviews Evid Based Nurs. 2017;14(1):22–34.

  95. Harper MG, Gallagher-Ford L, Warren JI, Troseth M, Sinnott LT, Thomas BK. Evidence-based practice and U.S. healthcare outcomes: findings from a national survey with nursing professional development practitioners. J Nurses Prof Dev. 2017;33(4):170–9.

  96. Hauck S, Winsett RP, Kuric J. Leadership facilitation strategies to establish evidence-based practice in an acute care hospital. J Adv Nurs. 2013;69(3):664–74.

  97. Kang Y, Yang IS. Evidence-based nursing practice and its correlates among Korean nurses. Appl Nurs Res. 2016;31:46–51.

  98. Kaplan L, Zeller E, Damitio D, Culbert S, Bayley KB. Improving the culture of evidence-based practice at a Magnet hospital. J Nurses Prof Dev. 2014;30(6):274–80; quiz E1–2.

  99. Kim SC, Ecoff L, Brown CE, Gallo AM, Stichler JF, Davidson JE. Benefits of a regional evidence-based practice fellowship program: a test of the ARCC model. Worldviews Evid Based Nurs. 2017;14(2):90–8.

  100. Kim SC, Stichler JF, Ecoff L, Brown CE, Gallo AM, Davidson JE. Predictors of evidence-based practice implementation, job satisfaction, and group cohesion among regional fellowship program participants. Worldviews Evid Based Nurs. 2016;13(5):340–8.

  101. Levin RF, Fineout-Overholt E, Melnyk BM, Barnes M, Vetter MJ. Fostering evidence-based practice to improve nurse and cost outcomes in a community health setting: a pilot test of the advancing research and clinical practice through close collaboration model. Nurs Adm Q. 2011;35(1):21–33.

  102. Lynch SH. Nurses' beliefs about and use of evidence-based practice: University of Connecticut; 2012.

  103. Macyk I. Staff nurse engagement, decisional involvement, staff nurse participation in shared governance councils and the relationship to evidence based practice belief and implementation. Ann Arbor: Adelphi University; 2017.

  104. Mariano KG, Caley LM, Eschberger L, Woloszyn A, Volker P, Leonard MS, et al. Building evidence-based practice with staff nurses through mentoring. J Neonatal Nurs. 2009;15(3):81–7.

  105. Melnyk BM, Bullock T, McGrath J, Jacobson D, Kelly S, Baba L. Translating the evidence-based NICU COPE program for parents of premature infants into clinical practice: impact on nurses' evidence-based practice and lessons learned. J Perinat Neonatal Nurs. 2010;24(1):74–80.

  106. Melnyk BM, Fineout-Overholt E, Gallagher-Ford L, Kaplan L. The state of evidence-based practice in US nurses: critical implications for nurse leaders and educators. J Nurs Adm. 2012;42(9):410–7.

  107. Pryse YM. Using evidence based practice: the relationship between work environment, nursing leadership, and nurses at the bedside: Indiana University; 2012.

  108. Rose Bovino L, Aquila AM, Bartos S, McCurry T, Cunningham CE, Lane T, et al. A cross-sectional study on evidence-based nursing practice in the contemporary hospital setting: implications for nurses in professional development. J Nurses Prof Dev. 2017;33(2):64–9.

  109. Skela-Savic B, Hvalic-Touzery S, Pesjak K. Professional values and competencies as explanatory factors for the use of evidence-based practice in nursing. J Adv Nurs. 2017;73(8):1910–23.

  110. Skela-Savic B, Pesjak K, Lobe B. Evidence-based practice among nurses in Slovenian hospitals: a national survey. Int Nurs Rev. 2016;63(1):122–31.

  111. Kim SC, Stichler JF, Ecoff L, Gallo A-M, Davidson JE. Six-month follow-up of a regional evidence-based practice fellowship program. J Nurs Adm. 2017;47(4):238–43.

  112. Stokke K, Olsen NR, Espehaug B, Nortvedt MW. Evidence based practice beliefs and implementation among nurses: a cross-sectional study. BMC Nurs. 2014;13(1):8.

  113. Sweetapple C. Change adoption willingness: development of a measure of willingness to adopt evidence-based practice in registered nurses. Ann Arbor: Hofstra University; 2015.

  114. Temple B, Sawatzky-Dickson D, Pereira A, Martin D, McMillan D, Cepanec D, et al. Improving nurses' beliefs and use of evidence in their practice, nursing education and health care organizations. Quebec City: Knowledge Translation (KT) Canada Annual Scientific Meeting; 2014.

  115. Varnell G, Haas B, Duke G, Hudson K. Effect of an educational intervention on attitudes toward and implementation of evidence-based practice. Worldviews Evid Based Nurs. 2008;5(4):172–81.

  116. Verloo H, Desmedt M, Morin D. Beliefs and implementation of evidence-based practice among nurses and allied healthcare providers in the Valais Hospital, Switzerland. J Eval Clin Pract. 2017;23(1):139–48.

  117. Wang SC, Lee LL, Wang WH, Sung HC, Chang HK, Hsu MY, et al. Psychometric testing of the Chinese evidence-based practice scales. J Adv Nurs. 2012;68(11):2570–7.

  118. Warren JI, McLaughlin M, Bardsley J, Eich J, Esche CA, Kropkowski L, et al. The strengths and challenges of implementing EBP in healthcare systems. Worldviews Evid Based Nurs. 2016;13(1):15–24.

  119. Warren JI, Montgomery KL, Friedmann E. Three-year pre-post analysis of EBP integration in a magnet-designated community hospital. Worldviews Evid Based Nurs. 2016;13(1):50–8.

  120. Bostrom AM, Rudman A, Ehrenberg A, Gustavsson JP, Wallin L. Factors associated with evidence-based practice among registered nurses in Sweden: a national cross-sectional study. BMC Health Serv Res. 2013;13:165.

  121. Gerrish K, Clayton J. Promoting evidence-based practice: an organizational approach. J Nurs Manag. 2004;12(2):114–23.

  122. Gerrish K, Ashworth P, Lacey A, Bailey J, Cooke J, Kendall S, et al. Factors influencing the development of evidence-based practice: a research tool. J Adv Nurs. 2007;57(3):328–38.

  123. Gerrish K, Ashworth P, Lacey A, Bailey J. Developing evidence-based practice: experiences of senior and junior clinical nurses. J Adv Nurs. 2008;62(1):62–73.

  124. Mills J, Field J, Cant R. The place of knowledge and evidence in the context of Australian general practice nursing. Worldviews Evid Based Nurs. 2009;6(4):219–28.

  125. Baird LM, Miller T. Factors influencing evidence-based practice for community nurses. Br J Community Nurs. 2015;20(5):233–42.

  126. Shin JI, Lee E. The influence of social capital on nurse-perceived evidence-based practice implementation in South Korea. J Nurs Scholarsh. 2017;49(3):267–76.

  127. Cato DL. The relationship between a nurse residency program and evidence-based practice knowledge of the incumbent nurse across a multihospital system: a quantitative correlational design: Capella University; 2013.

  128. Hagler D, Mays MZ, Stillwell SB, Kastenbaum B, Brooks R, Fineout-Overholt E, et al. Preparing clinical preceptors to support nursing students in evidence-based practice. J Contin Educ Nurs. 2012;43(11):502–8.

  129. Smith-Keys S. Education and mentoring of staff nurses in evidence based practice. Ann Arbor: Walden University; 2016.

  130. Thorsteinsson HS. Translation and validation of two evidence-based nursing practice instruments. Int Nurs Rev. 2012;59(2):259–65.

  131. Thorsteinsson HS. Icelandic nurses' beliefs, skills, and resources associated with evidence-based practice and related factors: a national survey. Worldviews Evid Based Nurs. 2013;10(2):116–26.

  132. Thorsteinsson HS, Sveinsdottir H. Readiness for and predictors of evidence-based practice of acute-care nurses: a cross-sectional postal survey. Scand J Caring Sci. 2014;28(3):572–81.

  133. Hain D, Haras M. Changing nephrology nurses' beliefs about the value of evidence-based practice and their ability to implement in clinical practice. Nephrol Nurs J. 2015;42(6):563–7.

  134. Park JW, Ahn JA, Park MM. Factors influencing evidence-based nursing utilization intention in Korean practice nurses. Int J Nurs Pract. 2015;21(6):868–75.

  135. Ruzafa-Martinez M, Lopez-Iborra L, Madrigal-Torres M. Attitude towards evidence-based nursing questionnaire: development and psychometric testing in Spanish community nurses. J Eval Clin Pract. 2011;17(4):664–70.

  136. Almaskari M. Omani staff nurses' and nurse leaders' attitudes toward and perceptions of barriers and facilitators to the implementation of evidence-based practice. Ann Arbor: Widener University; 2017.

  137. Thiel L, Ghosh Y. Determining registered nurses' readiness for evidence-based practice. Worldviews Evid Based Nurs. 2008;5(4):182–92.

  138. Squires JE, Hutchinson AM, Bostrom AM, Deis K, Norton PG, Cummings GG, et al. A data quality control program for computer-assisted personal interviews. Nurs Res Pract. 2012;2012:303816.

  139. Nunnally JC. Psychometric theory. 2nd ed. New York, NY: McGraw-Hill; 1978.

  140. Streiner DL. A checklist for evaluating the usefulness of rating scales. Can J Psychiatry. 1993;38(2):140–8.

  141. Optimizing the Role of Nursing in Home Health. Ottawa, ON: Canadian Nurses Association; 2013. https://cna-aiic.ca/~/media/cna/page-content/pdf-en/optimizing_the_role_of_nursing_in_home_health_e.pdf?la=en.

  142. Martin-Misener R, Bryant-Lukosius D. Optimizing the role of nurses in primary care in Canada: final report. Ottawa: Canadian Nurses Association; 2014.

  143. Gibbard R. Sizing up the challenge: meeting the demand for long-term care in Canada. Ottawa: Conference Board of Canada; 2017.

  144. Cheetham G, Chivers G. The reflective (and competent) practitioner: a model of professional competence which seeks to harmonise the reflective practitioner and competence-based approaches. J Eur Ind Train. 1998;22(7):267–76.

  145. Cowan D, Norman I, Coopamah V. Competence in nursing practice: a controversial concept - a focused review of literature. Accid Emerg Nurs. 2007;15:20–6.

  146. Gonczi A. Competency based assessment in the professions in Australia. Assessment Educ Principles Policy Practice. 1994;1(1):27–44.

  147. Baartman L, Bastiaens T, Kirschner P, van der Vleuten C. Evaluating assessment quality in competence-based education: a qualitative comparison of two frameworks. Educ Res Rev. 2007;2:114–29.

  148. Eraut M. Developing professional knowledge and competence. Washington, D.C.: Falmer Press; 1994.

  149. Hand H. Promoting effective teaching and learning in the clinical setting. Nurs Stand. 2006;20(39):55–63.

  150. Eraut M. Concepts of competence. J Interprof Care. 1998;12(2):127–39.

  151. Meretoja R, Numminen O, Isoaho H, Leino-Kilpi H. Nurse competence between three generational nurse cohorts: a cross-sectional study. Int J Nurs Pract. 2015;21:350–8.

  152. Numminen O, Meretoja R, Isoaho H, Leino-Kilpi H. Professional competence of practising nurses. J Clin Nurs. 2013;22:1411–23.

  153. Fitzpatrick R, Davey C, Buxton M, Jones D. Evaluating patient-based outcome measures for use in clinical trials. United Kingdom: NHS R&D HTA Programme; 2007.

  154. Bing-Jonsson P, Hofoss D, Kirkevold M, Bjørk IT, Foss C. Nursing older people-competence evaluation tool: development and psychometric evaluation. J Nurs Meas. 2015;23(1):127–53.

  155. Kalisch BJ, Lee H, Salas E. The development and testing of the nursing teamwork survey. Nurs Res. 2010;59(1):42–50.

  156. DeVon HA, Block ME, Moyle-Wright P, Ernst DM, Hayden SJ, Lazzara DJ, et al. A psychometric toolbox for testing validity and reliability. J Nurs Scholarsh. 2007;39(2):155–64.

  157. Nunnally JC. Psychometric theory. New York: McGraw-Hill; 1967.

  158. Streiner D. Starting at the beginning: an introduction to coefficient alpha and internal consistency. J Pers Assess. 2003;80(1):99–103.

  159. Upton D, Upton P, Scurlock-Evans L. The reach, transferability, and impact of the evidence-based practice questionnaire: a methodological and narrative literature review. Worldviews Evid Based Nurs. 2014;11(1):46–54.

  160. Leung K, Trevena L, Waters D. Development of an appraisal tool to evaluate strength of an instrument or outcome measure. Nurse Res. 2012;20(2):13–9.

  161. McKenna H, Treanor C, O'Reilly D, Donnelly M. Evaluation of the psychometric properties of self-reported measures of alcohol consumption: a COSMIN systematic review. Subst Abuse Treat Prev Policy. 2018;13(1):1–19.

  162. Belita E, Yost J, Squires JE, Ganann R, Dobbins M. Measures assessing attributes of evidence informed decision-making competence among nurses: a psychometric systematic review. 10th Knowledge Translation (KT) Canada Annual Scientific Meeting; 2019; Winnipeg, MB, Canada.

Acknowledgements

The authors gratefully acknowledge the support of: Ms. Laura Banfield (Health Sciences Librarian) with the development of the search strategy; Ms. Donna Fitzpatrick-Lewis (Research Co-ordinator) and Ms. Sharon Peck-Reid (Research Assistant) from the McMaster Evidence Review and Synthesis Team for their support with reference management and preparing for data synthesis; and Ms. Tiffany Dang (TD) for support with data extraction. The abstract for this manuscript was presented at the 10th Knowledge Translation Canada Annual Scientific Meeting [162].

Funding

This study did not receive any specific funding.

Author information

Authors and Affiliations

Authors

Contributions

EB, JY, JES, RG, and MD contributed to the design of the systematic review. EB and TB conducted reference screening and data extraction. EB drafted the initial manuscript. All authors reviewed and provided feedback on manuscript drafts, and all authors read and approved the final manuscript.

Corresponding author

Correspondence to Emily Belita.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

JY is an independent contractor with the American College of Physicians. The other authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Additional file 1.

Electronic database search strategy. Identifies key words used for each primary database searched.

Additional file 2.

Theoretical and empirical literature to guide data analysis of sources of validity evidence. Tables used to determine supporting validity evidence for data extracted.

Additional file 3.

Included studies. Description of included studies.

Additional file 4.

Sources of validity evidence for each measure. Identifies supporting data for each source of validity evidence established.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Belita, E., Squires, J.E., Yost, J. et al. Measures of evidence-informed decision-making competence attributes: a psychometric systematic review. BMC Nurs 19, 44 (2020). https://0-doi-org.brum.beds.ac.uk/10.1186/s12912-020-00436-8


Keywords