
Development and Psychometric Evaluation of an Instrument to Assess Cross-Cultural Competence of Healthcare Professionals (CCCHP)

  • Gerda Bernhard ,

    Gerda.Laengst@gmx.de; Gerda.Laengst@med.uni-heidelberg.de

    Current address: Department of General Practice and Health Services Research, University Hospital of Heidelberg, Heidelberg, Germany

    Affiliation Department of Medical Psychology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany

  • Ronald A. Knibbe,

    Affiliation Department of Health Promotion, Maastricht University, Maastricht, The Netherlands

  • Alessa von Wolff,

    Affiliation Department of Medical Psychology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany

  • Demet Dingoyan,

    Affiliation Department of Medical Psychology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany

  • Holger Schulz,

    Affiliation Department of Medical Psychology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany

  • Mike Mösko

    Affiliation Department of Medical Psychology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany

Abstract

Background

Cultural competence of healthcare professionals (HCPs) is recognized as a strategy to reduce cultural disparities in healthcare. However, standardised, valid and reliable instruments to assess HCPs’ cultural competence are notably lacking. The present study aims to 1) identify the core components of cultural competence from a healthcare perspective, 2) develop a self-report instrument to assess the cultural competence of HCPs and 3) evaluate the psychometric properties of the new instrument.

Methods

The conceptual model and initial item pool, which were applied to the cross-cultural competence instrument for the healthcare profession (CCCHP), were derived from an expert survey (n = 23), interviews with HCPs (n = 12), and a broad narrative review of assessment instruments and conceptual models of cultural competence. The item pool was reduced systematically, resulting in a 59-item instrument. A sample of 336 psychologists in advanced psychotherapeutic training and 409 medical students participated in order to evaluate the construct validity and reliability of the CCCHP.

Results

Construct validity was supported by principal component analysis, which led to a 32-item six-component solution with 50% of the total variance explained. The dimensions of HCPs’ cultural competence are: Cross-Cultural Motivation/Curiosity, Cross-Cultural Attitudes, Cross-Cultural Skills, Cross-Cultural Knowledge/Awareness and Cross-Cultural Emotions/Empathy. For the total instrument, the internal consistency reliability was .87, and the dimensions’ Cronbach’s α values ranged from .54 to .84. The discriminating power of the CCCHP was indicated by statistically significant mean differences in CCCHP subscale scores between predefined groups.

Conclusions

The 32-item CCCHP exhibits acceptable psychometric properties, particularly content and construct validity, for examining HCPs’ cultural competence. The CCCHP with its five dimensions offers a comprehensive assessment of HCPs’ cultural competence and has the ability to distinguish between groups that are expected to differ in cultural competence. This instrument can foster professional development through systematic self-assessment and thus contributes to improving the quality of patient care.

Introduction

European countries, including Germany, are becoming increasingly ethnically and culturally diverse as a result of rising international immigration and asylum applications. Inequalities in health and in access to healthcare between migrants and local populations in Europe have been noted [1]. Many studies have suggested that cultural differences influence communication between patients and providers [2], resulting in inadequate diagnostic testing [3], miscommunication about etiologies, inadequate treatment plans [4], and disregard of patients’ thoughts and ideas [5].

Cultural competence has gained national and international attention as a strategy to reduce cultural disparities in health and healthcare [6]. The utilization of cultural competency training as a means to improve the cultural competence of healthcare providers and address health disparities is well documented [7]. However, an often-cited weakness of cultural competency training is the lack of standardised and validated instruments to assess its impact [8, 9]. Assessing the cultural competence of healthcare professionals involved in direct patient care (HCPs; i.e. physicians, clinicians, psychotherapists, psychologists, midwives and nurses) is important for determining individual strengths and weaknesses and fostering self-awareness [10], and is therefore a necessary, effective and systematic way to plan for and integrate cultural competency in healthcare organisations (e.g. hospitals, primary care services) [11].

Although a vast amount of international literature on cultural competence exists, there is considerable confusion about what constitutes cultural competence. A variety of academic disciplines, including healthcare and nursing [12, 13], counselling [14], social work [15], and other health professions such as occupational therapy [16] and rehabilitation [17], have laid claim to the construct of cultural competence and produced a number of models. The most widely used conceptualisation of culturally competent practice is based on a three-dimensional model (beliefs and attitudes, knowledge, and skills) presented by Sue et al. [18]. While many academics agree that cultural competence comprises knowledge, skills, and attitudes, the definition and operationalisation of cultural competence differ greatly between studies and instruments [19, 20]. Moreover, consultation with members of the target population (HCPs) and experts for the purpose of conceptualising the construct of cultural competence is underused in existing instruments. Furthermore, the majority of cultural competence models and instruments have been developed in the United States and reflect the socio-cultural and political context in which they were developed [21, 22]. Thus, models need to be further defined, adapted, and researched for effective application in the European context, where healthcare systems and the cultural diversity of the population are different [21, 23].

For empirical measurement, cultural competence is currently conceptualised as a multidimensional construct that consists of distinct, yet related, factors. Most instruments were developed for a particular group of HCPs (e.g. physicians or nurses) [7, 8]. According to several reviews [24, 25], the four most frequently cited instruments are the Multicultural Counselling Inventory (MCI) [26], the Cross-Cultural Counselling Inventory-Revised (CCCI-R) [27], the Multicultural Awareness-Knowledge-and-Skills Survey (MAKSS, MAKSS-CE-R) [28, 29], and the Multicultural Counselling Knowledge and Awareness Scale (MCAS-B, MCKAS) [30, 31]. All of these instruments use a brief format and multi-factorial design and were developed in the United States for the counselling profession. All except the CCCI-R, an observer-rated measure, are self-report instruments. In general, these instruments are based on Sue et al.’s [18] model and primarily assess the level of awareness, knowledge and skills counsellors possess in working with diverse clientele in a culturally sensitive manner.

Despite considerable advances in cultural competence research, several limitations of current instruments have been noted. One limitation is the insufficient amount of psychometric data available. Recent reviews claim that additional factor analytic and validation studies are needed to improve the overall validity and utility of existing instruments [8, 9, 20, 32]. Although the MCKAS, MCI and CCCI-R were conceptually rooted in the tripartite model proposed by Sue et al. [18], factor analytic studies did not support a three-dimensional structure [24, 33]. For instance, the MCI revealed a four-factor model (accounting for 36% of variance) with unknown test-retest stability [24, 26], the MCKAS showed a two-factor model (accounting for 32% of variance) with questionable criterion validity [31], and the factor structure of the CCCI-R demonstrated both a one-factor and a three-factor solution [24]. Other potential concerns with existing instruments include high correlations between dissimilar subscales within an instrument, a lack of clarity of constructs and limitations related to self-report measures (e.g. social desirability) [20].

Currently, no instrument is available to assess the cultural competence of different HCPs within a German healthcare context. Thus, this study aimed to 1) identify the core components of cultural competence by incorporating the perspectives of practising HCPs and experts, 2) develop a self-report instrument to assess the cultural competence of different HCPs and 3) evaluate the psychometric properties of the new instrument. It was furthermore hypothesized that this instrument’s items would extend the three-dimensional model underlying the most frequently cited instruments.

Methods

The development and psychometric evaluation of the cross-cultural competence instrument for the healthcare profession (CCCHP) involved a systematic process [34–37] and proceeded in two phases with ten steps: 1) the instrument development phase, including six steps, and 2) the psychometric evaluation phase, including four steps (S1 Fig). Methods and results for each phase will be presented sequentially. A detailed description of the methods is presented as supporting information (S1 Methods).

Phase 1: Instrument development

In line with Lynn [35] and Liu et al. [37], instrument content was established by a review of the cultural competence literature and existing instruments, an expert survey and interviews with HCPs. Subsequently, items were generated to constitute a preliminary instrument.

Step 1 began with a narrative literature review, which aimed to 1) identify conceptual models and existing instruments of cultural competence within a healthcare setting, 2) identify the construct and content domains distinguished, and 3) locate items in existing instruments. Computerized searches of 12 bibliographic databases, Google Scholar and hand searches yielded more than 900 studies.

More than 120 peer-reviewed articles on cultural competence models were identified. Eventually, 14 articles representing unique cultural competence models were retrieved. The most common components of the identified models were cognitive and behavioural components, with few addressing contextual elements. Cultural competence models were later used to guide the development of the CCCHP model.

The database and hand searches yielded 135 citations on cultural competence assessment instruments. For each instrument located, information on the instrument’s name, target population, number of items, dimensions and response scale was retrieved. Twenty-nine instruments were fully available, of which the vast majority were self-report forms (93%), originated from the United States (86%) and were developed for the field of nursing (21%), counselling (28%) or healthcare (28%). The majority focused on cultural skills, awareness, and knowledge, whereas some also measured one or more of the following: multicultural counselling relationship, cultural desire, cultural encounters, flexibility/openness, emotional resilience, and cultural sensitivity. Two researchers assessed the instruments separately, investigating 1) their purpose (designed to evaluate the cultural competence of individual HCPs), 2) relevance to healthcare practice in Germany, 3) psychometric properties, 4) conceptualisation of cultural competence, and 5) items used to represent the domains. The researchers judged the appropriateness of each instrument on a 3-point rating scale (not relevant to relevant). Disagreements on instrument appraisal were resolved by consensus. Six instruments were rated relevant, 14 were rated somewhat relevant and nine instruments were excluded as irrelevant.

Next, an expert survey and interviews with different HCPs were performed. This approach ensured that a broad range of perspectives was included to inform the development of a conceptual model for the CCCHP. Experts and HCPs were asked to 1) define, from their perspective, the cultural competence of HCPs, 2) identify the relevant components of cultural competence and 3) suggest items to capture the relevant components. In total, 23 experts (response rate: 18%) participated in an online survey. To explore the perspectives of practising HCPs, twelve semi-structured interviews were conducted with physicians, medical specialists, registered nurses, midwives and psychotherapists from healthcare settings in Hamburg, Germany. These HCPs were purposefully selected, as they represent the target group for the new instrument. Professionals were required to be working with culturally diverse patients. Of the 12 respondents, half had a migration background and half had participated in cultural competence training.

Data analyses were carried out separately for the expert survey and HCPs interviews using qualitative content analysis (QCA) [38]. After comparison of the themes in both the HCP and expert group, the researchers determined that the data exhibited common themes. Consequently, themes were combined to achieve a collective perspective. The results of this QCA were the basis for the development of the CCCHP’s cultural competence model.

Sixteen main categories of cultural competence derived from the QCA were grouped by the research team into five dimensions: attitudes, knowledge, awareness/self-reflection, motivation/emotion, and skills. These dimensions were further classified into three domains: cognitive, behavioural, and affective (S1 Fig, Step 1).

In Step 2, items were generated based on suggestions of the experts and HCPs as well as the 20 cultural competence instruments rated as relevant or somewhat relevant. Suggestions provided by experts and HCPs guided the selection of items from existing instruments. An initial item pool of 384 items was generated, based on the instruments (81%), items from experts (6%) and items from HCPs (13%). Only individual items were included from existing instruments; scales or subscales as a whole were not incorporated. The item pool was systematically reduced in four assessment rounds by three researchers. The remaining 115 items were translated, and language adaptations were made to suit usage in the German healthcare system.

It has been stressed that self-report instruments of cultural competence may be susceptible to social desirability [39]. To identify individual cultural competence (cc) items that are prone to elicit socially desirable responding, six items (SD markers) of the Social-Desirability Scale-17 (SDS-17) [40] were found to be appropriate in terms of content (face validity) and were modified (i.e. culturally adjusted) for inclusion in the CCCHP.

A 5-point Likert scale (from strongly agree to strongly disagree) response format with a ‘no answer possible’ option was selected for the 115 cc items and 6 SD items. This scale was chosen because it allows a neutral midpoint for respondents who may be genuinely undecided about their degree of agreement with an item. A large number of responses at the neutral midpoint of an item would indicate that the item needs to be rephrased to differentiate better.

In Step 3, the draft 115-item instrument and the 6 SD items were administered to a convenience sample [41] of researchers and psychology students (n = 13). The respondents judged how representative the individual items were of the construct content domain [42], evaluated item clarity, checked for redundancies and suggested revisions for item/instrument construction. This work yielded an 85-item instrument (plus 6 SD items) that was provided to experts for validation.

Step 4 served to establish face validity of the CCCHP’s content. An expert panel (n = 5, response rate: 33%) was used to evaluate whether the items measure what they were intended to measure [43]. Quantitatively, experts were asked to judge 1) the relevance of each item to measure the respective dimension [42], and 2) if the item was correctly classified to capture the respective dimension. Qualitatively, comments on the clarity and meaning of item construction and wording were also elicited, including suggestions for modifications and refinement [42, 44].

Step 5 yielded the 59-item self-report CCCHP, whose items are distributed across five dimensions. Five of these items originated from existing instruments. Six SD items were embedded to identify individual cc items that contained socially desirable content. Each item was measured with a 5-point Likert-type response scale from 1 = strongly agree to 5 = strongly disagree and a ‘no answer possible’ option. Sum scores are calculated for each subscale. In Step 6, two online surveys of the CCCHP (differing only in background information) were established for the groups of respondents in the psychometric survey. The CCCHP-59 as well as the 6 SD items were presented in a forced-choice response format. To examine usability, the online surveys were pre-tested by 12 psychology students.
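
To illustrate the scoring scheme described above, the following sketch computes subscale scores from a response matrix in Python. The item-to-subscale assignments and column names are hypothetical placeholders (the actual CCCHP groupings are given in S1/S2 Files), and ‘no answer possible’ responses are assumed to be stored as missing values.

```python
import pandas as pd

# Hypothetical item-to-subscale assignments; the real CCCHP groupings differ.
SUBSCALES = {
    "CC_MC": ["item01", "item02", "item03"],
    "CC_A":  ["item04", "item05", "item06"],
}

def score_subscales(responses: pd.DataFrame) -> pd.DataFrame:
    """Sum the 5-point Likert responses (1-5) of each subscale's items.

    'No answer possible' responses are assumed to be coded as NaN and are
    left out of the sum; min_count=1 keeps all-missing cases as NaN.
    """
    return pd.DataFrame({
        name: responses[items].sum(axis=1, min_count=1)
        for name, items in SUBSCALES.items()
    })

# Example: two respondents, the second skipped item05.
df = pd.DataFrame({
    "item01": [4, 5], "item02": [3, 4], "item03": [5, 5],
    "item04": [2, 3], "item05": [4, None], "item06": [3, 2],
})
print(score_subscales(df))
```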

Phase 2: Psychometric evaluation of the CCCHP

Study participants.

After minor revisions, the 59-item CCCHP was tested for its psychometric properties with two prospective HCP groups: medical students (MS; ≥5th semester and thus in the clinical part of their studies and practical year) and psychologists in advanced psychotherapeutic training (PiA). Both groups were selected because they constitute the later target group for the CCCHP and, at this stage of their professional development, already have patient contact and clinical experience. In addition to the CCCHP, socio-demographic (e.g. migration background [45]) and occupational/educational information was collected anonymously for the psychometric survey.

An email invitation explaining the purpose of the study was sent to student representatives and dean’s offices of Medical Universities throughout Germany (n = 34). To reach PiAs, requests were sent to different networks and to all 180 PiA training institutions in Germany. As an incentive, participants were offered the opportunity to register for a lottery to win one of ten €20 gift certificates.

Data analysis.

Analyses focused on testing 1) the suitability of the data set, 2) the instrument’s dimensionality (principal component analysis, PCA), 3) reliability, and 4) discriminating power (known-groups technique) [41], and were undertaken with the Statistical Package for the Social Sciences version 18.0 (SPSS® Inc., Chicago, IL, USA).

In Step 1, pre-analysis checks were executed to ensure the suitability of the data set for factor analysis [46]. The checks included determining the stability of the emerging factor structure, adequacy of sample size, item scaling, skewness and kurtosis of item distributions and the appropriateness of the correlation matrix. To evaluate the instrument’s practicability and acceptance, response rates, time to complete and feedback from respondents were analysed. In the statistical analyses, the ‘no answer possible’ option (forced-choice responses) was coded as missing. Items or respondents with ≥30% missing values were considered inadequate for inclusion and were thus eliminated from further analysis. Concerning each item’s distribution, the maximum acceptable proportion of floor and ceiling effects among items as well as subscales was <80% [47].
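
The pre-analysis checks named above (missing-data thresholds, floor/ceiling effects, item skewness and kurtosis) were run in SPSS; a minimal Python sketch of the same screening logic is given here, assuming responses are held in a pandas DataFrame with ‘no answer possible’ coded as missing. The thresholds follow the text; everything else is illustrative.

```python
import pandas as pd

def screen_items(responses: pd.DataFrame,
                 max_missing: float = 0.30,
                 max_same: float = 0.80) -> pd.DataFrame:
    """Per-item screening: missing-data rate, floor/ceiling effects, skewness, kurtosis."""
    report = pd.DataFrame({
        "missing_rate": responses.isna().mean(),
        # Share of respondents giving the single most frequent answer (floor/ceiling check).
        "modal_response_rate": responses.apply(
            lambda col: col.value_counts(normalize=True).max() if col.notna().any() else 1.0
        ),
        "skewness": responses.skew(),
        "kurtosis": responses.kurt(),
    })
    report["exclude"] = (report["missing_rate"] >= max_missing) | \
                        (report["modal_response_rate"] >= max_same)
    return report

def screen_respondents(responses: pd.DataFrame, max_missing: float = 0.30) -> pd.Series:
    """Flag respondents who omitted 30% or more of the items."""
    return responses.isna().mean(axis=1) >= max_missing
```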

In Step 2, PCA was performed on the CCCHP-59 (plus 6 SD items) with orthogonal rotation (varimax with Kaiser normalisation) to determine the dimensionality of the instrument. Prior to PCA, the suitability of the data for factor analysis was assessed by applying Bartlett’s test of sphericity [48] and the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy [49, 50]. Criteria for the extraction of CCCHP components as proposed by Kline [51] and others [52, 53] included: 1) eigenvalues >1, 2) Cattell’s [54] scree test, 3) interpretability of the solution, using a component loading cut-off of .39 and no cross-loadings greater than or equal to .40, and 4) percentage of explained variance (minimum of 5%) per component. Moreover, methodologists have recommended that at least three to five items represent each component [55]. In this study, the minimum number of items loading on each component was set to four. Items were excluded if they met one of the following criteria: 1) weak loadings (failing to load above .39 on any component), 2) loadings of .40 on more than one component, 3) ≥30% of responses missing or 4) ≥80% of item responses identical (floor/ceiling effect).
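
The original analyses were run in SPSS. For readers who want to reproduce the factorability checks and the Kaiser criterion, the sketch below implements Bartlett’s test of sphericity, the overall KMO measure and the eigenvalues of the item correlation matrix using only NumPy/SciPy; it assumes a complete-case respondents × items array and omits the varimax-rotated extraction itself, which is more conveniently obtained from a dedicated factor-analysis package.

```python
import numpy as np
from scipy.stats import chi2

def bartlett_sphericity(X: np.ndarray):
    """Bartlett's test: chi2 = -[(n-1) - (2p+5)/6] * ln|R|, with df = p(p-1)/2."""
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    statistic = -((n - 1) - (2 * p + 5) / 6.0) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return statistic, chi2.sf(statistic, df)

def kmo_overall(X: np.ndarray) -> float:
    """Kaiser-Meyer-Olkin measure of sampling adequacy (overall value)."""
    R = np.corrcoef(X, rowvar=False)
    inv_R = np.linalg.inv(R)
    # Partial correlations derived from the inverse correlation matrix.
    d = np.sqrt(np.outer(np.diag(inv_R), np.diag(inv_R)))
    partial = -inv_R / d
    np.fill_diagonal(R, 0.0)
    np.fill_diagonal(partial, 0.0)
    return (R ** 2).sum() / ((R ** 2).sum() + (partial ** 2).sum())

def correlation_eigenvalues(X: np.ndarray) -> np.ndarray:
    """Eigenvalues of the correlation matrix; eigenvalues >1 satisfy the Kaiser criterion."""
    R = np.corrcoef(X, rowvar=False)
    return np.sort(np.linalg.eigvalsh(R))[::-1]
```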

In Step 3, internal consistency reliability was assessed for the CCCHP using Cronbach’s α to address the homogeneity of items in a scale. The criterion for Cronbach’s α described by DeVellis [56] (α > .70) was applied for the interpretation of results. The SD items were included in the PCA 1) to evaluate the reliability and factorial validity of the reduced and modified SD scale and 2) to identify cc items with qualities of SD items.
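
Cronbach’s α follows directly from the item and total-score variances; a short sketch, assuming `items` is a complete-case respondents × items array for one subscale:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_score_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_score_variance)
```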

In Step 4, in addition to PCA, construct validity of the CCCHP was assessed by using the known-groups technique [41]. Three subgroups were predefined based on theoretically expected differences in cultural competence. On the assumption that having a migration background, frequent cross-cultural encounters and participating in cross-cultural competence training would lead to higher scores on the CCCHP subscales, mean scores were calculated. Higher mean scores indicate a higher motivation, more positive attitudes and emotions, greater frequency of using culturally competent skills, and higher levels of knowledge. Higher mean scores on the SD subscale indicate a higher tendency to provide socially desirable responses. The independent sample t-test was used to analyse differences between group means on each CCCHP subscale and the SD subscale [37]. The significance level was set at p < .05.
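
A sketch of the known-groups comparison, assuming subscale mean scores and a binary grouping variable (e.g. prior cross-cultural training: yes/no) are stored in a pandas DataFrame; the column names are placeholders, not the variables used in the study.

```python
import pandas as pd
from scipy import stats

def known_groups_ttest(data: pd.DataFrame, subscale: str, group: str) -> dict:
    """Independent-samples t-test comparing subscale means between two predefined groups."""
    g1 = data.loc[data[group] == 1, subscale].dropna()
    g0 = data.loc[data[group] == 0, subscale].dropna()
    t, p = stats.ttest_ind(g1, g0)  # classical t-test assuming equal variances
    return {"mean_group1": g1.mean(), "mean_group0": g0.mean(), "t": t, "p": p}

# Hypothetical usage: result = known_groups_ttest(df, subscale="CC_MC", group="training")
```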

Ethics Statement.

This study is part of the international research project on ‘Mental Health and Migration’ (www.segemi.de), approved by the Ethics Committee of the Hamburg Chamber of Psychotherapists, Germany. All participants received written information regarding the study according to the principles outlined in the Declaration of Helsinki. In the online psychometric survey, the written information contained a passage informing the participants that they declare their consent by completing the CCCHP-59.

Results

Study participants

A convenience sample of 409 medical students (MS) and 336 PiA participated. One-fifth (21.3%) of the MS and one-third (30.9%) of the PiA in this study had a migration background. A foreign nationality was recorded for 6.9% of the MS and 8.7% of the PiA. Almost all respondents (MS: 99.0% vs. PiA: 93.5%) reported having patient contact, although the length of contact differed significantly between the groups. Regarding cross-cultural competence training, 9.8% of the MS and 22% of the PiA reported having participated in training activities. There were significant differences between MS and PiA in demographic characteristics, including age, gender, migration background, stays abroad and participation in cultural competence training (p < .05). Demographic characteristics of the MS and PiA samples are shown in Table 1.

Table 1. Demographic characteristics of the medical students and PiA sample.

https://doi.org/10.1371/journal.pone.0144049.t001

In Step 1, the pre-analysis checks proposed by Ferguson and Cox [46] were applied to the CCCHP-59 (plus 6 SD items). The sample size was adequate, with an acceptable minimum ratio of participants to items (N/p ratio) of 11:1 [58, 59]. Furthermore, the ratio of participants to expected factors (N/m ratio) exceeded the acceptable minimum with 123:1. The minimum ratio of items to expected factors (p/m ratio) was achieved with a ratio of 10:1 [60].

All except two CCCHP items (#47, #57) showed an acceptance rate above 70%. Mean completion time for the total survey (including background information) was 16.5 minutes (SD: ± 5.5 minutes). On average, 61 of the 65 items (96.2%) were completed. For 51 (78.5%) of the 65 items, the percentage of missing data was lower than 5%. The initial data analysis led to the exclusion of two items (#47, #57) with unacceptable levels of missing data (≥30%) and of seven respondents who omitted more than 19 items (≥30%). Missing values were excluded pairwise in the PCA.

According to Gorsuch [61], exploratory factor analysis appears to be relatively robust against violations of normality, and no problems with excessive item skewness (absolute values above 3) or kurtosis (absolute values above 10) were found [62].

In Step 2, the inspection of the correlation matrix revealed the presence of coefficients ≥.30 and indicated the absence of multicollinearity. The KMO measure, at 0.888, exceeded the recommended value of 0.60 [50]. Bartlett’s test of sphericity [48] was statistically significant, supporting the adequacy of the data and sampling for factor analysis (χ² = 9227.795, df = 1953, p < .001). An initial unrotated PCA performed on the 57 CCCHP items and the SD items revealed the presence of 15 components with eigenvalues >1. Together these 15 components accounted for 54.4% of the total variance. On the basis of Cattell’s [54] scree test, an approximate solution of six components was indicated. The extraction of four, five, six, and seven components was considered. Both orthogonal and oblique rotations were conducted. Correlations between components in the oblique solution ranged from -.045 to -.367, with only 1 of the 15 correlations greater than .30 in absolute value. Tabachnick and Fidell [53] suggested a cut-off of .30 for adopting an oblique rotation. Given that most of the correlations between components were relatively small, the orthogonal (varimax) rotation was deemed appropriate, as it provided the clearest and most interpretable solution.

In line with an iterative process described in detail previously [52, 63], items with cross-loadings (of .40 on more than one component) and weak loadings (failing to load above .39 on any component) were excluded one at a time from the initial unrotated and subsequent rotated PCA. Following this item reduction process, a re-run of the PCA using orthogonal (varimax) rotation led to a six-component solution. In order to reach purity of subscales and meaningfulness of content, 31 items were excluded. Thus, a final 32-item solution, accounting for 50.2% of the variance, was identified. Over 50% of the items that failed to meet the eligibility criteria for the final 32-item solution (CCCHP-27 plus 5-item SD subscale) reflected cognitive aspects (knowledge and awareness) of cultural competence. The six-component orthogonal solution was judged to yield the most interpretable solution (Table 2). None of the 27 included cc items was literally translated from existing questionnaires.

Table 2. Principal component analysis with varimax rotation.

Final six-component solution with Cronbach’s α of each component.

https://doi.org/10.1371/journal.pone.0144049.t002

Interpretation of components

The rationale used for naming the six components was partly guided by the recommendation of Ferguson and Cox [46] and Cattell [60] to use marker items to drive the process of labelling each component that emerged from the analysis. Furthermore, the hypothesized components were defined and named before the analysis and therefore did not rely on post hoc explanations (Table 3).

Table 3. Components of the CCCHP-27 with corresponding items (English).

https://doi.org/10.1371/journal.pone.0144049.t003

Component 1 (Cross-Cultural Motivation/Curiosity (CC-MC)).

The first component (Cronbach α .84, M = 4.14, SD = 0.560) included nine items (#42, #12, #64, #1, #17, #29, #10, #38, #58) with moderately high loadings between .435 and .725, accounting for 12.3% of the total variance. The items referred to HCPs’ motivation to provide culturally responsive care, their curiosity to engage in cross-cultural encounters and a wish to enrich their understanding of working with culturally different populations. Seven of these items were originally hypothesised to reflect professionals’ motivation, while two items represented HCPs’ openness towards cultural diversity.

Component 2 (Cross-Cultural Attitudes (CC-A)).

Four items (#8, #43, #60, #21) with relatively high and specific loadings between .667 and .765 loaded on component 2 (Cronbach α .780, M = 3.11, SD = 0.760), accounting for 8.7% of the total variance. All four items were originally hypothesised to represent attitudes such as tolerance, valuing and respecting differences, and having a positive orientation towards other cultures and cultural diversity.

Component 3 (Cross-Cultural Skills (CC-S)).

The third component (Cronbach α .684, M = 3.90, SD = 0.566) included five items (#50, #5, #44, #51, #53) with loadings between .493 and .695, accounting for 8.5% of the total variance. This component considered HCPs’ adaptability in meeting the (cultural) needs of their patients, professionals’ communicative competence and their willingness to make time for their patients. Four items were originally hypothesised to reflect culturally competent skills, whereas one item represented knowledge of how to incorporate patients’ concepts of health and illness into the course of treatment.

Component 5 (Cross-Cultural Emotions/Empathy (CC-EE)).

The fifth component (Cronbach α .694, M = 3.50, SD = 0.648) consisted of five items (#55, #18, #63, #26, #48) with loadings ranging from .456 to .694, accounting for 6.9% of the total variance. The items referred to feelings and emotional reactions towards diversity, to being comfortable with difficulties arising in cross-cultural encounters and to being multiculturally empathic. Two of these items were originally hypothesised to reflect emotions (e.g. feeling comfortable and xenophilous), whereas three items reflected skills (e.g. empathy, sensitivity and patience).

Component 6 (Cross-Cultural Knowledge/Awareness (CC-KA)).

Four items (#30, #9, #11, #25) loaded between .405 and .713 on the sixth component (Cronbach α .543, M = 4.26, SD = 0.561) and accounted for 5.6% of the total variance. The items referred to cultural and migration-specific knowledge, to an understanding of concepts of illness and health, and to an awareness of one’s own perceptions and values. Three of these items were originally assumed to represent knowledge and one item was associated with professionals’ self-awareness.

Component 4 (Social Desirability (SD)).

The fourth component (Cronbach α .714, M = 3.96, SD = 0.554) represented the culturally adjusted social desirability scale, as four of the six SD items (#61, #36, #28, #31) were included. In total, five items with moderately high loadings between .438 and .686 represented the SD scale and accounted for 8.3% of the total variance. One other item (#19) loaded on the SD scale. The item was originally hypothesized to reflect cross-cultural attitudes.

Descriptive statistics for each subscale were derived. Acceptance, item means and standard deviations as well as skewness and kurtosis for items included in the final scale are shown in Table 4. The item analyses of the final subscales showed positive endorsement of the majority of items and the subscales were positively skewed. Positive endorsement rates were highest for the motivation subscale, especially for two items (#42, #12), and the knowledge/awareness subscale (#30, #25). No items showed floor effects in any of the subscales.

In Step 3, Cronbach’s α was calculated for the total scale and for each subscale separately. Cronbach’s α for the overall scale was .869, and subscale α values varied between .543 and .836. The internal consistency coefficient for the CC-KA subscale was low (α = .543).

Six SD items were included to identify CCCHP items with socially desirable content. The mean response (range 1–5) to the SD items was 3.82 (SD = 0.537). A higher mean response reflected a higher tendency to provide answers in a socially desirable manner. Spearman correlation coefficients (rs) between individual CCCHP items and the SD items, as well as correlations between the CCCHP subscales and the a priori defined 6-item SD scale, were examined. Correlations of the attitudes and knowledge/awareness subscales with the SD items and the SD subscale were trivial in magnitude, with the majority being statistically non-significant. However, there was a significant relationship between four CCCHP items (#19, #48, #50, #63) and the six SD items. The absolute values of correlations with these items ranged from .12 to .36, and the average correlation was .25 (all ps < .001). The absolute values of correlations between the 6-item SD scale (Cronbach α .722) and the CCCHP subscales ranged from .11 to .45, and the average correlation was .25. The SD scale’s highest correlation was with the CC-EE subscale (rs = .45, p < .001).

These results indicated that although the CC-A and CC-KA subscales appeared to be unrelated to social desirability, the CC-EE, CC-MC and CC-S subscales were to some degree related to a tendency to respond in a socially desirable manner.

In Step 4, the known-groups technique was carried out to provide further evidence of construct validity. Mean subscale scores (range 1–5) of the CCCHP items were calculated, with higher scores indicating higher cultural competence and a higher score on the SD subscale indicating a higher tendency to provide socially desirable responses. Subscale mean scores of respondents with theoretically expected higher cultural competence were, in over 65% of the cases, statistically significantly higher than those of respondents with theoretically expected lower cultural competence (p < .05) (Table 5). Respondents with a migration background had statistically significantly higher mean scores on three of the five cc subscales than respondents without a migration background. The mean scores of respondents with frequent cross-cultural encounters through family, neighbourhood, friends or study/work were significantly lower on two of the five cc subscales than those of respondents with little cross-cultural encounter in these areas. As hypothesized, mean scores of respondents who reported receiving previous cross-cultural competence training were significantly higher on four of the five cc subscales than those of respondents who reported no training.

To summarize, for the CC-MC subscale all group comparisons were statistically significantly different, whereas for the CC-EE and CC-A subscales only two of the three comparisons were statistically significantly different. In contrast, only one group comparison in the CC-S and CC-KA subscales was statistically significantly different. Apparently, these two subscales have less discriminatory power for group differences.

Discussion

One of the major challenges in current research on cultural competence is the limited number of validated conceptual models and assessment instruments suitable for the German healthcare context. The CCCHP, developed for the assessment of cultural competence in HCPs, was shown to have good content and construct validity and modest reliability.

Content validity is of utmost importance because it ensures congruence between the research objective and the data-gathering instrument [37, 65]. The evidence supporting the content validity of the CCCHP was based, firstly, on a literature review of cultural competence models and assessment instruments and, secondly, on an expert survey and interviews with HCPs. Sixteen core components of cultural competence were identified and grouped into a five-dimensional model (attitudes, knowledge, awareness/self-reflection, motivation/emotion, and skills).

Attitudes, knowledge and skills are also part of the tripartite model proposed by Sue et al. [18] and are emphasised in most assessment instruments [32]. Cultural desire and awareness have been acknowledged by Campinha-Bacote [12] and others [10, 66] as necessary for the assessment of cultural competence. However, emotions and self-reflection, as part of the CCCHP’s conceptual model, appear to tap relatively unique dimensions of cultural competence. HCPs are encouraged to explore their own thoughts and emotions and how these influence their work with patients. Collins and Pieterse [67] highlighted unmasking the subconscious, namely the automatic responses that guide and frame individuals’ feelings and cultural interactions, as a key ingredient of multicultural competency. Furthermore, active engagement in self-reflective processes has been suggested to have great potential to increase counsellors’ multicultural competence [68]. Accordingly, the conceptualisation of the CCCHP extends the three-dimensional model underlying the most frequently cited instruments.

Additional strengths of this study include the explicit incorporation of the perspectives of HCPs and experts from the beginning of the development process, as well as the use of various sources to generate items for inclusion in the initial item pool [69]. This approach ensured that the CCCHP covered aspects that are considered important to professionals’ everyday practice, namely providing appropriate care to diverse patient populations. The need to progress from a theorists’ conceptualisation of cultural competence to a more practice-orientated conceptualisation has also been highlighted in the literature [70]. Several phases of item reduction ensured that only the most relevant items were included in the final instrument. The relevance of the feedback obtained from HCPs and experts in formulating and selecting the content of items is illustrated by the final set of items: while 81% of the initial item pool came from existing instruments, the 59-item version contained only five such items, and in the final 27-item version no item originated from existing instruments.

Construct validity is the extent to which an instrument measures the theoretical construct it is intended to measure [71]. Construct validity of the CCCHP-59 was assessed using PCA and the known-groups technique. PCA on the data set led to a 32-item (CCCHP-27 plus 5-item SD subscale) six-component solution with 50.24% of the total variance explained.

The results of the PCA indicate that the CCCHP is a multidimensional instrument composed of five dimensions of HCPs’ cultural competence: Cross-Cultural Motivation/Curiosity, Cross-Cultural Attitudes, Cross-Cultural Skills, Cross-Cultural Knowledge/Awareness and Cross-Cultural Emotions/Empathy. Hence, the CCCHP provides an extension of the predominant conceptualisation of cultural competence.

‘Cross-Cultural Motivation/Curiosity’ was a clear and strong component as indicated by the moderate or high loadings of items on this component. At present, few existing measures of cultural competence include this relevant component as an independent dimension [72].

The second component, ‘Cross-Cultural Attitudes’, is in line with Mason’s [73] and Ponterotto et al.’s [74] endeavours to assess attitudes towards ethnic diversity and multiculturalism. Furthermore, Sue and Zane [75] pointed out that differences in cultural attitudes and beliefs between the professional and the patient affect the process and efficacy of therapy.

The third component ‘Cross-Cultural Skills’ relates to the HCPs’ ability to work with culturally diverse patients and also receives emphasis in the existing literature [14, 26, 29].

The ‘Cross-Cultural Emotions/Empathy’ component is supported by Gerrish, Husband and Mackenzie [76] who recognized emotions, such as ‘an emotional openness to others’ or ‘inner-directed emotions’ as the affective dimension of intercultural communicative competence.

The knowledge and awareness dimensions were originally hypothesized to yield different subscales. However, ‘Cross-Cultural Knowledge/Awareness’ appears to focus on two interrelated areas of HCPs’ cultural competence. This finding is consistent with previous research that suggested knowledge and awareness as inseparable cognitive components of cultural competence [77]. ‘CC-KA’ is consistent with the literature that focuses on professionals’ self-awareness and knowledge about cultural diversity and its implications for practice [14, 26].

A ‘Social Desirability’ subscale was identified. In order to check for the potential influence of social desirability, a number of researchers suggest including scales that are context specific and can be used to statistically control for this bias [19, 39]. One item, ‘I do not differentiate between the patients …’, which was originally hypothesised to reflect cross-cultural attitudes, loaded on this component. This item may, however, reflect dependence on the acceptance and approval of others.

With regard to construct validity, the domains defined a priori in the conceptual model were similar to those indicated by the PCA. This similarity reinforces the validity of the chosen procedure.

To support construct validity, the known-groups technique was applied to test the CCCHP’s ability to discriminate between the predefined groups [41]. The discriminating power of the CCCHP was supported by statistically significant mean differences between the three predefined groups (p < 0.05). Similar to findings by Doorenbos et al. [77], respondents who had previously participated in cross-cultural competence training presented statistically significantly higher mean scores than those who had not had this kind of training. Nevertheless, the results of the known-groups technique need to be interpreted in light of the potential influence of social desirability.

In the current study, Cronbach’s α was used to estimate the reliabilities of the total scale and of each dimension. The overall scale performed well, with a Cronbach’s α reliability coefficient of .869. However, the reliability coefficients of three CCCHP subscales (.543, .684, .694) were below .70. DeVellis [56] suggested an α of .70 as acceptable for a new instrument. Specifically, the coefficient for the ‘Cross-Cultural Knowledge/Awareness’ subscale was substantially below the recommended cut-off of .70. To measure ‘CC-KA’ more reliably, a larger pool of suitable items will have to be developed and tested. Further research with practising HCPs will be necessary to enhance the reliability of these scales.

The results of the preliminary analysis on the psychometric properties of the new instrument provide satisfactory evidence with respect to acceptability and feasibility. The higher non-response rate in the skills dimension (range: 0.5–11.2%) could reflect respondents’ lack of clinical experience. In general, however, the overall small proportion of missing values confirmed the acceptability of the CCCHP.

Mean completion time for the total survey was 16.5 minutes (SD ± 5.5 minutes). Nevertheless, feasibility with regard to completion time will be improved due to the reduced number of items in the final version of the instrument. Positive endorsement rates were highest for the motivation/curiosity and knowledge/awareness subscale. In this respect, more differentiated response options would be useful to improve the CCCHP subscales.

Limitations

This study has several strengths, but also limitations that need to be considered when interpreting the results.

While this study enhances the existing understanding of cultural competence by incorporating aspects considered important by practising HCPs in addition to experts, one limitation of this study is that it does not include the patients’ perspective [78, 79]. Despite a low response rate of HCPs and experts, multiple perspectives could be incorporated to inform the development of the CCCHP’s conceptual model and item pool. Most information derived from HCPs and experts was related to their understanding of cultural competence and its components. Both groups provided suggestions rather than specific items to capture the different dimensions of cultural competence. However, suggestions of HCPs and experts guided the selection of items from existing instruments.

Data were obtained from convenience samples of MS and PiA rather than from randomly selected practising HCPs. Although the study participants represent roughly 1% and 3% of the national MS and PiA populations, with characteristics fairly representative of the national samples [80, 81], the generalisability of the findings to other HCPs remains uncertain. It would also be important to conduct confirmatory factor analyses (CFA) to determine whether the component structure from the present PCA holds. Furthermore, group differences with regard to demographic characteristics, educational training, and scope of practice need to be considered. Both groups are taught communication skills, whereas cultural competence training is not a firm component of the curricula. Social and emotional skills development receives less attention in the medical curriculum than in psychotherapeutic training. On the other hand, practical skills are less standard in the psychotherapeutic context. Although the questionnaire was developed in a German healthcare setting, it is hypothesised that the CCCHP has the same informative value in other European countries. However, when used in other countries, the instrument should be re-examined, preferably not only by inspecting its psychometric properties but also its validity, e.g. by asking HCPs’ and experts’ opinions about its informative value.

An inherent limitation of the CCCHP and other self-report measures is the uncertainty to what extent these measures actually reflect professionals’ cultural competencies in practice, rather than professionals’ intentions and self-efficacy [39]. The additional use of independent instruments and qualitative methods (e.g. direct observation, objective structured clinical examination) that evaluate professionals’ cultural competence skills would be valuable.

Convergent and discriminant validity were not assessed in the course of this study. There was no validated instrument in German available that assessed a concept close to cultural competence in this particular context. Besides, the completion time for the 59-item online survey was 16 minutes and adding another instrument would have possibly reduced the sample size. However, further research is necessary to evaluate the convergent and discriminant validity of the CCCHP.

Cultural competence assessment relies to a great extent on self-report instruments. Self-report instruments are often susceptible to social desirability bias. Sodowsky [19] recommended administering cultural competence measures with a multicultural social desirability scale that has multicultural content and face validity. In the present study, the six SD items were adopted from a reliable and validated instrument (SDS-17) and modified to enhance their suitability for the target sample and context. In the PCA, only four of the six SD items formed the SD scale. These results indicate that not all items may be suitable for the sample. Hence, additional content and construct validity assessment of the modified SD scale is suggested. Moreover, the significant correlations between the a priori defined SD scale and the CC-EE subscale, an important component of cultural competence, warrant greater attention. The semantic content of these items might activate socially desirable responding. In a next step, CFA with the SD scale treated as an external criterion is recommended to assess the extent to which SD presents a threat to a valid interpretation of the cc subscale scores.

Implications for practice

Given the current evidence of health and healthcare disparities within Europe [1], there is an increasing need for HCPs to provide culturally competent care to diverse patient populations. The CCCHP, after further validation, can assist HCPs to reflect on their behaviour, to reassess their knowledge and to gain a stronger awareness of their own strengths and weaknesses. Furthermore, periodic and systematic self-assessment can encourage professional development and continuous training, and thus contribute to improving the quality of patient care. The value of the CCCHP lies less in its ability to predict culturally competent behaviour and more in its potential to identify starting points for further cultural competence development.

After responsiveness to change has been established, the CCCHP could serve as an efficient method to assess the effectiveness of training programs that are designed to enhance professionals’ cultural competence.

See S1 File for the original German version and S2 File for the English version of the CCCHP-27.

Conclusions

The present study represents an effort to develop and evaluate an instrument to assess the cultural competence of HCPs. The results of this study suggest that the CCCHP is a multidimensional instrument composed of five dimensions of HCPs’ cultural competence. Further research is needed to confirm these findings in a random sample of practising HCPs.

The currently predominant conceptualisation of cultural competence refers to professionals’ attitudes, knowledge and skills in working with culturally diverse populations [32]. Moreover, the majority of cultural competence models have been developed in the United States. Qureshi and Collazos [21] emphasised that existing models need to be further defined, adapted, and researched for an effective application in the European context, where greater attention needs to be paid to immigration and cultural differences. As the CCCHP was developed in a German context, it can be the foundation to close this gap.

The development of the conceptual model for the CCCHP revealed Cross-Cultural Motivation/Curiosity and Cross-Cultural Emotions/Empathy as important components of cultural competence. With regard to the widespread tripartite model, an extended understanding of how professionals’ affective states may interact with those of the patients in the context of cross-cultural encounters seems helpful. The CCCHP can be used by HCPs themselves, as well as by managers and trainers.

As shown by the results attained in this study, the CCCHP can serve as a tool as well as a foundation for future research in the area of cultural competence assessment for HCPs.

Supporting Information

S1 Fig. Overview of the study procedures for the development and psychometric evaluation of the CCCHP.

https://doi.org/10.1371/journal.pone.0144049.s001

(PDF)

S1 File. Fragebogen zur Erhebung Interkultureller Kompetenz in der Gesundheitsversorgung (CCCHP-27) [Cross-Cultural Competence instrument for Healthcare Professionals (CCCHP -27)] (original German version).

https://doi.org/10.1371/journal.pone.0144049.s002

(PDF)

S2 File. Cross-Cultural Competence instrument for Healthcare Professionals (CCCHP-27) (English version).

https://doi.org/10.1371/journal.pone.0144049.s003

(PDF)

Acknowledgments

The authors would like to thank R. Risch, D. Odening, C. Hannig, Z. Tarim, G. Suárez-Serrano, B. Tingir, A. Höcker, N. Löbel, L. Kriston, M. Bernhard, S. Wender, and K. Appun for their valuable support and contribution. The authors express their deep appreciation to the 23 experts participating in the expert survey, the 12 HCPs, the 5 expert reviewers, the 13 student reviewers, the 336 PiAs and 409 medical students who have given their time and support for the development of this instrument. Finally, many thanks go to the medical faculties and PiA training institutions whose support and endorsement helped in the favourable response to this study.

Author Contributions

Conceived and designed the experiments: GB MM RAK DD HS. Performed the experiments: GB MM. Analyzed the data: GB AW MM. Contributed reagents/materials/analysis tools: GB MM RAK AW DD HS. Wrote the paper: GB. Critical revisions: RAK MM AW DD HS. Data collection: GB. Interpretation of results: GB AW MM. Reviewed and approved the final manuscript: GB MM RAK AW DD HS.

References

  1. 1. WHO Regional Office for Europe. How health systems can address health inequities linked to migration and ethnicity. Copenhagen: WHO Regional Office for Europe; 2010.
  2. 2. Yeo S. Language barriers and access to care. Annu Rev Nurs Res. 2004;22: 59–73. pmid:15368768
  3. 3. Canto JG, Allison JJ, Kiefe CI, Fincher C, Farmer R, Sekar P, et al. Relation of race and sex to the use of reperfusion therapy in Medicare beneficiaries with acute myocardial infarction. N Engl J Med. 2000;342(15): 1094–1100. pmid:10760310
  4. 4. Abreu JM. Conscious and Nonconscious African American Stereotypes: Impact on First Impression and Diagnostic Ratings by Therapists. J Consult Clin Psychol. 1999;67(3): 387–393. pmid:10369059
  5. 5. Helms JE, Cook DA. Using race and culture in counseling and psychotherapy: Theory and process. 1st ed. Needham Heights, MA: Allyn & Bacon; 1999.
  6. 6. Bennegadi R. Cultural Competence and Training in Mental Health Practice in Europe: Strategies to Implement Competence and Empower Practitioners. International Organization for Migration (IOM). Background Paper. Paris: Teaching and Research Department, Minkowska Centre. 2009. Available at: http://www.migrant-health-europe.org/files/Mental%20Health%20Practice_Background%20Paper(1).pdf. Accessed December 16, 2014.
  7. 7. Beach MC, Price EG, Gary TL, Robinson KA, Gozu A, Palacio A, et al. Cultural Competence: A Systematic Review of Health Care Provider Educational Interventions. Med Care. 2005;43(4): 356–373. pmid:15778639
  8. 8. Gozu A, Beach MC, Price EG, Gary TL, Robinson K, Palacio A, et al. Self-administered instruments to measure cultural competence of health professionals: A systematic review. Teach Learn Med. 2007;19(2): 180–190. pmid:17564547
  9. 9. Shen Z. Cultural Competence Models and Cultural Competence Assessment Instruments in Nursing: A Literature Review. J Transcult Nurs. 2014 May 9.
  10. 10. Deardorff DK. Identification and Assessment of Intercultural Competence as a Student Outcome of Internationalization. J Stud Int Educ. 2006;10(3): 241–266.
  11. 11. Geiger HJ. Racial stereotyping and medicine: the need for cultural competence. CMAJ. 2001;164(12): 1699–1700. pmid:11450212
  12. 12. Campinha-Bacote J. The Process of Cultural Competence in the Delivery of Healthcare Services: A Model of Care. J Transcult Nurs. 2002;13(3): 181–184. pmid:12113146
  13. 13. Doorenbos AZ, Schim SM. Cultural competence in hospice. Am J Hosp Palliat Care. 2004;21(1): 28–32. pmid:14748520
  14. 14. Sue DW, Arredondo P, McDavis RJ. Multicultural Counseling Competencies and Standards: A Call to the Profession. J Couns Dev. 1992;70: 477–486.
  15. 15. Lum D. A framework for cultural competence. In Lum D. (Ed.) Culturally competent practice: A framework for understanding diverse groups and justice issues. 4th ed. Belmont, CA: Brooks/Cole; 2011. pp. 123–135.
  16. Black RM, Wells SA. Culture and Occupation: A Model of Empowerment in Occupational Therapy. Bethesda, MD: American Occupational Therapy Association; 2007.
  17. Stone JH. Culture and disability: Providing culturally competent services. Thousand Oaks, CA: Sage Publications; 2005.
  18. Sue DW, Bernier JE, Durran A, Feinberg L, Pedersen P, Smith EJ, et al. Position paper: Cross-cultural counseling competencies. Couns Psychol. 1982;10(2): 45–52.
  19. Sodowsky GR. The Multicultural Counseling Inventory: Psychometric properties and some uses in counseling training. In: Sodowsky GR, Impara JC, editors. Multicultural assessment in counseling and clinical psychology. Lincoln, NE: Buros Institute of Mental Measurements; 1996. pp. 283–324.
  20. Hays DG. Assessing Multicultural Competence in Counselor Trainees: A Review of Instrumentation and Future Directions. J Couns Dev. 2008;86: 95–101.
  21. Qureshi A, Collazos F. Cultural competence in the mental health treatment of immigrant and ethnic minority clients. Diversity in Health & Social Care. 2005;2(4): 307–317.
  22. Jirwe M, Gerrish K, Emami A. The theoretical framework of cultural competence. J Multicult Nurs Health. 2006;12(3): 6–16.
  23. Jirwe M, Gerrish K, Keeney S, Emami A. Identifying the core components of cultural competence: findings from a Delphi study. J Clin Nurs. 2009;18(18): 2622–2634. pmid:19538568
  24. Boyle DP, Springer A. Toward a Cultural Competence Measure for Social Work with Specific Populations. J Ethn Cult Divers Soc Work. 2001;9(3): 53–71.
  25. Geron SM. Cultural Competency: How Is It Measured? Does It Make a Difference? Generations. 2002;26(3): 39–45.
  26. Sodowsky GR, Taffe RC, Gutkin TB, Wise SL. Development of the Multicultural Counseling Inventory: A self-report measure of multicultural competencies. J Couns Psychol. 1994;41(2): 137–148.
  27. LaFromboise TD, Coleman HL, Hernandez A. Development and factor structure of the Cross-Cultural Counseling Inventory-Revised. Prof Psychol Res Pr. 1991;22(5): 380–388.
  28. D'Andrea M, Daniels J, Heck R. Evaluating the Impact of Multicultural Counseling Training. J Couns Dev. 1991;70(1): 143–150.
  29. Kim BSK, Cartwright BY, Asay PA, D'Andrea MJ. A Revision of the Multicultural Awareness, Knowledge, and Skills Survey-Counselor Edition. Meas Eval Couns Dev. 2003;36(3): 161–180.
  30. Ponterotto JG, Alexander CM. Assessing the multicultural competence of counselors and clinicians. In: Suzuki LA, Meller PJ, Ponterotto JG, editors. Handbook of multicultural assessment: Clinical, psychological, and educational applications. San Francisco: Jossey-Bass; 1996. pp. 651–672.
  31. Ponterotto JG, Gretchen D, Utsey SO, Rieger BP, Austin R. A revision of the Multicultural Counseling Awareness Scale. J Multicult Couns Devel. 2002;30(3): 153–181.
  32. Constantine MG, Ladany N. New visions for defining and assessing multicultural counseling competence. In: Ponterotto JG, Casas JM, Suzuki LA, Alexander CM, editors. Handbook of multicultural counseling. 2nd ed. Thousand Oaks, CA: Sage; 2001. pp. 482–498.
  33. Dunn TW, Smith TB, Montoya JA. Multicultural competency instrumentation: A review and analysis of reliability generalization. J Couns Dev. 2006;84(4): 471–482.
  34. Clark LA, Watson D. Constructing Validity: Basic Issues in Objective Scale Development. Psychol Assess. 1995;7(3): 309–319.
  35. Lynn MR. Determination and Quantification of Content Validity. Nurs Res. 1986;35(6): 382–385. pmid:3640358
  36. Worthington RL, Whittaker TA. Scale Development Research: A Content Analysis and Recommendations for Best Practices. Couns Psychol. 2006;34(6): 806–838.
  37. Liu M, Kunaiktikul W, Senaratana W, Tonmukayakul O, Eriksen L. Development of competency inventory for registered nurses in the People's Republic of China: scale development. Int J Nurs Stud. 2007;44(5): 805–813. pmid:16519890
  38. Mayring P. Qualitative Inhaltsanalyse: Grundlagen und Techniken [Qualitative content analysis: Fundamentals and techniques]. 10th ed. Weinheim: Beltz Verlag; 2008.
  39. Constantine MG, Ladany N. Self-Report Multicultural Counseling Competence Scales: Their Relation to Social Desirability Attitudes and Multicultural Case Conceptualization Ability. J Couns Psychol. 2000;47(2): 155–164.
  40. Stöber J. The Social Desirability Scale-17 (SDS-17): Convergent Validity, Discriminant Validity, and Relationship with Age. Eur J Psychol Assess. 2001;17(3): 222–232.
  41. Polit DF, Beck CT. Nursing Research: Principles and Methods. 7th ed. Philadelphia, PA: Lippincott Williams & Wilkins; 2004.
  42. Berk RA. Importance of expert judgment in content-related validity evidence. West J Nurs Res. 1990;12(5): 659–671. pmid:2238643
  43. Goodwin LD. Changing conceptions of measurement validity: An update on the new standards. J Nurs Educ. 2002;41(3): 100–106. pmid:11939227
  44. Grant JS, Davis LL. Selection and use of content experts for instrument development. Res Nurs Health. 1997;20(3): 269–274. pmid:9179180
  45. Razum O, Zeeb H, Meesmann U, Schenk L, Bredehorst M, Brzoska P, et al. Migration und Gesundheit [Migration and Health]. Schwerpunktbericht der Gesundheitsberichterstattung des Bundes. Berlin: Robert Koch-Institut; 2008. Available at: http://edoc.rki.de/documents/rki_fv/ren4T3cctjHcA/PDF/253bKE5YVJxo_28.pdf. Accessed February 9, 2011.
  46. Ferguson E, Cox T. Exploratory Factor Analysis: A Users' Guide. Int J Select Assess. 1993;1(2): 84–94.
  47. Lamping DL, Schroter S, Marquis P, Marrel A, Duprat-Lomon I, Sagnier P-P. The Community-Acquired Pneumonia Symptom Questionnaire: A New, Patient-Based Outcome Measure To Evaluate Symptoms in Patients With Community-Acquired Pneumonia. Chest. 2002;122(3): 920–929. pmid:12226033
  48. Bartlett MS. A Note on the Multiplying Factors for Various χ² Approximations. J R Stat Soc Series B Stat Methodol. 1954;16(2): 296–298.
  49. Kaiser HF. A second generation little jiffy. Psychometrika. 1970;35(4): 401–415.
  50. Kaiser HF. An index of factorial simplicity. Psychometrika. 1974;39(1): 31–36.
  51. Kline P. An Easy Guide to Factor Analysis. London: Routledge; 1994.
  52. Agius RM, Blenkin H, Deary IJ, Zealley HE, Wood RA. Survey of perceived stress and work demands of consultant doctors. Occup Environ Med. 1996;53(4): 217–224. pmid:8664957
  53. Tabachnick BG, Fidell LS. Using Multivariate Statistics. 5th ed. Boston, MA: Allyn and Bacon/Pearson Education; 2007.
  54. Cattell RB. The scree test for the number of factors. Multivariate Behav Res. 1966;1: 245–276.
  55. Fabrigar LR, Wegener DT, MacCallum RC, Strahan EJ. Evaluating the Use of Exploratory Factor Analysis in Psychological Research. Psychol Methods. 1999;4(3): 272–299.
  56. DeVellis RF. Scale Development: Theory and Applications. 2nd ed. Thousand Oaks, CA: Sage Publications; 2003.
  57. Schenk L, Ellert U, Neuhauser H. Children and adolescents in Germany with a migration background. Methodical aspects in the German Health Interview and Examination Survey for Children and Adolescents (KiGGS) [in German]. Bundesgesundheitsblatt, Gesundheitsforschung, Gesundheitsschutz. 2007;50(5–6): 590–599. pmid:17514443
  58. Kline P. A Handbook of Test Construction: Introduction to Psychometric Design. London: Methuen & Co.; 1986.
  59. Nunnally JC. Psychometric Theory. 2nd ed. New York, NY: McGraw-Hill; 1978.
  60. Cattell RB. The Scientific Use of Factor Analysis in the Behavioral and Life Sciences. New York: Plenum Press; 1978.
  61. Gorsuch RL. Factor Analysis. 2nd ed. Hillsdale, NJ: Lawrence Erlbaum Associates; 1983.
  62. Kline RB. Principles and practice of structural equation modeling. 3rd ed. New York, NY: Guilford Press; 2011.
  63. Jones MC, Johnston DW. The derivation of a brief Student Nurse Stress Index. Work Stress. 1999;13(2): 162–181.
  64. Harkness JA. Comparative Survey Research: Goals and Challenges. In: de Leeuw ED, Hox JJ, Dillman DA, editors. International Handbook of Survey Research Methodology. New York: Lawrence Erlbaum Associates; 2008. pp. 56–77.
  65. Burns N, Grove SK. The Practice of Nursing Research: Conduct, Critique, and Utilization. 5th ed. St. Louis, Missouri: Elsevier Saunders; 2005.
  66. Schim SM, Doorenbos A, Benkert R, Miller J. Culturally Congruent Care: Putting the Puzzle Together. J Transcult Nurs. 2007;18(2): 103–110. pmid:17416711
  67. Collins NM, Pieterse AL. Critical Incident Analysis Based Training: An Approach for Developing Active Racial/Cultural Awareness. J Couns Dev. 2007;85(1): 14–23.
  68. Byars-Winston AM, Fouad NA. Metacognition and Multicultural Competence: Expanding the Culturally Appropriate Career Counseling Model. Career Dev Q. 2006;54(3): 187–201.
  69. Priest J, McColl E, Thomas L, Bond S. Developing and refining a new measurement tool. Nurse Res. 1995;2(4): 69–81.
  70. Sue S. Cultural competency: From philosophy to research and practice. J Community Psychol. 2006;34(2): 237–245.
  71. Cronbach LJ, Meehl PE. Construct validity in psychological tests. Psychol Bull. 1955;52(4): 281–302. pmid:13245896
  72. Campinha-Bacote J. The process of cultural competence in the delivery of healthcare services: A culturally competent model of care. Cincinnati, OH: Transcultural C.A.R.E.; 2003.
  73. Mason JL. Cultural Competence Self-Assessment Questionnaire: A Manual for Users. Portland, OR: Portland State University, Research and Training Center on Family Support and Children's Mental Health; 1995.
  74. Ponterotto JG, Burkard A, Rieger BP, Grieger I, D'Onofrio A, Dubuisson A, et al. Development and Initial Validation of the Quick Discrimination Index (QDI). Educ Psychol Meas. 1995;55(6): 1016–1031.
  75. Sue S, Zane N. The role of culture and cultural techniques in psychotherapy: A critique and reformulation. Am Psychol. 1987;42(1): 37–45. pmid:3565913
  76. Gerrish K, Husband C, Mackenzie J. Nursing for a Multi-Ethnic Society. Buckingham, UK: Open University Press; 1996.
  77. Doorenbos AZ, Schim SM, Benkert R, Borse NN. Psychometric evaluation of the cultural competence assessment instrument among healthcare providers. Nurs Res. 2005;54(5): 324–331. pmid:16224318
  78. Fuertes J, Bartolomeo M, Nichols CM. Future research directions in the study of counselor multicultural competency. J Multicult Couns Devel. 2001;29(1): 3–12.
  79. Pope-Davis DB, Liu WM, Toporek RL, Brittan-Powell CS. What's missing from multicultural competency research: Review, introspection, and recommendations. Cultur Divers Ethnic Minor Psychol. 2001;7(2): 121–138. pmid:11381815
  80. Statistisches Bundesamt. Bildung und Kultur. Studierende an Hochschulen [Education and culture. Students at higher education institutions]. Fachserie 11, Reihe 4.1. Wiesbaden: Statistisches Bundesamt (Destatis); 2010. Published 28 September 2010, corrected 11 October 2010.
  81. Strauß B, Barnow S, Brähler E, Fegert J, Fliegel S, Freyberger HJ, et al. Forschungsgutachten zur Ausbildung von Psychologischen Psychotherapeuten und Kinder- und Jugendpsychotherapeuten [Research report on the training of psychological psychotherapists and child and adolescent psychotherapists]. Commissioned by the Bundesministerium für Gesundheit [Federal Ministry of Health]. Jena: Universitätsklinikum Jena; April 2009.