
Clinical Learning Evaluation Questionnaire: A Confirmatory Factor Analysis

Authors Alnaami N, Al Haqwi A, Masuadi E 

Received 28 July 2020

Accepted for publication 22 November 2020

Published 8 December 2020 Volume 2020:11 Pages 953—961

DOI https://doi.org/10.2147/AMEP.S243614




Nuha Alnaami,1 Ali Al Haqwi,2 Emad Masuadi2

1Department of Clinical Skills, Alfaisal University, Riyadh, Middle Region, Kingdom of Saudi Arabia; 2Department of Medical Education, King Saud Bin Abdulaziz University for Health Sciences, Riyadh, Middle Region, Kingdom of Saudi Arabia

Correspondence: Nuha Alnaami
Alfaisal University, P.O. Box 50927, Riyadh 11533, Kingdom of Saudi Arabia
Tel +966 11 215 7777
Fax +966 11 215 7611
Email [email protected]

Introduction: The Clinical Learning Evaluation Questionnaire (CLEQ) is a multidimensional, reliable instrument designed to measure the effectiveness of the clinical learning environment for undergraduate medical students. This study seeks to measure and examine the underlying construct, along with its latent variables, by conducting a confirmatory factor analysis using structural equation modeling (SEM), so that the instrument can be utilized as an evaluation tool for the continuous improvement of educational environments and curricula.
Methods: A cross-sectional study was carried out on 185 third- and fourth-year medical students. A confirmatory factor analysis was conducted, beginning with a principal component analysis with varimax rotation in SPSS to explore the underlying constructs of the items and to determine the factor to which each item was tied. The data were then analyzed in AMOS to assess construct validity; items were reduced based on the modification indices and the standardized residual covariance of each item in order to determine the best model fit.
Results: A total of 185 students completed the CLEQ inventory. The original six-factor structure of the CLEQ did not achieve model fit (X2=1587.475, RMSEA=0.092, RMR=0.146, GFI=0.651, AGFI=0.601, CFI=0.728, NFI=0.626). However, the suggested four-factor model of the CLEQ displayed good model fit, with improved values (X2=86.184, RMSEA=0.052, RMR=0.062, GFI=0.903, AGFI=0.865, CFI=0.951, NFI=0.871). Internal consistency analysis showed that Cronbach’s alpha values of the original six-factor model ranged from 0.68 to 0.88, while those of the four-factor model ranged from 0.72 to 0.87.
Conclusion: This study did not support the proposed six-factor structure of the CLEQ tool. However, the four-factor CLEQ structure demonstrated an adequate degree of model fit and was found to be as reliable as the original structure. Further research on the predictive validity of the CLEQ is required, as well as a comparison of its psychometric properties across different institutions and countries.

Keywords: principal component analysis, varimax rotation, construct validity, clinical education, undergraduate medical students

Introduction

The clinical learning environment is an ideal educational climate, offering numerous opportunities for students to learn and become competent healthcare providers.1 Learning in the clinical environment allows students to develop the knowledge they acquire in lectures into the abilities and attitudes that pave the way for the achievement of clinical competence. The clinical training program begins during the clinical clerkship, in which medical students achieve competency-based learning objectives at different stages of the course. Undergraduate medical students have the opportunity to become proficient in their clinical training by practicing practical procedures, communication skills, professional skills, patient investigation and management, data interpretation, professional attitudes, and ethics.2,3

Training medical students in direct patient care is fundamental to the integration of their newly acquired skills with their existing knowledge from textbooks. However, since the 1960s, clinical-based education has exhibited many shortcomings, ranging from inadequate supervision of learners to insufficient time for discussion and feedback, a lack of sufficient clinical cases, and insufficiently structured clinical rotations.2–5 Overcoming such challenges is essential for the delivery of high-quality education and good patient outcomes.6,7 Indeed, the evaluation of the clinical learning environment is a crucial component in determining the quality of the clinical education and curriculum delivered. Performing such evaluations requires valid and reliable instruments.8,9

Many valuable tools are used to evaluate learning environments. The most commonly used is the Dundee Ready Educational Environment Measure (DREEM), which is employed by many educators across organizations and educational settings to appraise their institutions’ educational environments.10 The second most popular tool is the Postgraduate Hospital Educational Environment Measure (PHEEM), the purpose of which is to investigate the different areas of the clinical learning environments of junior doctors.11 Third, the Clinical Learning Environment Inventory (CLEI) measures students’ perceptions of the psycho-social aspects of learning in the clinical environment, the academic atmosphere, and the facilities, and their effects on the learning process.12 Finally, the Clinical Learning Evaluation Questionnaire (CLEQ) was developed to evaluate the clinical learning environment from the perspective of undergraduate medical students; it was constructed in response to the shortcomings of the existing instruments.13

The CLEQ is a tool designed to measure the effectiveness of the clinical learning environment for undergraduate medical students, whereas the other tools focus on exploring the educational climate in general.13 The CLEQ instrument was developed from the factors that contribute to effective clinical learning and was constructed with consideration of the findings of a qualitative study conducted at King Saud bin Abdulaziz University for Health Sciences (KSAU-HS), Riyadh, Saudi Arabia, which investigated effective clinical learning factors from the perspective of medical students and teachers.8 The initial CLEQ was intended to measure the following five factors that emerged from that study and that are often discussed in the literature: (1) the diversity of clinical cases, (2) the authenticity of the clinical practice, (3) the quality of direct observation and supervision, (4) the organization of the clinical sessions, and (5) the motivation to learn. Measuring the effectiveness of these five areas may positively influence students’ clinical education, which was the first aim of the CLEQ.13 During the exploratory factor analysis, a sixth factor was revealed: (6) self-awareness, which refers to knowing one’s own strengths and weaknesses. This factor was found to support the preceding five factors and to provide a better fit for the data.8 Only three items had to be deleted due to poor construction, after which a total of 37 items remained on the questionnaire.13

For self-developed questionnaires, verifying the items for construct validity is important, particularly when the questionnaire has not been used in past studies. This study sought to measure and examine the underlying construct along with the latent variables by conducting a confirmatory factor analysis using structural equation modeling (SEM).14 This validation method is required if the tool is to be used nationally and internationally for the purpose of assessing clinical learning environments.15

Confirmatory research or confirmatory factor analysis is used to test whether measures of a construct or a factor are consistent with the researcher’s understanding of the nature of that factor. It also verifies the number of underlying variables of the instrument (factors) and explores the item–factor relationship, in this case referred to as factor loadings.15
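In CFA terms, each observed item is modeled as a linear function of the latent factor to which it is assigned, and the standardized coefficient of that relationship is the factor loading. A standard textbook formulation of this measurement model (not specific to the CLEQ) is:

```latex
% Measurement model underlying CFA (standard formulation, not CLEQ-specific):
%   x_j          observed score on item j
%   \xi_k        latent factor k to which item j is assigned
%   \lambda_{jk} factor loading of item j on factor k
%   \delta_j     unique (error) term of item j
x_j = \lambda_{jk}\,\xi_k + \delta_j
```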

Researchers need measures with strong reliability and validity that can be used across different populations. Developing robust measures is a demanding and time-consuming process; confirmatory factor analysis takes existing measures one step further and facilitates the comparison of research findings when the same measure is applied across a greater number of studies.16 Confirmatory research can be used for various purposes, including the development of new measuring instruments, the evaluation of the psychometric properties of new measures, construct validation, and the examination of instrument effects.15,16

The present study had the following objectives:

  • To examine the psychometric quality of CLEQ in a sample of Saudi undergraduate medical students.
  • To retest the significance of specific item loadings.
  • To explore construct validity, as recommended by the original study.

Methods

A cross-sectional study was conducted using convenience sampling; Saudi medical undergraduates from three medical schools in their third or fourth year, ranging in age from 21 to 23 years, were invited to participate in this study. The sample size was estimated from the number of items in the questionnaire: a ratio of five participants per item was considered acceptable for the principal component analysis and confirmatory factor analysis to obtain model fit, giving a target sample size of N=185 (5 × 37 items).17

Data were collected by distributing the questionnaires in the clinical areas. Consent was obtained from the deans of the three medical colleges before the questionnaires were distributed, and informed consent was obtained from the respondents who volunteered to complete the questionnaire prior to its administration. Questionnaires were returned immediately upon completion. Data were analyzed using the Statistical Package for the Social Sciences (SPSS v.22) and Analysis of Moment Structures (AMOS v.21). The CLEQ was developed as a tool to measure the clinical climate during the students’ clerkship and was confirmed as a valid and reliable tool after the exploratory factor analysis study.13

The CLEQ tool consists of 37 items measuring six areas of effective clinical learning based on students’ perceptions: (1) Cases, (2) Authenticity of clinical experiences, (3) Supervision, (4) Organization of doctor–patient encounters, (5) Motivation to learn, and (6) Self-awareness. Each item is rated on a 5-point Likert scale.13 As a first step, principal component analysis with varimax rotation was performed to examine the item loadings and determine the factor on which each item was best retained. This step is vital, since an exploratory factor analysis must be conducted before the confirmatory factor analysis in order to verify the number of underlying latent variables (factors) and the pattern of observed variable–factor relationships.18,19 For construct validity, confirmatory factor analysis was completed using SEM in AMOS v.21 to assess the data fit by examining the fit indices, which were used as guidelines to reduce errors and improve the model fit. First, chi-square goodness-of-fit ratios (χ2/df) below 5.00 were considered acceptable. Second, the absolute indices, namely the goodness-of-fit index (GFI) and the adjusted goodness-of-fit index (AGFI), should measure 0.9 or above to indicate model fit. Third, the residual-based index, the root mean square error of approximation (RMSEA), was considered adequate at values below 0.07. Fourth, the incremental fit indices, namely the comparative fit index (CFI) and the normed fit index (NFI), should measure 0.9 or above. The absolute fit indices determine how well the proposed theory fits the data, while the calculation of the incremental fit indices relies on comparisons with a baseline model.15,16
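To make these decision rules concrete, the short Python sketch below (an illustration only; the study’s indices were produced by AMOS, and the degrees of freedom shown are an assumption) checks a set of fit indices against the cut-offs just described.

```python
# Sketch: checking model-fit indices against the cut-offs described above.
# The example values are the four-factor results reported later; df is assumed.

def assess_fit(fit: dict) -> dict:
    """Return whether each fit criterion used in this study is met."""
    return {
        "normed chi-square (chi2/df) < 5.00": fit["chi2"] / fit["df"] < 5.00,
        "GFI >= 0.90": fit["GFI"] >= 0.90,
        "AGFI >= 0.90": fit["AGFI"] >= 0.90,
        "RMSEA < 0.07": fit["RMSEA"] < 0.07,
        "CFI >= 0.90": fit["CFI"] >= 0.90,
        "NFI >= 0.90": fit["NFI"] >= 0.90,
    }

example = {"chi2": 186.184, "df": 129,  # df = 129 is illustrative, not taken from the paper
           "GFI": 0.903, "AGFI": 0.865, "RMSEA": 0.052, "CFI": 0.951, "NFI": 0.871}
for criterion, met in assess_fit(example).items():
    print(f"{criterion}: {'met' if met else 'not met'}")
```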

Then, the reliability of the six-factor and four-factor CLEQ models was analyzed using Cronbach’s alpha. Finally, Pearson correlation coefficients were computed to measure the dependence between the factors.
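Cronbach’s alpha was computed in SPSS; purely to illustrate the underlying calculation, a minimal pandas sketch (assuming a DataFrame whose columns are the items of one factor) is shown below.

```python
# Sketch: Cronbach's alpha for the items belonging to one factor.
# 'items' is assumed to be a DataFrame with rows = respondents, columns = that factor's items.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]                                # number of items in the factor
    item_variances = items.var(axis=0, ddof=1).sum()  # sum of individual item variances
    total_variance = items.sum(axis=1).var(ddof=1)    # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Usage (hypothetical column names):
# alpha_cases = cronbach_alpha(responses[["item1", "item3", "item4", "item6"]])
```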

This study was approved by the ethics committee at King Abdullah International Medical Research Center (KAIMRC), King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia.

Results

A total of 185 (100%) third- and fourth-year students (78 female, 107 male) completed the CLEQ.

Exploratory Factor Analysis

First, the principal component analysis was completed with the varimax rotation method, as in the first exploratory study.13 The Kaiser-Meyer-Olkin (KMO) measure and Bartlett’s test indicated that the data set was adequate and appropriate for exploratory factor analysis (EFA) (Table 1).

Table 1 KMO and Bartlett’s Test
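These adequacy checks were run in SPSS; purely as an illustration, an equivalent check in Python using the factor_analyzer package (an assumption about tooling, with dummy data standing in for the real responses) could look like this:

```python
# Sketch: sampling-adequacy checks (Bartlett's test of sphericity and the KMO measure)
# before exploratory factor analysis, using the factor_analyzer package.
import numpy as np
import pandas as pd
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

# Dummy stand-in for the real data: 185 respondents x 37 Likert (1-5) items.
rng = np.random.default_rng(0)
responses = pd.DataFrame(rng.integers(1, 6, size=(185, 37)),
                         columns=[f"item{i}" for i in range(1, 38)])

chi_square_value, p_value = calculate_bartlett_sphericity(responses)
kmo_per_item, kmo_overall = calculate_kmo(responses)
print(f"Bartlett's test: chi2 = {chi_square_value:.2f}, p = {p_value:.4f}")
print(f"Overall KMO = {kmo_overall:.2f}")  # values of 0.6 or above are conventionally adequate
```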

The analysis was first performed with six components (factors). On interpretation, it was found that the item loadings fit better in a five-factor structure, since none of the items loaded on the sixth factor (self-awareness); each item was assigned to the factor on which it had the highest loading, with a cutoff of 0.3 or higher. Consequently, two factors were merged into one: factors 5 and 6 became a single factor, renamed motivation to learn, after items from both loaded highly together on one factor (Table 2).20

Table 2 Principal Component Analysis of the Items on Five Factors
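The extraction and rotation were performed in SPSS; the sketch below illustrates the same logic with the factor_analyzer package, whose "principal" extraction approximates the PCA-based approach used here: six rotated components are extracted, and each item is assigned to its highest-loading component with the 0.3 cutoff.

```python
# Sketch: extract six rotated components and assign each item to the component on which
# it loads highest, applying the |loading| >= 0.3 cutoff used in this study.
import pandas as pd
from factor_analyzer import FactorAnalyzer

# 'responses' is the respondents x items DataFrame from the previous sketch.
fa = FactorAnalyzer(n_factors=6, rotation="varimax", method="principal")
fa.fit(responses)

loadings = pd.DataFrame(fa.loadings_, index=responses.columns,
                        columns=[f"F{i}" for i in range(1, 7)])
assigned = loadings.abs().idxmax(axis=1)                    # highest-loading component per item
assigned[loadings.abs().max(axis=1) < 0.3] = "unassigned"   # enforce the 0.3 cutoff
print(assigned)
```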

Descriptions of the Five Factors

Factor 1: Cases (7 items)

The item loadings showed that item 16, which belonged to the authenticity of clinical experiences factor, loaded higher with cases, indicating that it was more related to the cases factor than to authenticity. Items 7 and 8 loaded higher on factor 2; therefore, they were moved there.

Factor 2: Authenticity of clinical experiences (7 items)

Items 7 and 8, from the cases factor, loaded higher on this factor; these two items were originally constructed to belong to it. They were joined by item 24 from supervision, which loaded poorly and was retained in this factor on the basis of its highest loading. Items 13, 14, and 15 from this factor loaded higher on the organization factor. Finally, item 16 from this factor was shifted to cases.

Factor 3: Supervision (6 items)

Items from this factor loaded together except for items 23 and 24.

Factor 4: Organization of clinical encounters (7 items)

Items 13, 14, and 15 from the authenticity factor loaded higher on this factor. These three statements tie closely to organization and were interpreted as belonging to this factor. In addition, the statement of item 23 fit better with the organization factor. Item 25 from this factor loaded with factor 5.

Factor 5: Motivation to learn (9 items)

This factor contains items 25 to 37, including item 25 from factor 4. It has the highest number of items, since it was merged with factor 6, self-awareness. The statement of item 25 was found to fit better in this factor and complemented its other items.

The subsequent analysis was performed based on this arrangement of factors and items. Item placements were decided not only on the highest loadings but also on the theoretical implications for the CLEQ.

Confirmatory Factor Analysis

SEM showed that the original six-factor CLEQ consisting of 37 items did not achieve model fit. During confirmatory factor analysis (CFA), many versions of the CLEQ were tested, including the re-arranged five-factor structure; none of these models achieved model fit, as neither the absolute nor the incremental indices met the recommended thresholds.20 As a result, the first step was to allow items within factors to covary, guided by the modification indices. The next step was item reduction based on the standardized residual covariances, which identified redundant items (those sharing essentially the same meaning) for removal. The results are shown in Table 3. The four-factor model (ie, the shortened CLEQ) consisting of 18 items showed a good degree of model fit, as most of the indices indicated (X2=186.184, RMSEA=0.052, RMR=0.062, GFI=0.903, AGFI=0.865, CFI=0.951, NFI=0.871).21 The standardized factor loadings of the items in the original six-factor structure ranged from 0.35 to 0.86, and those in the four-factor structure ranged from 0.44 to 0.82, indicating that items from both structures contributed substantially to the constructs being measured (Figure 1).20,21

Table 3 Results of Confirmatory Factor Analysis

Figure 1 Standardized factor loadings for the best fit model of CLEQ.
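The CFA was specified and estimated in AMOS. For readers working with open-source tools, the sketch below shows how a four-factor measurement model of this kind could be specified and its fit indices obtained with the semopy package; the item-to-factor groupings are placeholders, not the final 18-item CLEQ structure.

```python
# Sketch: a four-factor CFA specified in lavaan-style syntax and estimated with semopy.
# Item groupings are placeholders; 'responses' is the item-level DataFrame used earlier.
import semopy

model_desc = """
Cases        =~ item1 + item3 + item4 + item6
Supervision  =~ item20 + item21 + item22 + item26
Organization =~ item14 + item15 + item17 + item23
Motivation   =~ item32 + item33 + item35 + item36 + item37 + item27
"""
model = semopy.Model(model_desc)
model.fit(responses)
fit_stats = semopy.calc_stats(model)   # includes chi2, RMSEA, CFI, GFI, AGFI, NFI, among others
print(fit_stats.T)
```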

The best fit was model 4: the four-factor structure obtained after the removal of 19 items (16, 24, 8, 25, 31, 11, 18, 28, 29, 9, 10, 12, 7, 5, 30, 13, 2, 19, 34) (Table 4).

Table 4 Modification Indices Until Model Fit Was Achieved

Reliability analysis showed the four-factor model to be as reliable as the original six-factor CLEQ model (Cronbach’s alpha values of the original six-factor model ranged from 0.68 to 0.88, while those of the four-factor model ranged from 0.72 to 0.87).

Finally, correlations were computed between the four factors (Table 5). The table shows that all correlations are significant at the 0.01 level.21 The exploratory factor analysis in the original study showed a positive correlation between motivation and self-awareness, which became one factor in this study.13

Table 5 Correlation Matrix
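As a brief illustration of how such a correlation matrix can be produced (with the same hypothetical item groupings as in the previous sketch), each respondent’s mean score per factor can be correlated using Pearson’s method:

```python
# Sketch: Pearson correlation matrix between the four factors, using each respondent's
# mean score on that factor's items (hypothetical groupings; 'responses' as before).
import pandas as pd

factor_items = {
    "Cases":        ["item1", "item3", "item4", "item6"],
    "Supervision":  ["item20", "item21", "item22", "item26"],
    "Organization": ["item14", "item15", "item17", "item23"],
    "Motivation":   ["item32", "item33", "item35", "item36", "item37", "item27"],
}
factor_scores = pd.DataFrame({name: responses[items].mean(axis=1)
                              for name, items in factor_items.items()})
print(factor_scores.corr(method="pearson").round(2))
```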

Discussion

This study was based on a previous study that focused on the factor structure, validity, and reliability of the newly constructed CLEQ.13 The 37 items of the CLEQ were based on that previous work on the development of a clinical learning evaluation questionnaire for undergraduate clinical education. The items were assigned to six factors: cases, authenticity of the clinical learning experience, supervision, organization of the doctor–patient encounter, motivation to learn, and self-awareness. The first aim of this study was to investigate the reliability and validity of these factors in a different sample of students.13 The study was conducted with 185 undergraduate students from three medical schools in Riyadh, Saudi Arabia. During the analysis, the arrangement of items within factors differed from that of the previous study; after the items were attributed to the factors on which they had the highest loading, a five-factor structure of the CLEQ emerged, since none of the items loaded on the sixth factor.22

The data from this study did not support the original six-factor structure consisting of 37 items or the proposed five-factor structure with 37 items measuring the clinical learning climate. The findings seem to be consistent with the previous study, which showed that construct validity was not well supported.13

The study then attempted to propose several versions of the CLEQ that could meet the requirements of model fit. Although it was not recommended by the previous study, the six-factor structure of the 37-item CLEQ (ie, the original CLEQ) was tested and failed to demonstrate model fit. This may suggest that the CLEQ measures multiple constructs.20,21 The six-factor structure (see Supplementary Material 1) could not achieve model fit, suggesting the redundancy of a few items. The data indicated that the four-factor structure with 18 items (shown in Supplementary Material 2) demonstrated model fit, since the fit indices and chi-square values met the criteria.19 Factor 2, authenticity of clinical experience, was removed because it poorly represented the construct being measured; according to the findings of this study, the authenticity of the clinical learning experience relied on the organization of patient encounters from the perspective of medical students.9 There is a significant correlation between the organization and supervision factors, which indicates that those factors are closely related to the quality of the clinical learning experience from the perspective of medical students.23

The removal of 19 items during confirmatory factor analysis (CFA) may dramatically change the underlying factor structure of CLEQ, considering that they might hold useful and meaningful constructs in the clinical learning environment. However, these findings also suggest that there are repetitions of similar items assessing similar constructs that compromise the construct validity of the CLEQ.22

Most of the standardized correlation values between the four factors were greater than 0.5, indicating overlap and a lack of distinctiveness between the constructs. This suggests that either the items need to be restructured to fit the proposed structure, or the model itself needs to be reconsidered and revised.18 Indeed, the four-factor model upon which it is built may need to be completely revised. These issues need to be addressed if the CLEQ is to be used. This might then be followed by subjecting a large-scale international sample to structural equation modeling analysis. Perhaps more constructs are being measured by the CLEQ, as was noted in the previous study.13,22

Despite all these adjustments, several strengths support the trustworthiness of the data in this study. First, the sample was selected from students across different years of study and may therefore represent students at different stages of training at the three medical schools. Second, the sample size was calculated based on the recommended ratio of subjects per item. Third, multiple alternative models were tested in this study to reach an ideal model fit. All modifications to the measurement models in this study were disclosed.24

Limitations

The findings of this study are based on the context of Saudi medical colleges. There is a small chance that language factors contributed to the potential shortcomings of this research: although the CLEQ items were written in straightforward, comprehensible English, a proportion of the students may not have been fully proficient in English. In addition, this study was conducted in three medical colleges selected by convenience sampling, and the sample may not represent medical students in Riyadh or the rest of the country.

Recommendations

Future research should compare the psychometric properties of the CLEQ across medical schools around the country and internationally. Further, other dimensions of validity should be explored, such as predictive validity, in order to obtain validity evidence on the relationships between the survey scores and other variables.

Acknowledgments

The completion of this study could not have been possible without the participation and assistance of respected instructors, supervisors, and reviewers. We would like to especially thank all the medical students who took the time to respond to the questionnaires. Their contributions are sincerely appreciated and gratefully acknowledged.

Disclosure

The authors report no conflicts of interest in this work.

References

1. Nordquist J, Hall J, Caverzagie K, et al. The clinical learning environment. Med Teach. 2019;41(4):366–372. doi:10.1080/0142159X.2019.1566601

2. Dent J, Harden R, Hunt D, eds. A Practical Guide for Medical Teachers. 5th ed. Dundee, UK: Elsevier Health Sciences; 2017.

3. Sullivan N, Swoboda SM, Breymier T, et al. Emerging evidence toward a 2: 1 clinical to simulation ratio: a study comparing the traditional clinical and simulation settings. Clin Simul Nurs. 2019;30:34–41. doi:10.1016/j.ecns.2019.03.003

4. Ramani S, Leinster S. AMEE guide no. 34: teaching in the clinical environment. Med Teach. 2008;30(4):347–364. doi:10.1080/01421590802061613

5. AlHaqwi A, Van der Molen HT. Achieving clinical competence. Saudi Med J. 2010;31(4):357–358.

6. Spencer J. ABC of learning and teaching in medicine: learning and teaching in the clinical environment. BMJ. 2003;326(7389):591–594. doi:10.1136/bmj.326.7389.591

7. Baxley E. The clinic is the curriculum: can attention to the clinical learning environment enhance improvement in health care delivery and outcomes? J Am Board Fam Med. 2020;33(Supplement):S46–S49. doi:10.3122/jabfm.2020.S1.190447

8. AlHaqwi A, Van der Molen HT, Schmidt HG, Magzoub ME. Determinants of effective clinical learning: a student and teacher perspective in Saudi Arabia. Educ Health. 2010;23(2):1–14.

9. Pololi L, Price J. Validation and use of an instrument to measure the learning environment as perceived by medical students. Teach Learn Med. 2000;12(4):201–207. doi:10.1207/S15328015TLM1204_7

10. Roff S, McAleer S, Harden RM, et al. Development and validation of the Dundee Ready Education Environment Measure (DREEM). Med Teach. 1997;19(4):295–299. doi:10.3109/01421599709034208

11. Roff S, McAleer S, Skinner A. Development and validation of an instrument to measure the postgraduate clinical learning and teaching educational environment for hospital-based junior doctors in the UK. Med Teach. 2005;27(4):326–331. doi:10.1080/01421590500150874

12. Newton JM, Jolly BC, Ockerby CM, Cross WM. Clinical learning environment inventory: factor analysis. J Adv Nurs. 2010;66(6):1371–1381. doi:10.1111/j.1365-2648.2010.05303.x

13. AlHaqwi AI, Kuntze J, Van der Molen H. Development of the clinical learning evaluation questionnaire for undergraduate clinical education: factor structure, validity, and reliability study. BMC Med Educ. 2014;14:44. doi:10.1186/1472-6920-14-44

14. Agarwal NK. Verifying survey items for construct validity: a two-stage sorting procedure for questionnaire design in information behavior research. Proc Am Soc Inf Sci Technol. 2011;48(1):1–8. doi:10.1002/meet.2011.14504801166

15. Brown TA. Confirmatory Factor Analysis for Applied Research. New York: Guilford Publications; 2015.

16. Harrington D. Confirmatory Factor Analysis. Oxford: Oxford University Press; 2009.

17. Osborne JW, Costello AB. Sample size and subject to item ratio in principal components analysis. Pract Assess Res Eval. 2004;9(1):11.

18. Lewis TF. Evidence regarding the internal structure: confirmatory factor analysis. Meas Eval Couns Dev. 2017;50(4):239–247. doi:10.1080/07481756.2017.1336929

19. Ferrando PJ, Lorenzo-Seva U. Assessing the quality and appropriateness of factor solutions and factor score estimates in exploratory item factor analysis. Educ Psychol Meas. 2019;78(5):762–780. doi:10.1177/0013164417719308

20. Hooper D, Coughlan J, Mullen MR. Structural equation modelling: guidelines for determining model fit. Electron J Bus Res Methods. 2007;6(1):53–60.

21. Barrett P. Structural equation modelling: adjudging model fit. Pers Individ Dif. 2007;42(5):815–824. doi:10.1016/j.paid.2006.09.018

22. Thompson B, Daniel LG. Factor analytic evidence for the construct validity of scores: a historical overview and some guidelines. Educ Psychol Meas. 1996;56(2):197–208. doi:10.1177/0013164496056002001

23. Gogtay NJ, Thatte UM. Principles of correlation analysis. J Assoc Physicians India. 2017;65(3):78–81.

24. Crede M, Harms P. Questionable research practices when using confirmatory factor analysis. J Manag Psychol. 2019;34(1):18–30. doi:10.1108/JMP-06-2018-0272
