
The Relational Aspects of Care Questionnaire: item reduction and scoring using inpatient and accident and emergency data in England

Authors Kelly L, Sizmur S, Käsbauer S, King J, Cooper R, Jenkinson C, Graham C

Received 16 November 2017

Accepted for publication 28 March 2018

Published 19 June 2018 Volume 2018:9 Pages 173–181

DOI https://doi.org/10.2147/PROM.S157213




Laura Kelly,1 Steve Sizmur,2 Susanne Käsbauer,2 Jenny King,2 Robyn Cooper,2 Crispin Jenkinson,1 Chris Graham2

1Health Services Research Unit, Nuffield Department of Population Health, University of Oxford, Oxford, UK; 2Research Division, Picker Institute Europe, Oxford, UK

Purpose: The Relational Aspects of Care Questionnaire (RAC-Q) is an electronic instrument developed to assess staff’s interactions with patients when delivering relational care to inpatients and those accessing accident and emergency (A&E) services. The aim of this study was to reduce the number of questionnaire items and explore scoring methods for “not applicable” response options.
Patients and methods: Participants (n=3928) were inpatients or A&E attendees across six participating hospital trusts in England during 2015–2016. The instrument, consisting of 20 questionnaire items, was administered by trained hospital volunteers over a period of 10 months. Items were subjected to exploratory factor analysis to confirm unidimensionality, and the number of items was reduced using a range of a priori psychometric criteria. Two alternative approaches to scoring were undertaken: the first treated “not applicable” responses as missing data, while the second adopted a problem score approach in which “not applicable” was considered “no problem with care.”
Results: Two short-form RAC-Qs with alternative scoring options were identified. The first (the RAC-Q-12) contained 12 items, while the second scoring option (the RAC-Q-14) contained 14 items. Scores from both short forms correlated highly with the full 20-item parent form score (RAC-Q-12, r=0.93 and RAC-Q-14, r=0.92), displayed high internal consistency (Cronbach’s α: RAC-Q-12=0.92 and RAC-Q-14=0.89) and had high levels of agreement (intraclass correlation coefficient [ICC]=0.97 for both scales).
Conclusion: The RAC-Q is designed to offer near-real-time feedback on staff’s interactions with patients when delivering relational care. The new short-form RAC-Qs and their respective scoring methods produce scores that closely reflect those derived using the full 20-item parent form. The new short-form RAC-Qs may be incorporated into inpatient surveys to enable the comparison of ward or hospital performance. Using either the RAC-Q-12 or the RAC-Q-14 offers a way to reduce missing data and response fatigue.

Keywords: real-time feedback, patient experience, surveys, hospital care, emergency care

Introduction

Person-centered care is considered to be a key component of high-quality health care in the UK.1–4 Achieving person-centered care, however, can be challenging as it concerns not only how services are delivered but also relationships between health care professionals and patients who have varying levels of dependency.5,6 It is important that experiences of care received are monitored to provide insights into how care delivery can be improved. A number of patient experience measures are currently in use in the National Health Service (NHS). These measures are predominantly focused on measuring transactional or “functional” aspects of care, such as cleanliness, waiting times and pain management.7–10 While functional aspects of care are important, they do not reflect “relational” aspects of care, which focus on interactions between health care professionals and patients.11 Fostering a relationship-centered approach to care, particularly in relation to older patients, is believed to be key in providing positive experiences of care and can contribute to patients’ emotional comfort.12–14 “Relational” care concerns interpersonal aspects of care, such as communication, providing the space for patients to discuss concerns or fears and treating patients with respect and dignity.11,15

Despite the importance of relational care, a recent independent inquiry16 into the care provided by the UK Mid Staffordshire NHS Foundation Trust identified deficiencies in its delivery. A key recommendation of the report was the need for a suitable instrument to assess relational aspects of care, in particular for use among older inpatients and those accessing accident and emergency (A&E) services. In addition, the inquiry highlighted real-time data collection as a key mechanism with which to monitor relational care in a timely and efficient manner. Before this work, there was no electronic instrument with which to measure staff’s interactions with older people or those attending A&E services with regard to relational care. With these considerations in mind, the University of Oxford (Oxford, UK) and Picker Institute Europe (Oxford, UK) developed an instrument, the Relational Aspects of Care Questionnaire (RAC-Q),17 for use within this context.

The RAC-Q was designed for administration on a tablet computer to allow responses to be fed back to participating wards in “near real time.” In brief, questionnaire items were constructed following a review of relevant literature, a focus group and eight interviews carried out with recent inpatients and A&E attendees. Eight overarching themes reflecting staff’s interactions with patients when delivering relational care were identified, and questionnaire items were sourced to reflect each theme. Items, representing likely manifestations of staff’s interaction style when delivering relational care, were selected in the belief that positive patient responses would indicate the delivery of good relational care.18,19 Existing items were sourced from the 2013 National NHS Inpatient and Emergency Department surveys, while additional items were developed where no existing items were deemed suitable. A total of 62 items were reviewed by an expert advisory group (n=5) consisting of public and patient representatives, hospital patient experience representatives and academics, who reduced the 62 identified items to 22. Finally, cognitive interviews (n=30) were conducted with current inpatients and A&E patients at three hospitals in England to identify duplicate items and ensure participants’ understanding. A detailed account of the instrument’s development is outlined elsewhere.17 During testing, the questionnaire was administered via a tablet computer, ensuring its suitability for use in the inpatient context. The process resulted in 20 confirmed items regarded as relevant and acceptable for measuring staff’s interactions when delivering relational care. Item responses were coded 0–100, where 0=worst care and 100=best care. Nine items also included an additional response option to indicate that the question was not applicable to the respondent. The 20-item questionnaire was subsequently administered during a 10-month pilot study across six trusts, in selected wards providing care primarily to people aged 75 years and older and among those attending A&E.
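To illustrate the scoring scheme, a minimal Python sketch is given below; the response labels shown are hypothetical and do not reproduce the actual RAC-Q wording.

```python
# Hypothetical mapping of one item's response options onto the 0-100
# scale described above (labels are illustrative, not RAC-Q wording).
RESPONSE_SCORES = {
    "Yes, definitely": 100,   # best care
    "Yes, to some extent": 50,
    "No": 0,                  # worst care
    "Not applicable": None,   # nonevaluative; handling differs by approach
}

def score_response(label):
    """Return the 0-100 score for a response label, or None if unscored."""
    return RESPONSE_SCORES.get(label)
```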

A detailed evaluation of the barriers and facilitators to the implementation of the RAC-Q is outlined elsewhere.20 While many barriers to collecting real-time data relate to technology resources and staff engagement, reducing the number of questionnaire items administered is one way to reduce the burden of questionnaire completion. Fewer questionnaire items are particularly welcome given that older people and A&E attendees can face unique challenges when completing lengthy questionnaires. For example, many are likely to be living with conditions affecting hearing, speech, vision and cognitive processing, and those attending A&E services may be in acute pain, in shock or experiencing trauma.39,40 The first aim of the analyses reported in this study was to investigate the feasibility of reducing the length of the original 20-item questionnaire through the creation of a short-form RAC-Q. Short forms have been developed for a large number of widely used questionnaires, and many have been found to be acceptable and informative, providing similar results to their original parent forms.21–23 Fewer items can be advantageous in studies or evaluations where additional items or measures are administered concurrently, or where patient burden should be minimized (e.g., with older patients or those who have recently experienced acute pain or shock).24

A second aim of these analyses was to explore methods of scoring the questionnaire in the case of missing or nonevaluative data. This included exploring the feasibility of allocating scores to the “not applicable” response options of the nine relevant items within the instrument so that information was not lost and missing data were reduced. Scoring “not applicable” responses that are typically left unscored can maximize the number of usable responses and may improve the interpretation of results for health care providers. Two approaches to analysis were taken to realize the aims outlined. The first approach left “not applicable” responses unscored, similar to the methods used in the NHS patient survey program.25 The second approach introduced a “problem score” approach in which responses were rescored to indicate the presence or absence of a problem with staff’s interaction styles in the context of relational care. Problem score approaches have been used successfully in the past26,27 and, in this instance, “not applicable” responses were recoded from “no score” to “100,” indicating “no problem with care” (Table 1).

Table 1 Response option codes for approach 1 and approach 2
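As a minimal sketch of the two scoring approaches (assuming “not applicable” responses are stored with a sentinel code of -1, an assumption about data layout rather than the published coding scheme):

```python
import numpy as np
import pandas as pd

NOT_APPLICABLE = -1  # assumed sentinel code for "not applicable" responses

def score_approach_1(responses: pd.Series) -> pd.Series:
    """Approach 1: leave "not applicable" unscored (treated as missing)."""
    return responses.replace(NOT_APPLICABLE, np.nan)

def score_approach_2(responses: pd.Series) -> pd.Series:
    """Approach 2 (problem score): "not applicable" is taken to mean
    "no problem with care" and is recoded to 100 (best care)."""
    return responses.replace(NOT_APPLICABLE, 100)
```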

Patients and methods

Data source

The purpose of this study was to explore two approaches to item reduction and scoring for the RAC-Q. The study focused on whether reducing the number of items, and differences in the scoring of items, had any implications for the interpretation of the overall score. Data reported in this study were collected over 10 months in six participating trusts in England during 2015–2016. The instrument, consisting of 20 questionnaire items, was administered by hospital volunteers who had received training on the use of the tablet, the practice of administering questionnaires and how to approach patients. Two study sites opted to use a free-standing tablet kiosk within their A&E department instead of volunteers due to environmental constraints. All participants provided written informed consent. The East of Scotland Research Ethics Service reviewed this study and provided a favorable opinion in August 2014 (14/ES/1065).

Analysis

Two approaches to item reduction and scoring were applied (Figure 1). Analyses for both approaches were initially restricted to the 3889 patient cases (out of a total of 3928) deemed “usable,” that is, cases where at least four valid item responses were recorded. To optimize efficiency and minimize respondent burden, the analysis aimed to retain only the most meaningful and relevant items. Following preliminary exploratory factor analysis that confirmed the unidimensionality of responses,28 items were subjected to preliminary data checks to confirm their suitability for inclusion using a range of a priori criteria, as outlined for each approach. Questionnaire items with more than 5% nonresponse (skipped or missing responses) were excluded. In the first scoring approach (approach 1), items with high numbers of unscored “not applicable” responses, which limited the information they provided, were excluded; items with <90% scored responses were also excluded.
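The approach 1 screening criteria might be sketched as follows; the data layout (skipped responses stored as missing values, “not applicable” as a sentinel code) is assumed for illustration.

```python
import pandas as pd

NOT_APPLICABLE = -1  # assumed sentinel; skipped responses are stored as NaN

def approach_1_item_filter(items: pd.DataFrame) -> list:
    """Retain items meeting the approach 1 a priori criteria:
    no more than 5% nonresponse and at least 90% scored responses
    once "not applicable" answers are left unscored."""
    retained = []
    for col in items.columns:
        skipped = items[col].isna()
        not_applicable = items[col] == NOT_APPLICABLE
        scored = ~(skipped | not_applicable)
        if skipped.mean() <= 0.05 and scored.mean() >= 0.90:
            retained.append(col)
    return retained
```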

Figure 1 Steps of analysis and item removal.

Abbreviation: NHS, National Health Service.

The second approach to scoring (approach 2) aimed to maximize usable patient data and identify the absence or presence of a problem with the delivery of relational care. Items with “not applicable” categories were recoded as outlined in Table 1, and items were removed if they had exceptionally high (>90%) floor or ceiling effects as they indicated poor variation and provided little information. Items that displayed a high number of poor correlations (<0.3) with other items in the questionnaire were also removed. Poor correlations with a large number of items can indicate that a particular item is not measuring a similar construct to other items in the scale.29,30 Finally, reliability analysis was used to identify items with low item-to-total correlations (<0.3) and items that lowered the Cronbach’s α value.29 Items displaying a high number of poor correlations with other items or items that lowered the Cronbach’s α value were iteratively removed.
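As an illustration of the reliability step, the sketch below computes Cronbach’s α and iteratively drops the item whose removal most increases it. This is a simplified rendering of the general technique, not the exact procedure used, which also weighed inter-item and item-to-total correlations.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of item columns (complete cases only)."""
    data = items.dropna()
    k = data.shape[1]
    item_variances = data.var(axis=0, ddof=1).sum()
    total_variance = data.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def iterative_alpha_removal(items: pd.DataFrame) -> pd.DataFrame:
    """Iteratively drop the item whose removal most increases alpha,
    stopping once no single removal improves it."""
    current = items.copy()
    while current.shape[1] > 2:
        alpha = cronbach_alpha(current)
        without = {c: cronbach_alpha(current.drop(columns=c))
                   for c in current.columns}
        candidate = max(without, key=without.get)
        if without[candidate] <= alpha:
            break
        current = current.drop(columns=candidate)
    return current
```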

Score comparisons

For each item, scored response categories were valued from 0 (worst care) to 100 (best care). Scale scores were obtained by calculating the mean item score for cases with complete scored data only (i.e., where a scored response was obtained for all 20 items). In the case of non-normal distributions, scale scores were also standardized using the Blom approach for normalization.31
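A minimal sketch of the scale scoring and the Blom transformation, assuming item responses (0–100, with unscored responses as missing values) are held in a pandas DataFrame with one column per item:

```python
import pandas as pd
from scipy.stats import norm, rankdata

def scale_score(items: pd.DataFrame) -> pd.Series:
    """Mean item score (0-100), computed for complete cases only."""
    complete = items.dropna()
    return complete.mean(axis=1)

def blom_normalize(scores: pd.Series) -> pd.Series:
    """Blom rank-based normalization: z = Phi^-1((r - 3/8) / (n + 1/4)),
    where r is the (average) rank and n the number of scores."""
    ranks = rankdata(scores)  # ties receive average ranks
    n = len(scores)
    z = norm.ppf((ranks - 3.0 / 8.0) / (n + 1.0 / 4.0))
    return pd.Series(z, index=scores.index)
```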

Differences between the 20-item survey and the new reduced scales were assessed using a paired-samples t-test. Agreement between scales was measured using the intraclass correlation coefficient (ICC; two-way mixed model, for both consistency and exact agreement), and ordinal agreement was assessed using Spearman’s ρ.32
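For reference, these comparisons can be computed as sketched below; the ICC(3,1) formula follows McGraw and Wong,32 and the function and variable names are assumptions for illustration.

```python
import numpy as np
from scipy.stats import spearmanr, ttest_rel

def icc_consistency(x, y):
    """ICC(3,1): two-way mixed model, single measures, consistency
    definition (McGraw and Wong), for two scores per respondent."""
    data = np.column_stack([x, y])
    n, k = data.shape
    grand = data.mean()
    ss_rows = k * ((data.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((data.mean(axis=0) - grand) ** 2).sum()
    ss_total = ((data - grand) ** 2).sum()
    ms_rows = ss_rows / (n - 1)
    ms_error = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_error) / (ms_rows + (k - 1) * ms_error)

def compare_scales(parent_scores, short_form_scores):
    """Paired t-test, ICC and Spearman's rho between two scale scores."""
    t_stat, p_value = ttest_rel(parent_scores, short_form_scores)
    rho, _ = spearmanr(parent_scores, short_form_scores)
    return {"t": t_stat, "p": p_value, "spearman_rho": rho,
            "icc": icc_consistency(parent_scores, short_form_scores)}
```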

Scores for the new reduced questionnaires were compared across patient characteristics (sex, age) and hospital trusts to investigate whether the instrument could detect differences between patient groups. Modes of completion were also compared to ensure that patient responses did not differ depending on the device used to administer the survey.

Results

Characteristics

The average length of time taken to complete the survey, when restricted to questionnaire sessions that started and finished on the same day (n=3908), was 8.5 min (SD 9.9). Restricting questionnaire sessions to those completed in the same day gave a more accurate account of the average time for survey completion because it excluded submissions that were not uploaded at the time of completion.

All further analyses for item reduction were restricted to responses deemed “usable” (those having at least four valid responses to scored items). The sample (n=3889) included 1687 (43.4%) men and 2044 (52.6%) women with a mean age of 65 years (SD 20.9, median 70 years). Further sample characteristics are summarized in Table 2. Participants predominantly completed the survey on a tablet (n=3590, 92.3%), while the remainder (n=299, 7.7%) completed the survey using a kiosk.

Table 2 Characteristics (for cases with at least four responses)

Approach 1 analysis

Eight items were removed according to the a priori criteria outlined as follows. One item (item 17) was removed due to a high level of missing data from nonresponse (6.2%). For all other questions, at least 95% of respondents recorded an answer. While the proportion giving a response of “do not know” was generally low (<2% of responses), numbers of possible “not applicable” responses (e.g., “I have not asked for help”) were high. In this reduction approach, “not applicable” responses were left unscored. Taking into account the extent of nonevaluative (not applicable) item data, two further items (items 9 and 16) were removed. The 17 retained items still included some items with relatively high proportions of missing data, which presented potential problems in the computation of an overall score; only 1561 cases had complete data on the reduced 17-item set. A further step of item removal was therefore undertaken, and five items (items 7, 8, 12, 14 and 15) with <90% scored responses were removed, leaving 12 items for a new short-form RAC-Q to be compared against the full RAC-Q. Removed items are summarized in Table 3.

Table 3 Items removed during the study for approaches 1 and 2

Approach 2 analysis

Six items were removed according to the a priori criteria outlined in the “Patients and methods” section. One item (item 17) was removed due to high numbers of missing data (6.2%) and high numbers of poor correlations with other questionnaire items. Two items (items 18 and 20) were removed due to ceiling effects, where >90% of respondents gave the same answer. Next, one item (item 12) was removed because doing so increased the internal consistency of the instrument and the item had a high number of poor correlations with other items. Two final items (items 1 and 2) were removed due to high numbers of poor correlations with other items. Removed items are summarized in Table 3.

Scale comparisons

Scale statistics are reported in Table 4 for cases with complete scored data on both the full 20-item parent form and the respective short forms (approach 1, n=609 and approach 2, n=2967). For both approaches, results showed that agreement and consistency between the parent form and the respective short-form scales were high. The internal consistency of both the 12- and 14-item scales was excellent, with item-total correlations ranging from 0.54 to 0.79 and 0.51 to 0.70, respectively. Differences between the parent form and the RAC-Q-12 short form were assessed using a paired-samples t-test. The difference was statistically significant in raw score form (t=–11.05, df=608, p<0.0001), but not when both scales were normalized (t=0.37, df=608, p=0.71). Similarly, a significant difference was observed between the parent form and the RAC-Q-14 short-form raw scores (t=–19.15, df=2966, p<0.001); a small but statistically significant difference remained when both scales were normalized (t=3.00, df=2966, p<0.05). Overall, these results indicate that both the RAC-Q-12 and the RAC-Q-14 short-form scores are reflective of the full 20-item parent form score.

Table 4 Scale descriptive statistics

Notes: *Cases where scored responses were obtained for all RAC-Q and respective short-form items. **Exact agreement.

Abbreviations: ICC, intraclass correlation coefficient; RAC-Q, Relational Aspects of Care Questionnaire.

Approach 2 (RAC-Q-14) resulted in a higher number of retained cases, with 3215 respondents providing scored responses to the 14 items, while for approach 1 (RAC-Q-12), 3087 respondents provided scored responses to the 12 items. Scores displayed a highly skewed distribution toward the top end of the scale (best care); statistical analyses for the new scales were therefore carried out using nonparametric statistics. Significant differences were found between men and women (men reporting more positive experiences of care than women) for both scales. No differences were found between modes of completion for the RAC-Q-14; however, slight differences by mode of completion were found for the RAC-Q-12 (Table 5). The reduced indexes were not significantly correlated with age (RAC-Q-12: Spearman’s correlation=0.02, p=0.30, n=3021; RAC-Q-14: Spearman’s correlation=–0.02, p=0.34, n=3140). Kruskal–Wallis k-independent-samples tests indicated a significant difference between trusts, demonstrating that the reduced questionnaires are capable of detecting differences between trusts (Table 5).
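These group comparisons can be sketched as follows, assuming a data frame with hypothetical “score,” “sex” and “trust” columns:

```python
import pandas as pd
from scipy.stats import kruskal, mannwhitneyu

def group_comparisons(df: pd.DataFrame) -> dict:
    """Mann-Whitney U test for sex and Kruskal-Wallis test across trusts
    (column names 'score', 'sex' and 'trust' are assumed)."""
    male = df.loc[df["sex"] == "male", "score"]
    female = df.loc[df["sex"] == "female", "score"]
    u_stat, p_sex = mannwhitneyu(male, female, alternative="two-sided")

    by_trust = [group["score"].values for _, group in df.groupby("trust")]
    h_stat, p_trust = kruskal(*by_trust)
    return {"sex": (u_stat, p_sex), "trust": (h_stat, p_trust)}
```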

Table 5 Differences between sex, mode of completion (Mann–Whitney U-test of significance) and hospital trusts (Kruskal–Wallis k-independent samples test)

Notes: Data collection took place in six hospital trusts within England. Sites A–F represent these distinct locations.

Abbreviation: RAC-Q, Relational Aspects of Care Questionnaire.

Discussion

Measuring staff’s interactions with patients is an important way to assess the delivery of relational care. Monitoring these interactions using a patient self-report instrument in near real time may provide the most efficient way in which hospital staff can address any inadequacies of care.16 In addition to monitoring levels of relational care within wards, administering the RAC-Q offers hospital staff the opportunity to compare performance between wards and has the flexibility to be incorporated into existing data collections which may be ongoing within a trust.

This study aimed to address data completion and scoring challenges experienced when administering the full 20-item RAC-Q in a busy hospital setting. Two new short-form RAC-Qs were identified, the RAC-Q-12 and the RAC-Q-14, consisting of 12 and 14 items, respectively. These short-form RAC-Qs require the patient to complete fewer questions, yet were found to produce very similar results to those of the parent 20-item RAC-Q. Fewer administered items will reduce patient burden, which is particularly welcome in a busy hospital environment or where patients are in acute pain or shock. The short-form RAC-Qs also provide more scope for relational care to be monitored alongside other measures, for example, indicators of functional care. Similarly to other established short-form questionnaires,21–23 analyses confirmed that the short-form RAC-Qs have good psychometric properties, with excellent levels of internal consistency and high agreement with the parent form. Therefore, while the full 20-item RAC-Q instrument may offer slightly more precision, the short-form RAC-Qs are recommended where brevity is required. The choice between the two newly developed short-form RAC-Q instruments offers some flexibility in the item content of the questionnaire administered. This is important as, while many studies to date have concluded that using short questionnaires can improve response rates,33,34 evidence also exists indicating that questionnaire length does not always affect response rates or data quality.35 The rationale for administering a questionnaire should therefore always be based on content over length.36

A second aim of this study was to explore potential scoring options for responses to RAC-Q items that might otherwise have been excluded from analysis because their response options were left unscored.

While the RAC-Q-12 has the advantage of fewer items to administer to the patient, large numbers of unscored “not applicable” responses can limit total score interpretations. The RAC-Q-14 scoring structure retains and values “not applicable” responses, minimizing missing or “nonevaluative” data. The simple valuing of “not applicable” responses simplifies the calculation of the total score for the RAC-Q-14 and means that minimal training is needed to calculate and interpret scores. Simple scoring algorithms and the reduction in missing data are particularly advantageous in clinical settings. Multiple imputation techniques37 for handling missing responses have various pitfalls and can be complex and impractical in this context. Alternative techniques for handling missing data, such as “hot decking,” where missing responses are replaced with values obtained from a similar responder (e.g., one with similar characteristics), also encompass multiple imputation methods that can be complex and would benefit from further study to support their application.38

The RAC-Q-12 and RAC-Q-14 are short questionnaires assessing staff’s interactions with patients when delivering relational care. Questions should be applicable to all patients and have been specifically tested for their suitability among older inpatients and A&E attendees. In practice, the short-form RAC-Qs provide a resource for health care providers not only to monitor relational care but also to use the information collected to drive improvement in targeted hospital settings. Continuous electronic data collection using real-time feedback can then provide a mechanism with which to evaluate the success of initiatives introduced to address relational care. Evidence collected using these instruments may be of interest to a range of groups, including clinical staff, quality improvement teams and board members, providing assurance of the standards of relational care being delivered.

Limitations

While the RAC-Q will provide a valuable means of monitoring staff’s interactions with patients in hospital settings, it is important to note that mean reported scores within the dataset were high. This may have implications for interpreting scores and for the sensitivity of the instruments in detecting continued improvement. Nonetheless, initial analysis has demonstrated the ability of the instruments to detect differences. While results largely indicate good delivery of relational care, differences were detected between the six participating trusts and between scores reported by men and women. There was no difference between questionnaire scores and modes of administration for the RAC-Q-14, going some way toward indicating that responses do not differ between tablet computer and kiosk completion in terms of their psychometric properties. Slight differences, however, were found between scores by mode of administration for the RAC-Q-12. This may require further investigation in the future with a larger sample of kiosk completions. It is also worth noting that the two sites opting to use kiosks for questionnaire completion during the 10-month pilot study stopped using them during data collection due to operational difficulties and poor recruitment uptake. These experiences suggest that standalone kiosks may be generally unsuitable, even in instances where the instrument’s properties are compatible with tablet computer completion.

Conclusion

The RAC-Q provides a set of questions measuring staff’s interactions with patients when delivering relational care that is applicable across hospital trusts and relevant to all inpatients and those attending A&E. The RAC-Q-12 and RAC-Q-14 correlate very highly with the parent questionnaire and can be used for action planning and policy decisions within a trust, or analyzed at the individual item level.

Acknowledgments

We thank all volunteers, staff, the study team and advisory group members who worked with us. We also thank the collaborators at the six case study sites. This project was funded by the National Institute for Health Research (NIHR) Health Services and Delivery Research Program (project number 13/07/39). This article presents independent research funded by the NIHR. The views expressed are those of the authors and do not necessarily reflect those of the Health Services and Delivery Research Program, the NIHR, the NHS or the Department of Health.

Disclosure

The authors report no conflicts of interest in this work.

References

1. Department of Health. The NHS Plan: A Plan for Investment, A Plan for Reform. Norwich: HMSO; 2000.
2. Darzi A, Department of Health. High Quality Care for All: NHS Next Stage Review Final Report. London: Department of Health; 2008.
3. Department of Health. The NHS Constitution for England. London: Department of Health; 2013.
4. Berwick D. A Promise to Learn – A Commitment to Act: Improving the Safety of Patients in England. London: Williams Lea; 2013.
5. The Health Foundation. Person-Centred Care Made Simple: What Everyone Should Know About Person-Centred Care. London: The Health Foundation; 2014.
6. Paparella G. Person-Centred Care in Europe: A Cross-Country Comparison of Health System Performance, Strategies and Structures. Oxford: Picker Institute Europe; 2016.
7. Campbell J, Smith P, Nissen S, Bower P, Elliott M, Roland M. The GP Patient Survey for use in primary care in the National Health Service in the UK – development and psychometric characteristics. BMC Fam Pract. 2009;10(1):57.
8. Roberts JI, Sauro K, Jette N, et al. Using a standardized assessment tool to measure patient experience on a seizure monitoring unit compared to a general neurology unit. Epilepsy Behav. 2012;24(1):54–58.
9. Gremigni P, Sommaruga M, Peltenburg M. Validation of the Health Care Communication Questionnaire (HCCQ) to measure outpatients’ experience of communication with hospital staff. Patient Educ Couns. 2008;71(1):57–64.
10. Davies EA, Madden P, Coupland VH, Griffin M, Richardson A. Comparing breast and lung cancer patients’ experiences at a UK Cancer Centre: implications for improving care and moves towards a person centered model of clinical practice. Eur J Pers Cent Healthc. 2011;1(1):177–189.
11. Doyle C, Lennox L, Bell D. A systematic review of evidence on the links between patient experience and clinical safety and effectiveness. BMJ Open. 2013;3(1):e001570.
12. Williams AM, Irurita VF. Therapeutic and non-therapeutic interpersonal interactions: the patient’s perspective. J Clin Nurs. 2004;13(7):806–815.
13. Williams AM, Kristjanson LJ. Emotional care experienced by hospitalised patients: development and testing of a measurement instrument. J Clin Nurs. 2009;18(7):1069–1077.
14. Bridges J, Flatley M, Meyer J. Older people’s and relatives’ experiences in acute care settings: systematic review and synthesis of qualitative studies. Int J Nurs Stud. 2010;47(1):89–107.
15. Coulter A. Engaging Patients in Healthcare. Maidenhead: McGraw-Hill Education; 2011.
16. Francis R. Report of the Mid Staffordshire NHS Foundation Trust Public Inquiry. London: The Stationery Office; 2013.
17. Graham C, Käsbauer S, Cooper R, et al. An Evaluation of a Near Real-Time Survey for Improving Patients’ Experiences of the Relational Aspects of Care. Health Services and Delivery Research. Southampton: NIHR Health Technology Assessment Programme; 2017.
18. Fayers PM, Hand DJ. Factor analysis, causal indicators and quality of life. Qual Life Res. 1997;6(2):139–150.
19. Fayers PM, Hand DJ. Causal variables, indicator variables and measurement scales: an example from quality of life. J R Stat Soc A Stat Soc. 2002;165(2):233–253.
20. Käsbauer S, Cooper R, Kelly L, King J. Barriers and facilitators of a near real-time feedback approach for measuring patient experiences of hospital care. Health Policy Technol. 2016;6(1):51–58.
21. De Bruin AF, Diederiks JPM, De Witte LP, Stevens FCJ, Philipsen H. The development of a short generic version of the sickness impact profile. J Clin Epidemiol. 1994;47(4):407–418.
22. Ware JE Jr, Sherbourne CD. The MOS 36-item short-form health survey (SF-36). Conceptual framework and item selection. Med Care. 1992;30(6):473–483.
23. Bohlmeijer E, ten Klooster PM, Fledderus M, Veehof M, Baer R. Psychometric properties of the five facet mindfulness questionnaire in depressed adults and development of a short form. Assessment. 2011;18(3):308–320.
24. Dillman DA, Smyth JD, Christian LM. Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method. Hoboken, NJ: Wiley Publishing; 2014.
25. NHS Surveys [webpage on the Internet]. NHS Adult Inpatient Survey 2015. 2015. Available from: http://nhssurveys.org/survey/1641. Accessed April 1, 2017.
26. Jenkinson C, Coulter A, Gyll R, Lindstrom P, Avner L, Hoglund E. Measuring the experiences of health care for patients with musculoskeletal disorders (MSD): development of the Picker MSD questionnaire. Scand J Caring Sci. 2002;16(3):329–333.
27. Jenkinson C, Coulter A, Bruster S. The Picker Patient Experience Questionnaire: development and validation using data from in-patient surveys in five countries. Int J Qual Health Care. 2002;14(5):353–358.
28. Fabrigar LR, Wegener DT. Exploratory Factor Analysis. Oxford: Oxford University Press; 2012.
29. Nunnally J, Bernstein IH. Psychometric Theory. 3rd ed. New York: McGraw-Hill; 1994.
30. Hinkin TR. A brief tutorial on the development of measures for use in survey questionnaires. Organ Res Meth. 1998;1(1):104–121.
31. Kraja AT, Corbett J, Ping A, et al. Rheumatoid arthritis, item response theory, Blom transformation, and mixed models. BMC Proc. 2007;1(suppl 1):S116.
32. McGraw KO, Wong SP. Forming inferences about some intraclass correlation coefficients. Psychol Methods. 1996;1(1):30–46.
33. Taylor-West P, Saker J, Champion D. The benefits of using reduced item variable scales in marketing segmentation. J Market Commun. 2014;20(6):438–446.
34. Iglesias C, Torgerson D. Does length of questionnaire matter? A randomised trial of response rates to a mailed questionnaire. J Health Serv Res Policy. 2000;5(4):219–221.
35. Jenkinson C, Coulter A, Reeves R, Bruster S, Richards N. Properties of the Picker Patient Experience questionnaire in a randomized controlled trial of long versus short form survey instruments. J Public Health Med. 2003;25(3):197–201.
36. Rolstad S, Adler J, Ryden A. Response burden and questionnaire length: is shorter better? A review and meta-analysis. Value Health. 2011;14(8):1101–1108.
37. Sterne JAC, White IR, Carlin JB, et al. Multiple imputation for missing data in epidemiological and clinical research: potential and pitfalls. BMJ. 2009;338:b2393.
38. Andridge RR, Little RJA. A review of hot deck imputation for survey non-response. Int Stat Rev. 2010;78(1):40–64.
39. Murrells T, Robert G, Adams M, Morrow E, Maben J. Measuring relational aspects of hospital care in England with the ‘Patient Evaluation of Emotional Care during Hospitalisation’ (PEECH) survey questionnaire. BMJ Open. 2013;3(1):e002211.
40. Brostow DP, Hirsch AT, Kurzer MS. Recruiting older patients with peripheral arterial disease: evaluating challenges and strategies. Patient Prefer Adherence. 2015;9:1121–1128.
