  • Research article
  • Open access

Quality improvement practices used by teaching versus non-teaching trauma centres: analysis of a multinational survey of adult trauma centres in the United States, Canada, Australia, and New Zealand

Abstract

Background

Although studies have suggested that a relationship exists between hospital teaching status and quality improvement activities, it is unknown whether this relationship exists for trauma centres.

Methods

We surveyed 249 adult trauma centres in the United States, Canada, Australia, and New Zealand (76% response rate) regarding their quality improvement programs. Trauma centres were stratified into two groups (teaching [academic-based or academic-affiliated] versus non-teaching) and their quality improvement programs were compared.

Results

All participating trauma centres reported using a trauma registry and measuring quality of care. Teaching centres were more likely than non-teaching centres to use indicators whose content evaluated treatment (18% vs. 14%, p < 0.001) as well as the Institute of Medicine aim of timeliness of care (23% vs. 20%, p < 0.001). Non-teaching centres were more likely to use indicators whose content evaluated triage and patient flow (15% vs. 18%, p < 0.001) as well as the Institute of Medicine aim of efficiency of care (25% vs. 30%, p < 0.001). While over 80% of teaching centres used time to laparotomy, pulmonary complications, in-hospital mortality, and appropriate admission physician/service as quality indicators, only two of these (in-hospital mortality and appropriate admission physician/service) were used by over half of non-teaching trauma centres. The majority of centres reported using morbidity and mortality conferences (96% vs. 97%, p = 0.61) and quality of care audits (94% vs. 88%, p = 0.08), while approximately half used report cards (51% vs. 43%, p = 0.22).

Conclusions

Teaching and non-teaching centres reported being engaged in quality improvement and exhibited largely similar quality improvement activities. However, differences exist in the type and frequency of quality indicators utilized among teaching versus non-teaching trauma centres.


Background

Quality improvement programs are an important component of trauma centre and system structure [1], and have been shown to be a valuable administrative tool to strengthen the care of severely injured patients [2]. However, at present, trauma centres appear to conduct quality improvement programs of varying degrees of intensity and sophistication [3]. Despite this heterogeneity, it remains largely unknown whether quality improvement program structure and activities improve overall trauma patient outcomes [4, 5].

Conflicting evidence exists regarding the effect of hospital teaching status on patient care. Although teaching hospitals have higher volumes, which may correlate with improved outcomes [6, 7], this association may be negated by the presence of learners and their relative inexperience. Moreover, while a systematic review reported that hospital teaching status had little effect on patient outcomes, this association may depend on the disease examined [8]. Only one study has examined the association between outcomes of injured patients and teaching status. This study of splenic injury management found that teaching hospitals were more likely to attempt non-operative management, which resulted in increased rates of splenic salvage [9].

Although quality improvement has been used in trauma care for some time, there exists a gap in knowledge whether teaching trauma centres differ in their quality improvement activities relative to non-teaching centres. A recent review suggested that teaching hospitals had superior quality indicator use in terms of process measurement and other non-mortality end points relative to those used by non-teaching centres, but did not focus specifically on trauma care [10]. We recently conducted a large multinational survey specifically designed to assess the quality indicators used by trauma centres [5]. This study reports the results of a re-analysis of this survey to explore the relationship between trauma quality improvement programs and hospital teaching status.

Methods

To examine the quality of care provided to severely injured patients, we developed a conceptual model of quality indicators in trauma care that merges Donabedian’s framework of quality of care with modern systems of trauma care. This was previously described in a scoping review of quality indicators in trauma care [11]. A survey tool was developed based on the results of this scoping review and semi-structured interviews with injury and quality of care experts. The details of survey development, design, implementation, and data collection have previously been outlined [5, 12]. A copy of the survey is available online and can be found at: http://links.lww.com/TA/A93. Ethics approval was obtained from the Conjoint Health Research Ethics Board at the University of Calgary.

The original study included a voluntary, web-based cross-sectional survey of 330 trauma centre program leaders between March 10, 2009 and June 19, 2009 in the United States, Canada, Australia, and New Zealand. The survey collected information on trauma centre level of care designation, geographic location, teaching status, number and type of injured patients managed, nature of their quality improvement program, and quality indicators utilized [5]. The Internet was also searched for quality improvement data on surveyed trauma centre websites. Analyses were performed with trauma centres classified into two self-reported groups: teaching (university-based teaching setting or university-affiliated teaching setting) and non-teaching (non-teaching setting). Trauma centres were categorized as high volume according to American College of Surgeons (ACS) annual volume requirements for a Level I centre of at least 1,200 patients with any Injury Severity Score (ISS) [13], and at least 240 patients with ISS >15 [1].
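The two classification rules above can be sketched as a small helper. This is illustrative only: the setting label strings are hypothetical (the survey's exact response options are not reproduced here), and combining the two ACS volume thresholds with a logical AND is an assumption taken from this paragraph's wording.

```python
def classify_centre(setting: str, annual_patients: int, annual_iss_gt_15: int):
    """Classify a trauma centre by teaching status and volume.

    `setting` mirrors the survey's self-reported groups; the exact label
    strings here are hypothetical. "High volume" follows the ACS Level I
    thresholds quoted in the text (>= 1,200 patients of any ISS and >= 240
    patients with ISS > 15); treating the thresholds as jointly required
    is an assumption based on the paragraph's wording.
    """
    teaching = setting in {
        "university-based teaching",
        "university-affiliated teaching",
    }
    high_volume = annual_patients >= 1200 and annual_iss_gt_15 >= 240
    return teaching, high_volume
```

For example, a university-affiliated centre seeing 1,500 patients per year (300 with ISS > 15) would be classified as teaching and high volume.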

Statistical analysis

The primary analysis described and compared the quality improvement programs of trauma centres according to teaching status. Medians were used when distributions were skewed and contained several outliers. Comparisons of dichotomous responses and derivation of confidence intervals for teaching versus non-teaching trauma centres were performed using the Chi-squared test. To assess for effect measure modification/subgroup differences, we stratified these dichotomous outcomes by trauma centre accreditation/verification, ACS level of care designation, geographic location, volume, median household income of the surrounding neighborhoods, and the number of patients assessed yearly. The Mann–Whitney U-test was used for comparisons of data summarized using medians. Statistical analyses were conducted using Stata version 10 (StataCorp, College Station, TX).
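For a 2x2 table, the Chi-squared comparison of dichotomous responses described above reduces to a closed-form Pearson statistic. The sketch below illustrates this for report-card use; the cell counts are reconstructed from the reported 51% vs. 43% and group sizes (174 teaching, 75 non-teaching), so they are approximate rather than the study's actual data.

```python
def chi2_2x2(a: int, b: int, c: int, d: int) -> float:
    """Pearson chi-squared statistic for a 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    numerator = n * (a * d - b * c) ** 2
    denominator = (a + b) * (c + d) * (a + c) * (b + d)
    return numerator / denominator

# Report-card use, counts reconstructed from the reported percentages:
# teaching: ~51% of 174 -> 89 yes / 85 no
# non-teaching: ~43% of 75 -> 32 yes / 43 no
stat = chi2_2x2(89, 85, 32, 43)
# stat is about 1.51, below the df=1, alpha=0.05 critical value of 3.841,
# i.e. not statistically significant -- consistent with the reported p = 0.22.
```

This omits the Yates continuity correction that some statistical packages apply by default, which would shrink the statistic slightly without changing the conclusion here.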

Results

The survey was sent to 330 trauma centres (263 in the United States, 46 in Canada, 18 in Australia, and 3 in New Zealand) between March 10, 2009, and June 19, 2009, and 251 (76%) responded. Of the 251 centres that responded, 174 (69%) were teaching and 75 (30%) were non-teaching centres, and 2 (<1%) could not have their teaching status classified due to missing data. All the participating trauma centres reported using a trauma registry. The characteristics of the trauma centres responding to the survey are summarized in Table 1 stratified according to teaching status.

Table 1 Characteristics of trauma centres participating in survey as stratified by teaching status *

Performance measurement

The content of the quality indicators and the Institute of Medicine dimensions of care evaluated by the quality indicators are summarized in Table 2. With respect to the content of the quality indicators, teaching centres were more likely to use indicators evaluating treatment (18% vs. 14%, p < 0.001), and non-teaching centres were more likely to use indicators evaluating triage and patient flow (15% vs. 18%, p < 0.001). With respect to the Institute of Medicine dimensions of care, teaching centres were more likely to use indicators evaluating timeliness of care (23% vs. 20%, p < 0.001), and non-teaching centres were more likely to use indicators evaluating efficiency of care (25% vs. 30%, p < 0.001).

Table 2 Quality indicator use according to teaching status

The top 10 quality indicators used by teaching centres compared to non-teaching centres are summarized in Table 3. Seven of the top 10 quality indicators were common to both teaching and non-teaching centres. All 10 indicators were used by over half of the teaching centres, and more than 80% of teaching centres used the following four indicators: time to laparotomy, pulmonary complications, in-hospital mortality, and appropriate admission service/physician. Conversely, only three indicators were used by over half of the non-teaching centres, the most common being in-hospital mortality (57% of non-teaching centres).

Table 3 Top 10 quality indicators used by teaching vs. non-teaching centres *

Quality improvement

The quality improvement practices according to trauma centre teaching status are summarized in Table 4. Trauma centre quality improvement practices appeared to be similar across centres of different teaching status. The majority of teaching versus non-teaching centres reported using morbidity and mortality conferences (96% vs. 97%, p = 0.61), quality of care audits (94% vs. 88%, p = 0.08) and both internal (79% vs. 77%, p = 0.77) and external (76% vs. 69%, p = 0.22) benchmarking. Approximately half of teaching and non-teaching centres (51% vs. 43%, p = 0.22) reported using report cards. Teaching centres were more likely to participate in research.

Table 4 Quality improvement practices according to teaching status *

Subgroup analyses

The results were similar when stratified by trauma centre accreditation/verification, ACS level of care designation, geographic location, volume, median household income of the surrounding neighborhoods, and the number of patients assessed yearly.

Discussion

Teaching and non-teaching centres were both engaged in quality improvement and reported similar quality improvement activities. Small differences in the types of quality indicators used by centres were observed according to teaching status. Teaching centres were more likely to use indicators evaluating treatment and timeliness of care, while non-teaching centres were more likely to use indicators evaluating triage and patient flow as well as efficiency of care. Teaching centres also used their most common quality indicators at higher frequencies than non-teaching centres did.

The current medical literature suggests that there are no consistent differences in patient outcomes between teaching and non-teaching environments [8]. The literature is, however, limited by its observational nature, heterogeneity, and overall low quality [8]. With respect to quality improvement programs, little is known about differences between teaching and non-teaching centres. One study has suggested that teaching centres have better quality of care measures than non-teaching centres in terms of processes of care and other non-mortality outcomes [10].

Interestingly, there were few large differences documented between teaching and non-teaching centres in our study despite potentially important differences in their characteristics (e.g., level of designation, geographical location, surrounding neighborhoods, number and nature of patients). It is conceivable that, because the ACS mandates accredited trauma centres to partake in quality improvement activities, overall strategies have become somewhat homogeneous across institutions. Previously published work describes in greater detail the quality indicators (QIs) that trauma centres use for quality measurement and performance improvement [5, 14]. However, there appear to be small but potentially important differences in trauma centre performance measurement and quality improvement between teaching and non-teaching centres.

The results of our study paralleled those from a previous study analyzing trauma centre volume and quality improvement programs [12]. As would be expected, non-teaching centres were more likely to be low-volume centres located in suburban and rural settings with a higher proportion of middle-income neighbourhoods surrounding these hospitals. Teaching centres were more likely to be high-volume centres located in urban settings with a higher proportion of lower income neighbourhoods surrounding these hospitals.

Although teaching status tracked closely with volume status, differences were noted when stratifying by each of these categories. Teaching centres were more likely to use indicators for evaluating treatment and timeliness of care, whereas high-volume centres placed a greater emphasis on measurement of medical errors and adverse events, the use of guidelines and protocols, and employing report cards and benchmarking as quality improvement tools [12]. Non-teaching centres were more likely to use indicators to evaluate triage and patient flow and efficiency of care. Low-volume centres measured the same quality indicators but, in addition, were also more likely to measure effectiveness of care [12].

The top 10 quality indicators were more likely to be used by teaching centres than by non-teaching centres (>80% of teaching centres used time to laparotomy, pulmonary complications, in-hospital mortality, and appropriate admission service/physician). The quality indicators used in teaching versus non-teaching centres may reflect patterns specific to the volume of patients each encounters and the types of services available; quality indicator use is thus targeted to local quality of care challenges. For instance, 68% of teaching centres measured time to craniotomy, whereas this was not one of the top 10 quality indicators for non-teaching centres, perhaps an indication that higher volume centres have neurosurgical services available. Conversely, 41% of non-teaching centres measured trauma team activation for severely injured patients, whereas this was not one of the top 10 quality indicators for teaching centres, perhaps a reflection of the challenges smaller volume centres face in activating trauma teams consistently.

This study has several limitations, including its reliance on volunteer survey participants whose quality improvement activities may differ from those of centres that did not participate in the survey, the simplicity of the survey (high-level description of quality improvement activities), and the lack of patient outcome data relating to morbidity and mortality. Differences in performance measurement and quality improvement could be associated with patient outcomes and warrant further evaluation. Moreover, as we conducted multiple statistical tests, one or more of our observed associations could have been due to chance alone. Further studies should assess the relative importance of the different facets of quality improvement on patient outcomes and how they interact with institutional characteristics so that professional trauma organizations can accurately recommend the best quality improvement processes.

Conclusions

Our study provides the first examination of trauma centre quality improvement programs according to trauma centre teaching status. Our data indicate that most trauma centres are engaged in quality improvement, employing a diverse range of performance measures and improvement strategies. However, there appear to be small but potentially important differences in trauma centre performance measurement and quality improvement according to trauma centre teaching status.

Abbreviations

ACS:

American College of Surgeons

ISS:

Injury severity score

CIHR:

Canadian Institutes of Health Research

AIHS:

Alberta Innovates – Health Solutions

References

  1. Committee on Trauma, American College of Surgeons: Resources for Optimal Care of the Injured Patient 2006. 2006, Chicago, IL: Committee on Trauma, American College of Surgeons

  2. Juillard CJ, Mock C, Goosen J, Joshipura M, Civil I: Establishing the evidence base for trauma quality improvement: a collaborative WHO-IATSIC review. World J Surg. 2009, 33 (5): 1075-1086. 10.1007/s00268-009-9959-8.

  3. Maier RV, Rhodes M: Trauma performance improvement. Injury Control. Edited by: Rivara FP, Cummings P, Koepsell TD. 2009, Cambridge: Cambridge University Press

  4. Stelfox HT, Straus SE, Nathens A, Bobranska-Artiuch B: Evidence for quality indicators to evaluate adult trauma care: a systematic review. Crit Care Med. 2011, 39 (4): 846-859. 10.1097/CCM.0b013e31820a859a.

  5. Stelfox HT, Straus SE, Nathens A, Gruen RL, Hameed SM, Kirkpatrick A: Trauma center quality improvement programs in the United States, Canada, and Australasia. Ann Surg. 2012, 256 (1): 163-169. 10.1097/SLA.0b013e318256c20b.

  6. Demetriades D, Martin M, Salim A, Rhee P, Brown C, Chan L: The effect of trauma center designation and trauma volume on outcome in specific severe injuries. Ann Surg. 2005, 242 (4): 512-517. discussion 517-519

  7. Nathens AB, Jurkovich GJ, Maier RV, Grossman DC, MacKenzie EJ, Moore M, Rivara FP: Relationship between trauma center volume and outcomes. JAMA. 2001, 285 (9): 1164-1171. 10.1001/jama.285.9.1164.

  8. Papanikolaou PN, Christidi GD, Ioannidis JP: Patient outcomes with teaching versus nonteaching healthcare: a systematic review. PLoS Med. 2006, 3 (9): e341. 10.1371/journal.pmed.0030341.

  9. Todd SR, Arthur M, Newgard C, Hedges JR, Mullins RJ: Hospital factors associated with splenectomy for splenic injury: a national perspective. J Trauma. 2004, 57 (5): 1065-1071. 10.1097/01.TA.0000103988.66443.0E.

  10. Kupersmith J: Quality of care in teaching hospitals: a literature review. Acad Med. 2005, 80 (5): 458-466. 10.1097/00001888-200505000-00012.

  11. Stelfox HT, Bobranska-Artiuch B, Nathens A, Straus SE: Quality indicators for evaluating trauma care: a scoping review. Arch Surg. 2010, 145 (3): 286-295. 10.1001/archsurg.2009.289.

  12. Stelfox HT, Khandwala F, Kirkpatrick AW, Santana MJ: Trauma center volume and quality improvement programs. J Trauma Acute Care Surg. 2012, 72 (4): 962-967.

  13. Baker SP, O'Neill B, Haddon W, Long WB: The injury severity score: a method for describing patients with multiple injuries and evaluating emergency care. J Trauma. 1974, 14 (3): 187-196. 10.1097/00005373-197403000-00001.


Acknowledgements

We thank all of the trauma centre medical directors and program managers who participated in this study. We thank Nancy Clayden for her help with survey administration and Farah Khandwala for help with statistical analyses.

Author information

Corresponding author

Correspondence to Vikas P Chaubey.

Additional information

Competing interests

VPC has received a conference travel grant from Merck Frosst Canada Inc. for previously completed work. This work was supported by Partnerships in Health System Improvement Grant (PHE-91429) from the Canadian Institutes of Health Research (CIHR) and Alberta Innovates – Health Solutions (AIHS). DJR is supported by an AIHS Clinician Fellowship Award and funding from the Clinician Investigator and Surgeon Scientist Programs at the University of Calgary. NJHB is supported by an AIHS Graduate Studentship. HTS is supported by a New Investigator Award from CIHR and a Population Health Investigator Award from AIHS.

Authors’ contributions

VPC, DJR and HTS contributed to the study design, literature review, data analysis/interpretation, drafting of the manuscript and making critical revisions. NJHB and MBF contributed towards the study design, data analysis/interpretation and making critical revisions. All authors read and approved the final manuscript.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Chaubey, V.P., Roberts, D.J., Ferri, M.B. et al. Quality improvement practices used by teaching versus non-teaching trauma centres: analysis of a multinational survey of adult trauma centres in the United States, Canada, Australia, and New Zealand. BMC Surg 14, 112 (2014). https://0-doi-org.brum.beds.ac.uk/10.1186/1471-2482-14-112
