
Clinical indicators for common paediatric conditions: Processes, provenance and products of the CareTrack Kids study

  • Louise K. Wiles,

    Roles Data curation, Formal analysis, Writing – original draft, Writing – review & editing

    Affiliations Australian Centre for Precision Health, School of Health Sciences, Cancer Research Institute, University of South Australia, Adelaide, South Australia, Australia, Centre for Healthcare Resilience and Implementation Science, Australian Institute of Health Innovation, Faculty of Medicine and Health Sciences, Macquarie University, Sydney, New South Wales, Australia, South Australian Health and Medical Research Institute (SAHMRI), Adelaide, South Australia, Australia

  • Tamara D. Hooper,

    Roles Data curation, Writing – original draft, Writing – review & editing

    Affiliations Australian Centre for Precision Health, School of Health Sciences, Cancer Research Institute, University of South Australia, Adelaide, South Australia, Australia, Centre for Healthcare Resilience and Implementation Science, Australian Institute of Health Innovation, Faculty of Medicine and Health Sciences, Macquarie University, Sydney, New South Wales, Australia, South Australian Health and Medical Research Institute (SAHMRI), Adelaide, South Australia, Australia

  • Peter D. Hibbert,

    Roles Conceptualization, Data curation, Funding acquisition, Methodology, Project administration, Writing – original draft, Writing – review & editing

    Affiliations Australian Centre for Precision Health, School of Health Sciences, Cancer Research Institute, University of South Australia, Adelaide, South Australia, Australia, Centre for Healthcare Resilience and Implementation Science, Australian Institute of Health Innovation, Faculty of Medicine and Health Sciences, Macquarie University, Sydney, New South Wales, Australia, South Australian Health and Medical Research Institute (SAHMRI), Adelaide, South Australia, Australia, Centre for Health Systems and Safety Research, Australian Institute of Health Innovation, Faculty of Medicine and Health Sciences, Macquarie University, Sydney, New South Wales, Australia, Australian Patient Safety Foundation, Adelaide, South Australia, Australia, Centre for Health Informatics, Australian Institute of Health Innovation, Faculty of Medicine and Health Sciences, Macquarie University, Sydney, New South Wales, Australia

  • Charlotte Molloy,

    Roles Writing – original draft, Writing – review & editing

    Affiliations Australian Centre for Precision Health, School of Health Sciences, Cancer Research Institute, University of South Australia, Adelaide, South Australia, Australia, Centre for Healthcare Resilience and Implementation Science, Australian Institute of Health Innovation, Faculty of Medicine and Health Sciences, Macquarie University, Sydney, New South Wales, Australia

  • Les White,

    Roles Conceptualization, Funding acquisition, Methodology

    Affiliations Centre for Healthcare Resilience and Implementation Science, Australian Institute of Health Innovation, Faculty of Medicine and Health Sciences, Macquarie University, Sydney, New South Wales, Australia, Discipline of Paediatrics, School of Women’s and Children’s Health, University of New South Wales, Sydney, New South Wales, Australia, Sydney Children’s Hospital, Sydney Children’s Hospitals Network, Randwick, Sydney, New South Wales, Australia, New South Wales Ministry of Health, North Sydney, Sydney, New South Wales, Australia

  • Adam Jaffe,

    Roles Conceptualization, Funding acquisition, Methodology, Writing – review & editing

    Affiliations Discipline of Paediatrics, School of Women’s and Children’s Health, University of New South Wales, Sydney, New South Wales, Australia, Department of Respiratory Medicine, Sydney Children’s Hospital, Sydney Children’s Hospitals Network, Randwick, Sydney, New South Wales, Australia

  • Christopher T. Cowell,

    Roles Conceptualization, Funding acquisition, Methodology, Writing – review & editing

    Affiliations Sydney Medical School, University of Sydney, Sydney, New South Wales, Australia, Institute of Endocrinology and Diabetes, Children’s Hospital at Westmead, Sydney Children’s Hospitals Network, Westmead, Sydney, New South Wales, Australia

  • Mark F. Harris,

    Roles Conceptualization, Funding acquisition, Methodology, Writing – review & editing

    Affiliation Centre for Primary Health Care and Equity, Faculty of Medicine, University of New South Wales, Sydney, New South Wales, Australia

  • William B. Runciman,

    Roles Methodology, Writing – review & editing

    Affiliations Australian Centre for Precision Health, School of Health Sciences, Cancer Research Institute, University of South Australia, Adelaide, South Australia, Australia, Centre for Healthcare Resilience and Implementation Science, Australian Institute of Health Innovation, Faculty of Medicine and Health Sciences, Macquarie University, Sydney, New South Wales, Australia, South Australian Health and Medical Research Institute (SAHMRI), Adelaide, South Australia, Australia, Centre for Health Systems and Safety Research, Australian Institute of Health Innovation, Faculty of Medicine and Health Sciences, Macquarie University, Sydney, New South Wales, Australia, Australian Patient Safety Foundation, Adelaide, South Australia, Australia

  • Annette Schmiede,

    Roles Writing – review & editing

    Affiliation BUPA Health Foundation Australia, Sydney, New South Wales, Australia

  • Chris Dalton,

    Roles Writing – review & editing

    Affiliation BUPA Health Foundation Australia, Sydney, New South Wales, Australia

  • Andrew R. Hallahan,

    Roles Writing – review & editing

    Affiliation Children’s Health Queensland Hospital and Health Service, South Brisbane, Brisbane, Queensland, Australia

  • Sarah Dalton,

    Roles Writing – review & editing

    Affiliations New South Wales Ministry of Health, North Sydney, Sydney, New South Wales, Australia, New South Wales (NSW) Agency for Clinical Innovation (ACI), Chatswood, Sydney, New South Wales, Australia

  • Helena Williams,

    Roles Writing – review & editing

    Affiliations Russell Clinic, Blackwood, Adelaide, South Australia, Australia, Australian Commission on Safety and Quality in Health Care, Sydney, New South Wales, Australia, Southern Adelaide Local Health Network, Bedford Park, Adelaide, South Australia, Australia, Cancer Australia, Surry Hills, Sydney, New South Wales, Australia, Adelaide Primary Health Network, Mile End, Adelaide, South Australia, Australia, Country SA Primary Health Network, Nuriootpa, Adelaide, South Australia, Australia

  • Gavin Wheaton,

    Roles Writing – review & editing

    Affiliation Division of Paediatric Medicine, Women’s and Children’s Health Network, Adelaide, South Australia, Australia

  • Elisabeth Murphy,

    Roles Writing – review & editing

    Affiliation New South Wales Ministry of Health, North Sydney, Sydney, New South Wales, Australia

  •  [ ... ],
  • Jeffrey Braithwaite

    Roles Conceptualization, Funding acquisition, Methodology, Supervision, Writing – original draft, Writing – review & editing

    jeffrey.braithwaite@mq.edu.au

    Affiliation Centre for Healthcare Resilience and Implementation Science, Australian Institute of Health Innovation, Faculty of Medicine and Health Sciences, Macquarie University, Sydney, New South Wales, Australia


Abstract

Background

To determine the extent to which care delivered to children in Australia is appropriate (in line with evidence-based care and/or clinical practice guidelines (CPGs)), we developed a set of clinical indicators for 21 common paediatric medical conditions for use across a range of primary, secondary and tertiary healthcare practice facilities.

Methods

Clinical indicators were extracted from recommendations found through systematic searches of national and international guidelines, and formatted with explicit criteria for inclusion, exclusion, time frame and setting. Experts reviewed the indicators using a multi-round modified Delphi process and collaborative online wiki to develop consensus on what constituted appropriate care.

Results

From 121 clinical practice guidelines, 1098 recommendations were used to draft 451 proposed appropriateness indicators. In total, 61 experts (n = 24 internal reviewers, n = 37 external reviewers) reviewed these indicators over 40 weeks. A final set of 234 indicators resulted, from which 597 indicator items suitable for medical record audit were derived. Most indicator items were geared towards capturing information about under-use in healthcare (n = 551, 92%) across emergency department (n = 457, 77%), hospital (n = 450, 75%) and general practice (n = 434, 73%) healthcare facilities, and were based on consensus-level recommendations (n = 451, 76%). The main reason for rejecting indicators was lack of ‘feasibility’ (i.e. that compliance with ‘appropriate care’ was unlikely to be determinable from a medical record audit).

Conclusion

A set of indicators was developed for the appropriateness of care for 21 paediatric conditions. We describe the processes (methods), provenance (origins and evolution of indicators) and products (indicator characteristics) of creating clinical indicators within the context of Australian healthcare settings. Developing consensus on clinical appropriateness indicators using a Delphi approach and collaborative online wiki has methodological utility. The final indicator set can be used by clinicians and organisations to measure and reflect on their own practice.

Introduction

Despite efforts aimed at achieving quality, equity, and sustainability of healthcare systems[1–5], gaps remain between the care that is recommended (appropriate care, in line with evidence and/or clinical practice guidelines (CPGs)) and that which is delivered[6, 7]. To prioritise resources and develop strategies to address the inappropriateness of and variations in care[8], national measurement and monitoring activities are needed to capture what population-level care is given, and by whom[9–12]. Internationally, clinical standards and indicators are increasingly being used to identify gaps and areas for improvement, and to understand and measure the quality of healthcare provided[13–22]. Data evaluating the appropriateness of healthcare for children are limited, especially at the population level[23, 24].

Interventions aimed at delivering care in line with CPGs mostly report limited or variable success[25–31]. However, there is some evidence that both compliance with accepted care processes and favourable clinical outcomes are possible[26, 27, 32–35], and may be facilitated by multi-faceted nationally-based initiatives using clinical indicator-based adherence approaches coupled with audit and feedback[36–41].

Clinical indicators can be developed using one of three main systematic approaches[42–44]: evidence, such as using scientific data from clinical trials[45–47]; combining evidence with consensus, such as a Delphi technique[17, 48] or RAND appropriateness method[23, 49, 50]; and CPG-driven derivation from recommendations in current CPGs[19, 51, 52]. While the merits and demerits of different clinical indicator development methods have been explored[53–56], the trend for contemporary indicator development centres on employing hybrid approaches[22, 57, 58].

As interpretation of performance measured by clinical indicators can have far-reaching consequences (e.g. public reporting, pay-for-performance), it is important to ensure that they are developed in a way that reflects what is reasonably expected of clinicians[57]. For the purposes of a medical record audit, for example, this may be best achieved by combining consensus-based (i.e. stakeholder perspectives / expert opinion) and CPG-driven (i.e. recommendations to clinicians to guide care) approaches. The Delphi technique, a structured process comprising several rounds of review to gather stakeholder perspectives until consensus is reached[17], can be used to vet recommendations derived from CPGs to develop healthcare quality indicators[17]. While a number of studies have used combined methods to create clinical indicators, most of these focus on a single condition[58, 59], clinical area or care process[60, 61] or healthcare setting[62, 63].

Building on the findings and experience of the CareTrack Australia study[11, 51], the objective of the CareTrack Kids (CTK) study was to determine the appropriateness of healthcare delivered to children in Australia for common conditions[64]. This paper describes a core component of the CTK study: the development of a set of clinical appropriateness indicators for common paediatric conditions for use across a range of healthcare practice facilities, including primary care provided by general practitioners, secondary care provided by outpatient paediatricians and tertiary care in hospitals[65]. Our method married recognised Delphi processes[17] with a collaborative online wiki[66] to achieve consensus on what constitutes appropriate care, using CPGs as a primary information source. We report on this indicator development, detailing the processes (panel recruitment; methods of indicator development and criteria for selection), provenance (origin and evolution of original recommendations and indicators throughout the study, including reasons for exclusion) and products (characteristics of the final set of indicators, including linking these to evidence levels and grades of recommendations).

Methods

The three components of our indicator development work[65] were to (1) identify and select common candidate paediatric conditions (presented in our study protocol[65]), (2) develop clinical indicators representative of “appropriate care” for these conditions, and (3) refine them for feasibility, applicability and utility. Our approach has been described in our study protocol[65]. Terms used in CTK and their definitions are presented in Box 1. This study was approved by the Macquarie University Human Research Ethics Committee (protocol 5201401120)[65].

Box 1. Definitions of terms used in the CareTrack Kids study.

A clinical practice guideline (CPG):

“Statements that include recommendations intended to optimise patient care that are informed by a systematic review of evidence and an assessment of the benefits and harms of alternative care options.”[67, 68]

A clinical standard[11]:

  • is an agreed process that should be undertaken or an outcome that should be achieved for a particular circumstance, symptom, sign or diagnosis (or a defined combination of these)
  • should be evidence-based, specific, feasible to apply, easy and unambiguous to measure, and produce a clinical benefit and/or improve the safety and/or quality of care, at least at the population level.

If a standard cannot or should not be complied with, the reason/s should be briefly stated.

A clinical indicator[11]:

  • describes a measurable component of the standard, with explicit criteria for inclusion, exclusion, time frame and setting.

A clinical tool[11, 6972]:

  • should implicitly or explicitly incorporate a standard or a component of a standard
  • should constitute a guide to care that facilitates compliance with the standard
  • should be easy to audit, preferably electronically, to provide feedback
  • should be able to be incorporated into workflows and medical records.

Appropriate care[11, 51, 73]:

  • care in line with evidence-based or consensus-based guidelines

A wiki[66, 74, 75]

  • is an interactive information management system which allows users (e.g. healthcare professionals who register for, and log in to, the wiki) to collaborate directly in formulating and refining indicators that are relevant to their clinical practice and lived experience.

Underuse[76]

“Failure to deliver a service that is highly likely to improve the quality or quantity of life, that represents good value for money, and that patients who were fully informed of its potential benefits and harms would have wanted.”

Overuse[76]

“Provision of a service that is unlikely to increase the quality or quantity of life, that poses more harm than benefit, or that patients who were fully informed of its potential benefits and harms would not have wanted.”

We initially identified 21 common paediatric conditions (Box 2). Clinical indicators representative of their appropriate care were developed, using a four-stage process:

  1. systematically search for and source relevant CPGs;
  2. select, draft and format proposed clinical indicators;
  3. review indicators internally and externally (via a modified Delphi approach); and
  4. refine and convert indicators to individual medical record audit indicator items suitable for use in a wide range of circumstances[64].

Box 2. Paediatric conditions included in the indicator development process in the CareTrack Kids study (n = 21)

Acronym   Condition
ABDO      Acute abdominal pain
ADHD +    Attention Deficit Hyperactivity Disorder
AGE       Acute gastroenteritis
ANXI +    Anxiety
ASTH +    Asthma
AUTI      Autism
BRON      Acute bronchiolitis
CROU      Croup
DEPR +    Depression
DIAB +    Diabetes
ECZE      Eczema
FEVE      Fever
GORD      Gastro-Oesophageal Reflux Disease
HEAD +    Head injury
OBES +    Obesity
OTIT      Otitis media
PREV +    Preventive care
SEIZ      Seizures
TONS      Tonsillitis
URTI      Upper Respiratory Tract Infection
URIN      Urinary Tract Infection

+ denotes a National Health Priority Area (NHPA)

Stage 1 Search and source relevant CPGs

Clinical indicators were derived from published CPGs relevant for 2012 and 2013. A systematic search was undertaken, in order of priority, for national-level CPGs from Australia (e.g. from the National Health and Medical Research Council (NHMRC))[77], and internationally[78–80]. In the absence of Australian national or international CPGs, those from relevant professional medical colleges and associations were examined, as well as those from state jurisdictional bodies or professional groups[81, 82]. Three Research Group members (LKW, TDH, PDH) conducted the CPG searches and developed the initial set of clinical indicators. Full details of the search strategy are provided in online S1 Table of our protocol paper[65].

Stage 2 Select, draft and format the proposed indicators

Recommendations from each CPG were extracted verbatim, along with their documented grade or level of evidence, and compiled in a Microsoft Excel spreadsheet. Where more than one guideline made the same recommendation, we recorded all grades and levels of evidence. Similar recommendations across CPGs were grouped together to minimise duplication. Not all published recommendations became indicators; drawing on our experience of developing and ‘field-testing’ 522 indicators in the CareTrack Australia (adult) study[51], we applied a set of exclusion criteria (Table 1) based on:

  • Strength/certainty of the wording of the recommendation (i.e. “may”, “could” and “consider” statements were excluded)
  • Low likelihood of information being documented in the medical record
  • Guiding statements without recommended actions
  • Out of scope of the CTK study (i.e. “structure-level” recommendations aimed at attributes of the settings in which care is delivered[12, 43]).
Table 1. Examples of current recommendations which would meet exclusion criteria.

https://doi.org/10.1371/journal.pone.0209637.t001
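As an illustration only (the study applied these screens manually), the wording-based exclusion criteria lend themselves to a simple programmatic filter. The following minimal Python sketch assumes hypothetical recommendation records with illustrative "text" and "type" fields:

```python
import re

# Hypothetical recommendation records; field names and texts are illustrative only.
recommendations = [
    {"text": "Children with croup should be assessed for severity.", "type": "action"},
    {"text": "Clinicians may consider a chest X-ray in atypical cases.", "type": "action"},
    {"text": "Asthma is a common chronic condition of childhood.", "type": "guiding"},
]

# Weak-wording terms named in the exclusion criteria above.
WEAK_WORDING = re.compile(r"\b(may|could|consider)\b", re.IGNORECASE)

def passes_exclusion_screen(rec):
    """Return True if a recommendation survives the wording and scope screens."""
    if WEAK_WORDING.search(rec["text"]):
        return False  # excluded: strength/certainty of wording
    if rec["type"] == "guiding":
        return False  # excluded: guiding statement without a recommended action
    return True

candidates = [r for r in recommendations if passes_exclusion_screen(r)]
print(f"{len(candidates)} of {len(recommendations)} recommendations retained")
```

The remaining criteria (likelihood of documentation in the medical record, scope of the study) require clinical judgement and are not captured by such a filter.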

All clinical indicators were written in a structured and standardised format, commencing with the inclusion criteria followed by the compliance action[51]. For example, the inclusion criteria defined the age group (infant, child, adolescent), condition, and phase of care (at diagnosis/presentation, or “with”, indicating an existing diagnosis). The compliance action defined the recommended appropriate care. Indicators were arranged chronologically according to phases of care (Table 2).

Table 2. Examples of clinical indicators from CareTrack Australia [51] that were written in a structured and standardised format.

https://doi.org/10.1371/journal.pone.0209637.t002
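To make the structured format concrete, a minimal sketch of how an indicator could be represented as a typed record is shown below; the field names are our own illustrative assumptions, modelled loosely on the Table 2 examples, and not the study's actual data schema.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ClinicalIndicator:
    """Structured format: inclusion criteria first, then the compliance action."""
    condition: str
    age_group: str                 # e.g. "infant", "child", "adolescent"
    phase_of_care: str             # e.g. "at presentation" or "with" (existing diagnosis)
    inclusion_criteria: List[str]  # define the eligible patients/encounters
    compliance_action: str         # the recommended appropriate care

# Illustrative example only.
example = ClinicalIndicator(
    condition="asthma",
    age_group="child",
    phase_of_care="at presentation",
    inclusion_criteria=["presented to an emergency department with acute asthma"],
    compliance_action="had oxygen saturation measured",
)
```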

Stage 3 Subject the indicators to several rounds of internal and external review

The indicator review involved two stages (Fig 1). The proposed clinical indicators were subjected to an internal review (Stage 3a), followed by an external wiki-style review (Stage 3b) using a modified Delphi process[87]. This multi-round, multi-modality approach aimed to enhance methodological rigor and optimise consensus with respect to the content and face validity of the final set of clinical indicators[17]. We conducted three rounds in the internal review (Stage 3a) and two rounds in the external review (Stage 3b).

Fig 1. Overview of the internal and external indicator review process.

https://doi.org/10.1371/journal.pone.0209637.g001

Stage 3a Internal review processes

In accordance with selection strategies employed within the Delphi process literature[17, 88], internal reviews were conducted by paediatricians and general practitioners identified through the research team and their professional networks. Clinical Champions, who led the internal review panels, were employed as the head or director of a relevant paediatric department in a large hospital, held at least an adjunct academic appointment, or were directly involved in clinical care. The clinical indicators for each condition were reviewed via email by a panel of at least three reviewers. Reviewers completed their assignments independently to minimise bias from “group-think”[89, 90].

The review criteria were based on methods developed in previous US and Australian studies[23, 49, 51]. Internal reviewers were asked to: score each indicator using one of three responses (yes, no, not applicable to area of expertise or clinical setting) against three key criteria: acceptability, feasibility and impact (Box 3)[65]; recommend each indicator for inclusion or exclusion; and provide any additional comments. Research Group members (LKW, TDH, PDH) collated the feedback and revised the content, structure, and format of each indicator between review rounds.

Box 3. Information for scoring criteria for clinical indicators[65]

Acceptability (A)

  • Level of evidence or grade of recommendations. In some instances a level of evidence or grade of recommendation may not have been provided. In these cases, absence of evidence should not be the only grounds for exclusion of the indicator (i.e. expert consensus may be acceptable).
  • Non-Australian clinical guideline recommendations. There are some indicators where the primary source is a non-Australian clinical guideline from a reputable organisation (e.g. NICE). In the absence of Australian guidelines, it is important to consider whether such a guideline reflects what is practical within the context of Australian healthcare settings.
  • Non-national Australian clinical guideline recommendations. In the absence of nationally-based Australian AND international guidelines, some indicators have been sourced using guidelines from one state or organisation e.g. NSW Health, or Royal Children’s Hospital in Melbourne.
  • Recommendation is made in more than one clinical guideline.
  • Reflects “essential” (i.e. independent of resources) Australian clinical practice during 2012 and 2013.

Feasibility (F)

  • Indicators with multiple eligibility criteria tend to have lower numbers of eligible encounters.
  • Compliance can be determined preferentially from one encounter with one healthcare provider, or at least within a 1–2 year period (our sample will be the medical records of healthcare encounters for children during the 2012–2013 period).
  • Likely to be documented in the medical record, for example: indicators associated with lifestyle or exercise advice are less likely to be documented.

Impact (I)

  • “High impact” on the patient in terms of domains of quality i.e. safety, effectiveness, patient experience, or access.
  • “High impact” within Australian healthcare settings (e.g. what will be the frequency/ prevalence of presentation).

Stage 3b External wiki-based review

External reviews were conducted by invited paediatricians and general practitioners. Relevant medical colleges, professional associations and networks were contacted, requesting assistance with the recruitment of clinical experts to register as external reviewers. Invitations comprised direct email requests to members, media releases and articles within newsletters. Clinical experts self-nominated as reviewers for one or more of the CTK conditions based on their interest, scope of practice and clinical experience[17, 88]. All reviewers were required to complete a Conflict of Interest (COI) declaration[91–93]; COIs were recorded for each reviewer and managed according to the NHMRC protocol[94].

Indicators for each condition (from round three of the internal review) were posted to an online wiki site. The aim was for each condition to be independently reviewed by a minimum of nine experts. In addition to the scoring criteria used in the internal review process, indicators were scored on a nine-point Likert scale as representative of appropriate care delivered to children during 2012 and 2013[23, 51, 95]. With the support of a Research Group member as a wiki site Administrator, the Clinical Champion for each condition followed up and managed external reviewers’ responses, and made final recommendations regarding the inclusion, content, structure and format of indicators (Table 3). For most conditions, the Clinical Champion’s role was undertaken by one of the Stage 3a internal reviewers. In the second wiki round, experts had access to de-identified comments from the first round.

Table 3. Clinical champion management of external reviewers’ responses.

https://doi.org/10.1371/journal.pone.0209637.t003

Consensus business rules.

For the Stage 3 internal and external reviews, consensus was defined as a majority agreeing to include or exclude; when a clear majority could not be achieved, we opted to retain the indicator and subject it to additional feedback over subsequent rounds of review. To facilitate consensus, the Research Group, wiki Administrator and Clinical Champion used comments fields to provide indicator reviewers with a summary of the feedback obtained to date, where relevant.
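A minimal sketch of this business rule in Python follows; in the study the rule was applied by the Research Group and Clinical Champions rather than by software.

```python
def consensus_decision(votes):
    """Apply the consensus business rule to one round of reviewer votes.

    votes: list of "include" / "exclude" decisions from the panel.
    Returns "include", "exclude", or "retain" (carry over to the next round).
    """
    include = votes.count("include")
    exclude = votes.count("exclude")
    if include > exclude:
        return "include"
    if exclude > include:
        return "exclude"
    return "retain"  # no clear majority: keep the indicator for further feedback

print(consensus_decision(["include", "include", "exclude"]))  # -> include
print(consensus_decision(["include", "exclude"]))             # -> retain
```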

Stage 4 Refine and convert indicators to individual medical record audit indicator items

During the Stage 4 refinement process, we flagged indicators with an appropriateness score of less than 7, or with more than three inclusion criteria (as these were likely to have lower prevalence), and sought the condition Clinical Champion’s approval for their exclusion. For all indicators approved for inclusion, the Research Group converted each inclusion criterion and compliance action into an individual medical record audit indicator item and formatted these such that “not applicable” (i.e. the medical record did not meet the indicator’s inclusion criteria) or binary responses (yes/no) could be recorded (S1 Table). We also analysed the final set of indicators to ensure all phases of care were covered, and that the relevant indicators were ‘feasible’ for the main medical record audit[64]. Each Clinical Champion checked the individual medical record audit indicator items for their nominated condition to ensure the ‘spirit’ of the original recommendations and reviewers’ feedback from previous rounds had been accurately captured.
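The flagging rule and the item conversion can be sketched as follows. How the appropriateness score was aggregated across reviewers is not specified above, so the use of the median here is an assumption, as is the record structure:

```python
from statistics import median

def flag_for_champion_review(scores, inclusion_criteria):
    """Flag an indicator for possible exclusion: appropriateness score below 7
    on the nine-point scale, or more than three inclusion criteria (likely to
    yield fewer eligible encounters).  Median aggregation is an assumption."""
    return median(scores) < 7 or len(inclusion_criteria) > 3

def to_audit_items(indicator):
    """Convert each inclusion criterion and the compliance action into an
    individual audit item answerable as yes / no / not applicable."""
    parts = indicator["inclusion_criteria"] + [indicator["compliance_action"]]
    return [{"item": p, "responses": ("yes", "no", "not applicable")} for p in parts]

# Illustrative example only.
indicator = {
    "inclusion_criteria": ["child", "presented with acute asthma"],
    "compliance_action": "oxygen saturation was measured",
}
print(flag_for_champion_review([8, 7, 9], indicator["inclusion_criteria"]))  # False
print(len(to_audit_items(indicator)))  # 3 audit items
```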

Data analysis

Reviewers’ scores and comments from the internal (manually entered) and external (exported from the wiki) reviews were entered into Microsoft Excel (2013) spreadsheets. Study data were analysed and reported as descriptive statistics for the resultant processes, provenance and products.
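As a worked example of the kind of descriptive statistics reported below, the headline percentages in the Results can be reproduced from the published counts (n out of the 597 audit items):

```python
# Counts taken from the Results section; percentages are n / 597.
TOTAL_ITEMS = 597
item_counts = {
    "under-use": 551,             # reported as 92%
    "emergency department": 457,  # reported as 77%
    "hospital": 450,              # reported as 75%
    "general practice": 434,      # reported as 73%
}

for category, n in item_counts.items():
    print(f"{category}: n = {n} ({n / TOTAL_ITEMS:.0%})")
```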

Results

Processes

A panel of 24 experts completed the internal review; each condition was allocated at least three reviewers, and each reviewer undertook reviews for no more than three conditions. For the external review, 79 participants registered and were approved for the wiki site; 37 (47%) undertook the Round 1 review for their nominated condition(s), and 24 (30%) went on to complete Round 2. The demographic characteristics of indicator reviewers are presented in Table 4. In the external review, there was a mean of 5 (SD 2.7) reviewers per condition (range 1–14; median 4) (S1 Table).

Provenance

We identified 113 relevant CPGs with supporting references, from which 1432 original recommendations were extracted. Over one-fifth of extracted recommendations (n = 334, 23%) were initially excluded by the Research Group (S2 Table). The information contained in some of these exclusions was covered in other recommendations which were included in our sample (n = 86 of 334, 26% of guiding statements; and n = 3, 4% of those excluded due to strength/certainty of wording) (S2 Table). In addition, a small proportion of the excluded guiding statements (n = 3, 1%) were incorporated into definitions which were to be provided to research surveyors to assist them in completing the medical record audit.

The remaining original recommendations (n = 1098) were used to draft 451 proposed indicators for circulation among the internal review panel members (Fig 2). Following three rounds of internal review, almost half were rejected (n = 206, 46%), mainly due to concerns around the feasibility of capturing indicator information by way of a medical record audit (e.g. the likelihood of the necessary documentation being present) (S3 Table). During the internal review, 245 indicators were approved for posting to the wiki for the external review, together with 21 ‘new’ indicators (S4 Table) developed by splitting existing indicators which contained more than one eligibility criterion and/or compliance action. The external review yielded 257 indicators (97%), with the main reasons for exclusion mirroring those from the internal review (Fig 2). S5 Table and Fig 3 present, by condition, the evolution of numbers of indicators over the development process, from original recommendations to the final indicators and medical record audit indicator items.

Fig 2. Provenance of CPGs, original recommendations, indicators and medical record audit indicator questions.

* Some indicators were rejected for more than one reason.
^ ‘Appropriateness’ score less than seven out of nine.

https://doi.org/10.1371/journal.pone.0209637.g002

Fig 3. Evolution of the total number of indicators over the development process, from original recommendations to the final medical record audit indicators and indicator items.

https://doi.org/10.1371/journal.pone.0209637.g003

Products: Indicators and medical record audit indicator items

The final 234 indicators were used to develop 597 individual medical record audit indicator items for the medical audit review (Table 5, S1 Table). In terms of classification according to phase of care, most medical record audit indicator items related to ‘treatment’ (n = 273, 46%) (S6 Table). Most items were geared towards capturing information about under-use in healthcare (n = 551, 92%) across emergency department (n = 457, 77%), hospital (n = 450, 75%) and general practice (n = 434, 73%) healthcare facilities, and were based on consensus-level recommendations (n = 451, 76%).

Table 5. Examples of indicators with multiple inclusion criteria and/or compliance actions being converted into individual medical record items.

https://doi.org/10.1371/journal.pone.0209637.t005

Discussion

To our knowledge, this is the first study to detail the processes, provenance and products of developing clinical indicators of appropriate care for a range of common paediatric conditions for use across Australian primary, secondary and tertiary healthcare practice facilities. Paediatric indicator development studies over the last decade have focused on fewer paediatric conditions[23, 96, 97] or specific types of illnesses and/or healthcare practice facilities[96–99]. Our methodology was strengthened by employing a transparent, multi-stage and multi-modality modified-Delphi process which aimed to contextualise the recommendations published in CPGs (including scientific evidence) to the clinical setting (expert opinion). The Delphi procedure was reported in accordance with current recommendations[17] (S7 Table). Using our approach and definitions, we were able to achieve consensus on appropriate care for 21 paediatric conditions (in Australia for the years 2012–2013), and embody these in clinical indicators.

The main reason for excluding indicators was feasibility, which encompassed multiple eligibility criteria, compliance unlikely to be determinable during a medical record audit, and a low likelihood of information being documented (Box 3, Fig 2); the use of this criterion is supported by internationally-recognised organisations[22] and the literature[17]. A potential consequence is that using ‘feasibility’ as an eligibility criterion may drag the standard of measuring what is deemed appropriate care down towards the care we expect to be documented, rather than that which should be delivered. Recommendations were also excluded due to the strength/certainty of their wording (e.g. “may”, “could”, “consider” statements), which means that our indicator set did not cover aspects of care that may be influenced by situational factors and/or patient preferences; this presents a gap in their clinical utility. A first step to capturing information about these aspects of care is improving the detail and consistency of clinicians’ documentation (e.g. consideration of differential diagnoses, decision making based on situational factors and/or patient preferences). In the future, as standardised electronic medical records become more commonplace and sophisticated, this may be facilitated by structured and mandatory fields of entry[11], as well as by shared access and decision-making between patients and clinicians using integrated electronic apps and medical record software to inform, guide and record care, especially variations in care as a result of situational factors or preferences[100, 101].

Application of the CTK indicators for research purposes has been described in the main results paper[102] and a condition-specific analysis for tonsillitis[103]. While originally developed for use in a large-scale research medical record audit, the CTK indicator set can be used by clinicians and organisations to measure and reflect on their own local practice (Table 6). In this way, data can be aggregated by individuals or groups of practitioners, hospital departments and local or jurisdictional health networks to determine baseline adherence with recommended care[104–106], and to identify and target practice gaps with professional development and other quality improvement activities. For aspects of care that are not covered by the current CTK indicator set, supplementary data collection methods such as case studies, patient satisfaction surveys, narrative-text analyses of clinical notes, and clinician/patient interviews may need to be considered[107].

Table 6. Guidance on the clinical application of the CTK indicators in a medical record audit.

https://doi.org/10.1371/journal.pone.0209637.t006

There are several caveats to our findings. First, the final set of clinical indicator items was based on recommendations in CPGs relevant for the years 2012–2013, with priority given to those published in Australia. While this limits the applicability and generalisability of the indicators beyond these contexts, they do provide a basis from which new indicators may be derived and adapted to local settings. We did not critically appraise the quality of included CPGs; for three conditions (i.e. acute abdominal pain, head injury, and preventive care) we were unable to identify CPGs where “a systematic review of evidence and an assessment of the benefits and harms of alternative care options” had been undertaken by the CPG developers to inform the recommendations[67, 68]. As a result, we accepted CPGs and protocols for managing care produced at state, national or local hospital level, which means that there was little depth to the evidence base underpinning the indicator sets for these three conditions. Grades of recommendation and levels of evidence were recorded verbatim from included CPGs. CPGs did not consistently report sufficient information about the primary evidence or decision-making used to formulate recommendations to allow the author team to uniformly apply an established evidence grading system, such as GRADE[108], to extracted recommendations. As a reflection of the CPGs from which they were derived, the majority of medical record audit indicator items were based on consensus-level recommendations and pertained to under-use (S6 Table). However, in recent years there has been growing awareness of ‘overuse’ of healthcare resources[6, 9, 76, 104, 105], perceived as a source not only of waste but of healthcare-related harm[11, 106]. We found that 45 (8%) of our indicator items sought to evaluate some aspect of over-use. A range of national standards and accreditation processes (e.g. Choosing Wisely[109]) are working to champion the reduction of over-use, unwarranted healthcare variation[3, 14] and low-value care[9, 15].

Second, the indicators represent the opinions of individuals who chose to participate in this study. Internal review panel members were non-randomly selected for invitation, and external reviewers were targeted through relevant medical colleges, professional associations and networks, which may have skewed our sample or amplified any effects of self-selection bias. We met our goal of achieving at least nine external reviewers for only one (BRON) of the 21 conditions (S1 Table); attracting a lower critical mass of experts than expected may limit the face validity of our indicator sets when applied to the clinical setting (e.g. response bias, non-representative process measures, reduced endorsement and uptake in the wider community)[109, 110]. Tempering this, our internal review panel had extensive clinical and quality improvement experience in paediatric care in Australia, and most external reviewers had university-based affiliations in addition to their clinical roles, which may have assisted in refining the indicators in a manner underpinned by both scientific evidence and clinical experience. Importantly, paediatricians working in hospital settings dominated our expert panels; their review of clinical indicators geared towards capturing information about care provided in general practice may have lacked relevance. Patient and public involvement in guideline development aims to improve patient-centred health care provision, foster democratic healthcare policy-making, and enhance the quality of healthcare and related policy[111]; CTK indicators were developed without patient consultation, which is a limitation of our process.

Third, we did not formally evaluate the methodological rigor of our indicator development process with a validated quality appraisal tool, such as AIRE (Appraisal of Indicators through Research and Evaluation). While this paper reports on aspects related to the first three AIRE domains (purpose, relevance and organizational context; stakeholder involvement; scientific evidence)[112], further details about the fourth AIRE domain (additional evidence, formulation, usage) are available in the supplementary material of the results paper of the CareTrack Kids multistage stratified sample medical record review. Our development process did not involve reviewers meeting face to face. The use of online technologies was specifically chosen to enhance the transparency, accessibility and timeliness of the development process and to minimise “group-think”[69, 75]; however, it could be argued that an opportunity for reviewers to meet may have stimulated useful discussion on contentious issues[97, 113]. While we did encourage experts to make comments (which were included in de-identified format with the next iterations of indicators presented to reviewers in subsequent rounds) in both Stages 3a and 3b, information from the internal review could not be conveyed to external reviewers (due to project and wiki system constraints).

Based on our experience, and on emerging standards around new approaches for evidence development[68], we recommend that future clinical indicator developers look to further harness available technology such as wikis to help increase the rate at which consensus can be achieved and to optimise its transparency (i.e. the ability to capture discussion threads) and reach[11], and include patients within review panels to ensure their perspectives as key stakeholders are captured and considered[114, 115]. However, as an interim step, and to address the issues of feasibility of measurement and clinical utility of indicators, qualitative research seeking insights from those who develop CPGs, indicators, and medical record software and tools[11, 69–71], as well as from users (e.g. clinicians, healthcare organisations), could help to bridge the gaps between what we consider to be appropriate care, how it may be relevantly documented, and how it can be used to evaluate the quality of clinical practice.

Conclusion

Findings from the modified Delphi approach presented in this study address recommendations for methodological rigor and transparency of reporting[17], and provide an inventory of our experiences and learnings from developing clinical indicators of appropriate care for common paediatric medical conditions. In a critical next step, these clinical indicators will form the criteria against which the CTK study can, for the first time in Australia, measure the appropriateness of paediatric care in 2012 and 2013[64]. Our Delphi approach could be used by others to refine this suite of clinical indicators to local contexts to assist point-of-care decision-making, or to provide a starting point for undertaking similar analyses of healthcare practices for benchmarking purposes.

Supporting information

S1 Table. CareTrack Kids final clinical indicators and items developed to assess compliance for 21 paediatric conditions.

https://doi.org/10.1371/journal.pone.0209637.s001

(DOCX)

S2 Table. CareTrack Kids excluded original recommendations from included clinical practice guidelines.

https://doi.org/10.1371/journal.pone.0209637.s002

(DOCX)

S3 Table. CareTrack Kids list of excluded clinical indicators.

https://doi.org/10.1371/journal.pone.0209637.s003

(DOCX)

S4 Table. CareTrack Kids ‘new’ indicators created by splitting existing indicators, mapped to their corresponding final medical record indicator items.

https://doi.org/10.1371/journal.pone.0209637.s004

(DOCX)

S5 Table. Evolution of numbers of indicators over the development process, from original recommendations to the final indicators and medical record audit indicator items, per condition.

https://doi.org/10.1371/journal.pone.0209637.s005

(DOCX)

S6 Table. Characteristics of included medical record audit indicator items.

https://doi.org/10.1371/journal.pone.0209637.s006

(DOCX)

S7 Table. Mapping of the CareTrack Kids approach to practical guidance for using and reporting Delphi procedures.

https://doi.org/10.1371/journal.pone.0209637.s007

(DOCX)

Acknowledgments

We would like to acknowledge the paediatricians and general practitioners who participated as internal reviewers: Dr James Best, Dr Phillip Coote, Dr Bronwyn Gould, Dr Kerry Hancock, Dr Keith Howard, Dr Jon Jureidini, Associate Professor Susan Moloney, Dr David Moore, Dr Joanne Morris, Dr Vanessa Sarkozy, Dr Ruth Selig, Dr John Wakefield, Dr Michael Wright, Dr Helen Young. We extend our thanks to Ms Anita Deakin and Mrs Kaye Dolman who assisted in preparing the manuscript for publication.

References

  1. 1. Bernal-Delgado E, Christiansen T, Bloor K, Mateus C, Yazbeck AM, Munck J, et al. ECHO: health care performance assessment in several European health systems. The European Journal of Public Health. 2015;25(suppl 1):3–7.
  2. 2. Carinci F, Van Gool K, Mainz J, Veillard J, Pichora EC, Januel JM, et al. Towards actionable international comparisons of health system performance: expert revision of the OECD framework and quality indicators. International Journal for Quality in Health Care. 2015;27(2).
  3. 3. NPS MedicineWise Choosing Wisely. Choosing Wisely internationally. 2016. Available from: http://www.choosingwisely.org.au/about-choosing-wisely-australia/international-choosing-wisely-initiatives.
  4. 4. Lee ES, Vedanthan R, Jeemon P, Kamano JH, Kudesia P, Rajan V, et al. Quality improvement for cardiovascular disease care in low-and middle-income countries: a systematic review. PLOS ONE. 2016;11(6).
  5. 5. Pettigrew LM, De Maeseneer J, Anderson MIP, Essuman A, Kidd MR, Haines A. Primary health care and the Sustainable Development Goals. The Lancet. 2015;386(10009):2119–21.
  6. 6. Brownlee S, Chalkidou K, Doust J, Elshaug AG, Glasziou P, Heath I, et al. Evidence for overuse of medical services around the world. The Lancet. 2017. http://dx.doi.org/10.1016/S0140-6736(16)32585-5.
  7. 7. Glasziou P, Straus S, Brownlee S, Trevena L, Dans L, Guyatt G, et al. Evidence for underuse of effective medical services around the world. The Lancet. 2017.
  8. 8. Saini V, Garcia-Armesto S, Klemperer D, Paris V, Elshaug AG, Brownlee S, et al. Drivers of poor medical care. The Lancet. 2017.
  9. 9. Saini V, Brownlee S, Elshaug AG, Glasziou P, Heath I. Addressing overuse and underuse around the world. The Lancet. 2017.
  10. 10. Burstin H, Leatherman S, Goldmann D. The evolution of healthcare quality measurement in the United States. Journal of Internal Medicine. 2016;279(2):154–9. pmid:26785953
  11. 11. Runciman WB, Coiera EW, Day RO, Hannaford NA, Hibbert PD, Hunt TD, et al. Towards the delivery of appropriate health care in Australia. The Medical Journal of Australia. 2012;197(2):78–81. Epub 2012/07/17. pmid:22794043.
  12. 12. National Institute for Health and Clinical Excellence (NICE). Health and Social Care Directorate Indicators Process Guide 2014. Available from: https://www.nice.org.uk/media/default/Get-involved/Meetings-In-Public/indicator-advisory-committee/ioc-process-guide.pdf. Accessed 6th December 2016.
  13. 13. Australian Commission on Safety and Quality in Health Care. National Safety and Quality Health Service Standards 2017. Available from: https://www.safetyandquality.gov.au/our-work/assessment-to-the-nsqhs-standards/.
  14. 14. Australian Commission on Safety and Quality in Health Care. Australian Atlas of Healthcare Variation 2016. Available from: https://www.safetyandquality.gov.au/atlas/.
  15. 15. Bhatia RS, Levinson W, Shortt S, Pendrith C, Fric-Shamji E, Kallewaard M, et al. Measuring the effect of Choosing Wisely: an integrated framework to assess campaign impact on low-value care. BMJ Quality and Safety. 2015;24(8):523–31. pmid:26092165
  16. 16. Boulkedid R, Sibony O, Goffinet F, Fauconnier A, Branger B, Alberti C. Quality indicators for continuous monitoring to improve maternal and infant health in maternity departments: a modified Delphi survey of an international multidisciplinary panel. PLOS ONE. 2013;8(4).
  17. 17. Boulkedid R, Abdoul H, Loustau M, Sibony O, Alberti C. Using and reporting the Delphi method for selecting healthcare quality indicators: a systematic review. PLOS ONE. 2011;6(6):e20476. pmid:21694759
  18. 18. Killaspy H, White S, Wright C, Taylor TL, Turton P, Kallert T, et al. Quality of longer term mental health facilities in Europe: validation of the quality indicator for rehabilitative care against service users’ views. PLOS ONE. 2012;7(6).
  19. 19. Kötter T, Blozik E, Scherer M. Methods for the guideline-based development of quality indicators: a systematic review. Implementation Science. 2012;7(1).
  20. 20. Rhodes A, Moreno RP, Azoulay E, Capuzzo M, Chiche JD, Eddleston J, et al. Prospectively defined indicators to improve the safety and quality of care for critically ill patients: a report from the Task Force on Safety and Quality of the European Society of Intensive Care Medicine (ESICM). Intensive Care Medicine. 2012;38(4):598–605. pmid:22278594
  21. 21. De Roo ML, Miccinesi G, Onwuteaka-Philipsen BD, Van Den Noortgate N, Van den Block L, Bonacchi A, et al. Actual and preferred place of death of home-dwelling patients in four European countries: making sense of quality indicators. PLOS ONE. 2014;9(4).
  22. 22. 2017 NIfHaCE. Health and Social Care Directorate Indicator Process Guide 2017 [23rd May 2018]. Available from: https://www.nice.org.uk/media/default/Get-involved/Meetings-In-Public/indicator-advisory-committee/ioc-process-guide.pdf.
  23. 23. Mangione-Smith R, DeCristofaro AH, Setodji CM, Keesey J, Klein DJ, Adams JL, et al. The quality of ambulatory care delivered to children in the United States. The New England Journal of Medicine. 2007;357(15):1515–23. http://dx.doi.org/10.1056/NEJMsa064637. pmid:17928599.
  24. 24. Bethell CD, Kogan MD, Strickland BB, Schor EL RJ, Newachek PW. A national and state profile of leading health problems and health care quality for US children: key insurance disparities and across-state variations. Academic Pediatrics. 2011;11(2):S22–S33.
  25. 25. Bighelli I OG, Girlanda F, Cipriani A, Becker T, Koesters M, Barbui C.,. Implementation of treatment guidelines for specialist mental health care. Cochrane Database of Systematic Reviews. 2016;(12):Art. No.: CD009780. https://doi.org/10.1002/14651858.CD009780.pub3
  26. 26. Rotter T, Kinsman, L., James, E.L., Machotta, A., Gothe, H., Willis, J., Snow, P. and Kugler, J.,. Clinical pathways: effects on professional practice, patient outcomes, length of stay and hospital costs. The Cochrane Library. 2010;Issue 3.(3):Art. No.: CD006632. https://doi.org/10.1002/14651858.CD006632.pub2
  27. 27. Rutman L, Wright D.R., O'callaghan J., Spencer S., Lion K.C., Kronman M.P., Zhou C. and Mangione-Smith R.,. A Comprehensive Approach to Pediatric Pneumonia: Relationship Between Standardization, Antimicrobial Stewardship, Clinical Testing, and Cost. Journal for Healthcare Quality. 2017;39(4):e59–69. pmid:27811579
  28. 28. Fiander M MJ, Grad R, Pluye P, Hannes K, Labrecque M, Roberts NW, Salzwedel DM, Welch V, Tugwell P.,. Interventions to increase the use of electronic health information by healthcare practitioners to improve clinical practice and patient outcomes. Cochrane Database of Systematic Reviews. 2015;(3):Art. No.: CD004749. https://doi.org/10.1002/14651858.CD004749.pub3
  29. 29. Flodgren G HA, Goulding L, Eccles MP, Grimshaw JM, Leng GC, Shepperd S.,. Tools developed and disseminated by guideline producers to promote the uptake of their guidelines. Cochrane Database of Systematic Reviews. 2016;(8):Art. No.: CD010669. https://doi.org/10.1002/14651858.CD010669.pub2
  30. 30. Flodgren G CL, Mayhew A, Omar O, Pereira CR, Shepperd S.,. Interventions to improve professional adherence to guidelines for prevention of device-related infections. Cochrane Database of Systematic Reviews. 2013;(3):Art. No.: CD006559. https://doi.org/10.1002/14651858.CD006559.pub2
  31. 31. Giguère A LF, Grimshaw J, Turcotte S, Fiander M, Grudniewicz A, Makosso-Kallyth S, Wolf FM, Farmer AP, Gagnon MP.,. Printed educational materials: effects on professional practice and healthcare outcomes. Cochrane Database of Systematic Reviews. 2012;(10). https://doi.org/10.1002/14651858.CD004398.pub3
  32. 32. Lion KC, Wright D.R., Spencer S., Zhou C., Del Beccaro M. and Mangione-Smith R.,. Standardized clinical pathways for hospitalized children and outcomes. Pediatrics,. 2016. pmid:27002007
  33. 33. Akenroye AT, Baskin M.N., Samnaliev M. and Stack A.M.,. Impact of a bronchiolitis guideline on ED resource use and cost: a segmented time-series analysis. Pediatrics,. 2014;133(1):e227–e34. pmid:24324000
  34. 34. Dayal AaA, F.,. The effect of implementation of standardized, evidence-based order sets on efficiency and quality measures for pediatric respiratory illnesses in a community hospital. Hospital pediatrics,. 2015;5(12):624–9. pmid:26596964
  35. 35. Bryan MA, Desai A.D., Wilson L., Wright D.R. and Mangione-Smith R.,. Association of bronchiolitis clinical pathway adherence with length of stay and costs. Pediatrics,. 2017;139(3).
  36. 36. GmbH AIfAQIaRiHC. German hospital quality report Göttingen:: AQUA, 2014.
  37. 37. (ACHS) ACoHS. Australasian Clinical Indicator Report: 2008–2015. Sydney, Australia: Australian Council on Healthcare Standards (ACHS) 2016.
  38. 38. Doran T, Kontopantelis E., Valderas J.M., Campbell S., Roland M., Salisbury C. and Reeves D.,. Effect of financial incentives on incentivised and non-incentivised clinical activities: longitudinal analysis of data from the UK Quality and Outcomes Framework. British Medical Journal (BMJ). 2011;342.
  39. 39. Li Z, Wang C., Zhao X., Liu L., Wang C., Li H., Shen H., Liang L., Bettger J., Yang Q. and Wang D.,. Substantial progress yet significant opportunity for improvement in stroke care in China. Stroke. 2016;47(11):2843–9. pmid:27758941
  40. 40. SALAR Socialstyrelsen Swedish Association of Local Authorities and Regions. Quality and efficiency in Swedish health care. Regional comparisons. Stockholm: Swedish National Board of Health and Welfare, 2012.
  41. 41. Van der Wees PJ, Nijhuis-van der Sanden M.W., van Ginneken E., Ayanian J.Z., Schneider E.C. and Westert G.P.,. Governing healthcare through performance measurement in Massachusetts and the Netherlands. Health Policy. 2014;116(1):18–26. pmid:24138729
  42. 42. Campbell SM, Braspenning J., Hutchinson A. and Marshall M.,. Research methods used in developing and applying quality indicators in primary care. Qual Saf Health Care,. 2002;11(4):358–64. pmid:12468698
  43. 43. Mainz J. Defining and classifying clinical indicators for quality improvement. International Journal for Quality in Health Care. 2003;15(6):523–30. pmid:14660535
  44. 44. Mainz J. Developing evidence-based clinical indicators: a state of the art methods primer. International Journal for Quality in Health Care. 2003;15(supplement 1):i5–i11.
  45. 45. Choong MK, Tsafnat G., Hibbert P., Runciman W.B. and Coiera E.,. Linking clinical quality indicators to research evidence-a case study in asthma management for children. BMC Health Services Research. 2017;17(1).
  46. 46. Coiera E, Choong M.K., Tsafnat G., Hibbert P. and Runciman W.B.,. Linking quality indicators to clinical trials: an automated approach. International Journal for Quality in Health Care. 2017;29(4):571–8. pmid:28651340
  47. 47. McColl A RP, Gabbay J, et al.,. Performance indicators for primary care groups: an evidence-based approach. BMJ. 1998;317:1354–60. pmid:9812935
  48. 48. Hearnshaw HM, Harker R.M., Cheater F.M., Baker R.H. and Grimshaw G.M.,. Expert consensus on the desirable characteristics of review criteria for improvement of health quality,. Quality in Health Care,. 2001;10(3):173–8. pmid:11533425
  49. 49. McGlynn EA, Asch SM, Adams J, Keesey J, Hicks J, DeCristofaro A, et al. The quality of health care delivered to adults in the United States. New England Journal of Medicine. 2003;348 (26):2635–45. http://dx.doi.org/10.1056/NEJMsa022615. pmid:12826639.
  50. 50. Fitch K, Bernstein, S.J., Aguilar, M.S., Burnand, B., LaCalle, J.R. and Lazaro, P.,. The RAND/UCLA Appropriateness Method User’s Manual,. 2001.
  51. 51. Runciman WB, Hunt TD, Hannaford NA, Hibbert PD, Westbrook JI, Coiera EW, et al. CareTrack: assessing the appropriateness of health care delivery in Australia. The Medical Journal of Australia. 2012;197(2):100–5. Epub 2012/07/17. pmid:22794056.
52. Frijling BD, Spies TH, Lobo CM, Hulscher ME, van Drenth BB, Braspenning JC, et al. Blood pressure control in treated hypertensive patients: clinical performance of general practitioners. British Journal of General Practice. 2001;51(462):9–14. pmid:11271892
53. Campbell SM, Shield T, Rogers A, Gask L. How do stakeholder groups vary in a Delphi technique about primary mental health care and what factors influence their ratings? Quality and Safety in Health Care. 2004;13(6):428–34. pmid:15576704
54. Hutchings A, Raine R. A systematic review of factors affecting the judgement produced by formal consensus development methods in health care. Journal of Health Services Research and Policy. 2006;11(3):172–9. pmid:16824265
55. Marshall MN, Shekelle PG, McGlynn EA, Campbell SM, Brook RH, Roland MO. Can health care quality indicators be transferred between countries? Quality and Safety in Health Care. 2003;12(1):8–12. pmid:12571338
56. Reeves D, Campbell SM, Adams J, Shekelle P, Roland M. Comparison of composite measures of clinical quality in primary care. Medical Care. 2007;45:489–96. pmid:17515775
57. Blozik E, Nothacker M, Bunk T, Szecsenyi J, Ollenschläger G, Scherer M. Simultaneous development of guidelines and quality indicators: how do guideline groups act? A worldwide survey. International Journal of Health Care Quality Assurance. 2012;25(8):712–9. pmid:23276064
58. Uphoff EP, Wennekes L, Punt CJ, Grol RP, Wollersheim HC, Hermens RP, et al. Development of generic quality indicators for patient-centered cancer care by using a RAND modified Delphi method. Cancer Nursing. 2012;35(1):29–37. pmid:21558851
59. Petrosyan Y, Sahakyan Y, Barnsley JM, Kuluski K, Liu B, Wodchis WP. Quality indicators for care of depression in primary care settings: a systematic review. Systematic Reviews. 2017;6(1).
60. Haller G, Stoelwinder J, Myles PS, McNeil J. Quality and safety indicators in anesthesia: a systematic review. Anesthesiology. 2009;110(5):1158–75.
61. Smeulers M, Verweij L, Maaskant JM, de Boer M, Krediet CP, van Dijkum EJN, et al. Quality indicators for safe medication preparation and administration: a systematic review. PLoS ONE. 2015;10(4).
62. Joling KJ, Van Eenoo L, Vetrano DL, Smaardijk VR, Declercq A, Onder G, et al. Quality indicators for community care for older people: a systematic review. PLoS ONE. 2018;13(1).
63. Madsen MM, Eiset AH, Mackenhauer J, Odby A, Christiansen CF, Kurland L, et al. Selection of quality indicators for hospital-based emergency care in Denmark, informed by a modified-Delphi process. Scandinavian Journal of Trauma, Resuscitation and Emergency Medicine. 2016;24(1).
64. Hooper TD, Hibbert PD, Mealing N, Wiles LK, Jaffe A, White L, et al. CareTrack Kids—part 2. Assessing the appropriateness of the healthcare delivered to Australian children: study protocol for a retrospective medical record review. BMJ Open. 2015;5(4):e007749.
65. Wiles LK, Hooper TD, Hibbert PD, White L, Mealing N, Jaffe A, et al. CareTrack Kids—part 1. Assessing the appropriateness of healthcare delivered to Australian children: study protocol for clinical indicator development. BMJ Open. 2015;5(4):e007748. pmid:25854976
66. Archambault PM, van de Belt TH, Faber MJ, Plaisance A, Kuziemsky C, Gagnon MP, et al. Collaborative writing applications in healthcare: effects on professional practice and healthcare outcomes. Cochrane Database of Systematic Reviews. 2014;(11):CD011388. https://doi.org/10.1002/14651858.CD011388
67. Graham R, Mancher M, Wolman DM, Greenfield S, Steinberg E. Clinical practice guidelines we can trust. Washington, DC: National Academies Press; 2011.
68. Qaseem A, Forland F, Macbeth F, Ollenschläger G, Phillips S, van der Wees P. Guidelines International Network: toward international standards for clinical practice guidelines. Annals of Internal Medicine. 2012;156(7):525–31. pmid:22473437
69. Elwyn G, Wieringa S, Greenhalgh T. Clinical encounters in the post-guidelines era. BMJ. 2016;353:i3200. pmid:27352795
70. Epstein RM, Fiscella K, Lesser CS, Stange KC. Why the nation needs a policy push on patient-centered health care. Health Affairs. 2010;29(8):1489–95. pmid:20679652
71. Epstein RM, Street RL. The values and value of patient-centered care. The Annals of Family Medicine. 2011;9(2):100–3. pmid:21403134
72. The King's Fund. Experience-based co-design toolkit 2013 [cited 2016 13 July]. Available from: http://www.kingsfund.org.uk/projects/ebcd.
73. Hunt TD, Ramanathan SA, Hannaford NA, Hibbert PD, Braithwaite J, Coiera E, et al. CareTrack Australia: assessing the appropriateness of adult healthcare: protocol for a retrospective medical record review. BMJ Open. 2012;2:e000665. pmid:22262806; PubMed Central PMCID: PMC3263440.
74. Brulet A, Llorca G, Letrilliart L. Medical wikis dedicated to clinical practice: a systematic review. Journal of Medical Internet Research. 2015;17(2).
75. den Breejen EME, Nelen WLDM, Knijnenburg JML, Burgers JS, Hermens RPMG, Kremer JAM. Feasibility of a wiki as a participatory tool for patients in clinical guideline development. Journal of Medical Internet Research. 2012;14(5):e138. pmid:23103790
76. Elshaug AG, Rosenthal MB, Lavis JN, Brownlee S, Schmidt H, Nagpal S, et al. Levers for addressing medical underuse and overuse: achieving high-value health care. The Lancet. 2017.
77. National Health and Medical Research Council (NHMRC). Clinical Practice Guidelines portal 2017. Available from: https://www.nhmrc.gov.au/guidelines/search.
78. National Institute for Health and Care Excellence (NICE). Guidance and advice list 2017. Available from: https://www.nice.org.uk/guidance/published?type=cg.
79. Scottish Intercollegiate Guidelines Network (SIGN). Guidelines by topic 2016. Available from: http://www.sign.ac.uk/guidelines/published/.
80. Agency for Healthcare Research and Quality (AHRQ). Clinical Guidelines and Recommendations 2017. Available from: https://www.ahrq.gov/professionals/clinicians-providers/guidelines-recommendations/index.html.
81. New South Wales Clinical Practice Guidelines for Paediatrics 2013 [cited 2013 1 April]. Available from: http://www0.health.nsw.gov.au/publichealth/clinicalpolicy/paediatric.asp.
82. Agency for Clinical Innovation (ACI). Paediatric Guidelines and Algorithms. 2017.
83. National Institute for Health and Care Excellence (NICE). Feverish illness in children: assessment and initial management in children younger than 5 years 2013. Available from: https://www.nice.org.uk/guidance/cg160.
84. National Institute for Health and Care Excellence (NICE). Autism 2014. Available from: https://www.nice.org.uk/guidance/qs51/resources/autism-2098722137029.
85. National Institute for Health and Care Excellence (NICE). Obesity: identification, assessment and management 2014. Available from: https://www.nice.org.uk/guidance/cg189/resources/obesity-identification-assessment-and-management-35109821097925.
86. British Thoracic Society, Scottish Intercollegiate Guidelines Network. British guideline on the management of asthma 2016. Available from: https://www.brit-thoracic.org.uk/standards-of-care/guidelines/btssign-british-guideline-on-the-management-of-asthma/.
87. Geist MR. Using the Delphi method to engage stakeholders: a comparison of two studies. Evaluation and Program Planning. 2010;33:147–54. pmid:19581002
88. Hasson F, Keeney S. Enhancing rigour in the Delphi technique research. Technological Forecasting and Social Change. 2011;78(9):1695–704.
89. Franklin K, Hart JK. Idea generation and exploration: benefits and limitations of the policy Delphi research method. Innovative Higher Education. 2007;31:237–46.
90. Raine R, Sanderson C, Black N. Developing clinical guidelines: a challenge to current methods. BMJ. 2005;331:631–3. pmid:16166137
91. Williams MJ, Kevat DA, Loff B. Conflict of interest guidelines for clinical guidelines. The Medical Journal of Australia. 2011;195(8):442–5. pmid:22004385
92. Eccles MP, Grimshaw JM, Shekelle P, Schünemann HJ, Woolf S. Developing clinical practice guidelines: target audiences, identifying topics for guidelines, guideline group composition and functioning and conflicts of interest. Implementation Science. 2012;7(1):1.
93. Dunn AG, Coiera E, Mandl KD, Bourgeois FT. Conflict of interest disclosure in biomedical research: a review of current practices, biases, and the role of public registries in improving transparency. Research Integrity and Peer Review. 2016;1(1):1.
94. National Health and Medical Research Council (NHMRC). NHMRC Guideline development and conflict of interest: identifying and managing conflicts of interest of prospective members and members of NHMRC committees and working groups developing guidelines 2012 [cited 2016 13 July]. Available from: http://www.nhmrc.gov.au/_files_nhmrc/file/guidelines/developers/nh155_coi_policy_120710.pdf.
95. Hermann RC, Mattke S, Somekh D, Silfverhielm H, Goldner E, Glover G, et al. Quality indicators for international benchmarking of mental health care. International Journal for Quality in Health Care. 2006;18(Suppl. 1):31–8. http://dx.doi.org/10.1093/intqhc/mzl025.
96. Stang A, Straus S, Crotts J, Johnson D, Guttmann A. Quality indicators for high acuity pediatric conditions. Pediatrics. 2013;132:752–62.
97. Ntoburi S, Hutchings A, Sanderson C, Carpenter J, Weber M, English M. Development of paediatric quality of inpatient care indicators for low-income countries: a Delphi study. BMC Pediatrics. 2010;10(1).
98. Chen AY, Schrager SM, Mangione-Smith R. Quality measures for primary care of complex pediatric patients. Pediatrics. 2012;129(3):433–45. pmid:22331338
99. Gill PJ, O’Neill B, Rose P, Mant D, Harnden A. Primary care quality indicators for children: measuring quality in UK general practice. British Journal of General Practice. 2014;64(629):e752–e7. pmid:25452539
100. Baysari MT, Adams K, Lehnbom EC, Westbrook JI, Day RO. iPad use at the bedside: a tool for engaging patients in care processes during ward rounds? Internal Medicine Journal. 2014;44(10):986–90. pmid:24989476
101. Bell SK, Mejilla R, Anselmo M, Darer JD, Elmore JG, Leveille S, et al. When doctors share visit notes with patients: a study of patient and doctor perceptions of documentation errors, safety opportunities and the patient–doctor relationship. BMJ Quality and Safety. 2017;26(4):262–70. pmid:27193032
102. Braithwaite J, Hibbert PD, Jaffe A, White L, Cowell CT, Harris MF, et al. Quality of health care for children in Australia, 2012–2013. JAMA. 2018;319(11):1113–24.
103. Hibbert P, Stephens JH, de Wet C, Williams H, Hallahan A, Wheaton GR, et al. Assessing the quality of the management of tonsillitis among Australian children: a population-based sample survey. Otolaryngology–Head and Neck Surgery. 2018. Epub ahead of print.
104. Hibbert PD, Hannaford NA, Hooper TD, Hindmarsh DM, Braithwaite J, Ramanathan SA, et al. Assessing the appropriateness of prevention and management of venous thromboembolism in Australia: a cross-sectional study. BMJ Open. 2016;6(3):e008618. pmid:26962033
105. Hooper TD, Hibbert PD, Hannaford NA, Jackson N, Hindmarsh DM, Gordon DL, et al. Surgical site infection: a population-based study in Australian adults measuring the compliance with and correct timing of appropriate antibiotic prophylaxis. Anaesthesia and Intensive Care. 2015;43(4):461. pmid:26099757
106. Ramanathan SA, Hibbert PD, Maher CG, Day RO, Hindmarsh DM, Hooper TD, et al. CareTrack: toward appropriate care for low back pain. Spine. 2017;42(13):E802–E9. pmid:27831965
107. Rosenbloom ST, Denny JC, Xu H, Lorenzi N, Stead WW, Johnson KB. Data from clinical notes: a perspective on the tension between structure and flexible documentation. Journal of the American Medical Informatics Association. 2011;18(2):181–6. pmid:21233086
108. Balshem H, Helfand M, Schünemann HJ, Oxman AD, Kunz R, Brozek J, et al. GRADE guidelines: 3. Rating the quality of evidence. Journal of Clinical Epidemiology. 2011;64(4):401–6. pmid:21208779
109. Hall DA, Smith H, Heffernan E, Fackrell K. Recruiting and retaining participants in e-Delphi surveys for core outcome set development: evaluating the COMiT'ID study. PLoS ONE. 2018;13(7):e0201378. pmid:30059560
110. Williamson PR, Altman DG, Bagley H, Barnes KL, Blazeby JM, Brookes ST, et al. The COMET Handbook: version 1.0. Trials. 2017;18(Suppl 3):280. pmid:28681707
111. Guidelines International Network. G-I-N PUBLIC Toolkit: Patient and Public Involvement in Guidelines 2015. Available from: https://www.g-i-n.net/document-store/working-groups-documents/g-i-n-public/toolkit/toolkit-2015.
112. de Koning J, Smulders A, Klazinga NS. The Appraisal of Indicators through Research and Evaluation (AIRE) instrument. Amsterdam, The Netherlands: Academic Medical Center; 2006.
113. Raine R, Sanderson C, Hutchings A, Carter S, Larkin K, Black N. An experimental study of determinants of group judgments in clinical guideline development. The Lancet. 2004;364(9432):429–37.
114. Boivin A, Currie K, Fervers B, Gracia J, James M, Marshall C, et al. Patient and public involvement in clinical guidelines: international experiences and future perspectives. Quality and Safety in Health Care. 2010;19(5):1–4.
115. Ambresin AE, Bennett K, Patton GC, Sanci LA, Sawyer SM. Assessment of youth-friendly health care: a systematic review of indicators drawn from young people's perspectives. Journal of Adolescent Health. 2013;52(6):670–81. pmid:23701887