
Evaluating the Quality of Colorectal Cancer Care across the Interface of Healthcare Sectors

  • Sabine Ludt ,

    sabine.ludt@med.uni-heidelberg.de

    Affiliation Department of General Practice and Health Services Research, University of Heidelberg Hospital, Heidelberg, Germany

  • Elisabeth Urban,

    Affiliation Department of General Practice and Health Services Research, University of Heidelberg Hospital, Heidelberg, Germany

  • Jörg Eckardt,

    Affiliation AQUA-Institute for Applied Quality Improvement and Research in Healthcare, Goettingen, Germany

  • Stefanie Wache,

    Affiliation AQUA-Institute for Applied Quality Improvement and Research in Healthcare, Goettingen, Germany

  • Björn Broge,

    Affiliation AQUA-Institute for Applied Quality Improvement and Research in Healthcare, Goettingen, Germany

  • Petra Kaufmann-Kolle,

    Affiliation AQUA-Institute for Applied Quality Improvement and Research in Healthcare, Goettingen, Germany

  • Günther Heller,

    Affiliation AQUA-Institute for Applied Quality Improvement and Research in Healthcare, Goettingen, Germany

  • Antje Miksch,

    Affiliation Department of General Practice and Health Services Research, University of Heidelberg Hospital, Heidelberg, Germany

  • Katharina Glassen,

    Affiliation Department of General Practice and Health Services Research, University of Heidelberg Hospital, Heidelberg, Germany

  • Katja Hermann,

    Affiliation Department of General Practice and Health Services Research, University of Heidelberg Hospital, Heidelberg, Germany

  • Regine Bölter,

    Affiliation Department of General Practice and Health Services Research, University of Heidelberg Hospital, Heidelberg, Germany

  • Dominik Ose,

    Affiliation Department of General Practice and Health Services Research, University of Heidelberg Hospital, Heidelberg, Germany

  • Stephen M. Campbell,

    Affiliations Department of General Practice and Health Services Research, University of Heidelberg Hospital, Heidelberg, Germany, Institute for Population Health – Centre for Primary Care, University of Manchester, Manchester, United Kingdom

  • Michel Wensing,

    Affiliations Department of General Practice and Health Services Research, University of Heidelberg Hospital, Heidelberg, Germany, Scientific Institute for Quality of Healthcare, Radboud University Nijmegen Medical Centre, Nijmegen, The Netherlands

  • Joachim Szecsenyi

    Affiliations Department of General Practice and Health Services Research, University of Heidelberg Hospital, Heidelberg, Germany, AQUA-Institute for Applied Quality Improvement and Research in Healthcare, Goettingen, Germany

Abstract

Background

Colorectal cancer (CRC) has a high prevalence in Western countries. Diagnosis and treatment of CRC are complex and require multidisciplinary collaboration across the interfaces of healthcare sectors. In Germany, a newly established nationwide program aims to provide information on the quality of healthcare delivery across different sectors. Within this context, this study describes the development of a set of quality indicators charting the whole pathway of CRC care, including the data specifications necessary to operationalize these indicators before practice testing.

Methods

Indicators were developed following a systematic ten-step modified ‘RAND/UCLA Appropriateness Method’ involving a multidisciplinary panel of thirteen participants. For each indicator in the final set, data specifications relating to sources of quality information, data collection procedures, analysis and feedback were described.

Results

The final indicator set included 52 indicators covering diagnostic procedures (11 indicators), therapeutic management (28 indicators) and follow-up (6 indicators). In addition, 7 indicators represented patient perspectives. Primary surgical tumor resection and pre-operative radiation (rectal carcinoma only) were perceived as the most useful tracer procedures for initiating quality data collection. To assess the quality of CRC care across sectors, various data sources were identified: medical records, administrative inpatient and outpatient data, sickness-fund billing data and patient surveys.

Conclusion

In Germany, a set of 52 quality indicators covering the relevant aspects across the interfaces and pathways of CRC care has been developed. Combining different healthcare sectors and data sources in quality assessment is an innovative and challenging approach, but it better reflects the reality of the patient pathway and the experience of CRC care.

Introduction

Colorectal cancer (CRC) is the most common cancer and the second leading cause of cancer-related death in Europe [1]. After lung and breast cancer, it is the third most common cancer worldwide [2]. In Germany, approximately 70,000 new cases and 30,000 deaths related to CRC occur annually among men and women [3].

The pathway of care for patients with CRC is complex, involving multiple interfaces and multidisciplinary healthcare providers in inpatient and outpatient settings for diagnostic procedures, therapy decision-making, multimodal treatment and surveillance. Besides the expertise of healthcare providers, coordination, communication and a good infrastructure for surveillance and follow-up are necessary to provide good quality throughout the entire pathway of care.

Transitions between hospital and ambulatory care are the most vulnerable parts of delivering high-quality and safe care, especially in fragmented healthcare structures such as those established in Germany and the United States [4], [5].

The quality of colorectal cancer care is an important clinical and political issue worldwide [5], [6]. As quality measurement of processes and outcomes plays an important role in many strategies to improve healthcare, much effort has gone into developing and applying quality indicators over recent decades [7]. Quality indicators are defined as measurable elements of practice performance for which there is evidence or consensus that they can be used to assess and change the quality of care provided [7]. It is important that quality indicators meet a range of requirements, such as relevance, validity, reliability and feasibility, relating to their implementation in routine care [8].
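Most process indicators of this kind are reported as a simple proportion: the denominator counts the patients eligible for a recommended element of care (after exclusions), and the numerator counts those who actually received it. The Python sketch below illustrates this arithmetic with hypothetical patient records; the field names and the choice of example (pre-therapeutic biopsy) are illustrative assumptions, not part of any indicator specification discussed later.

```python
# Minimal sketch: computing a process quality indicator as a proportion.
# Field names and the example indicator are illustrative assumptions.

patients = [
    {"id": "P1", "crc_confirmed": True, "biopsy_before_therapy": True,  "emergency_resection": False},
    {"id": "P2", "crc_confirmed": True, "biopsy_before_therapy": False, "emergency_resection": False},
    {"id": "P3", "crc_confirmed": True, "biopsy_before_therapy": False, "emergency_resection": True},   # excluded
    {"id": "P4", "crc_confirmed": True, "biopsy_before_therapy": True,  "emergency_resection": False},
]

def indicator_rate(records):
    """Proportion of eligible CRC patients with a pre-therapeutic biopsy."""
    # Denominator: eligible patients after applying exclusion criteria.
    denominator = [r for r in records if r["crc_confirmed"] and not r["emergency_resection"]]
    # Numerator: those in the denominator who received the recommended care element.
    numerator = [r for r in denominator if r["biopsy_before_therapy"]]
    return len(numerator), len(denominator), len(numerator) / len(denominator)

num, den, rate = indicator_rate(patients)
print(f"Indicator: {num}/{den} = {rate:.1%}")  # -> Indicator: 2/3 = 66.7%
```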

Whereas the development of quality indicators for colorectal cancer care has been reported in many countries [9]–[13], the operationalization of these indicators, including the specification of data sources, data collection methods, analysis and practice testing, has rarely been described [14]. Previously developed quality indicators and quality improvement initiatives have focused mainly on surgical treatment, reflecting the importance of primary tumor resection as a curative approach within multimodal therapy regimens [6]. However, quality assessment from a comprehensive disease perspective, measuring the quality of CRC care from patient presentation through postoperative surveillance and follow-up across the entire pathway, has not yet been described [5], [13], [15].

Patient-centered care is an integral part of evaluating healthcare [16], particularly in cancer care [5], [17]. Previous literature shows that professionals' opinions about high-quality care may deviate from patients' perspectives, so it is necessary to involve patients in indicator development [18]. However, many sets of quality indicators do not include measures of patient-centeredness or patient experience.

A wide variety of methodological approaches for developing quality indicators has been reported; however, patient representatives are rarely included, and practice testing of indicators is not always part of the development process [19]. It is also crucial to test sets of indicators using a testing protocol [20], [21].

Therefore, the aim of this study was twofold: first, to develop a comprehensive set of cross-sectoral quality indicators along the whole pathway of colorectal cancer care, involving patient representatives; second, to describe important steps towards practice testing of these indicators, such as the specification of data management, analysis and feedback procedures.

Methods

Setting

Germany has a population of about 82 million inhabitants. About 90 percent of the population is covered by statutory health insurance (generally under compulsory insurance cover), while private insurance, to which only civil servants, the self-employed and high-earning employees have access, covers about 10 percent of the population. The costs of statutory health insurance are split roughly 50:50 between employers and employees, with the government paying for the coverage of welfare recipients [22].

Health spending accounted for 11.6% of the gross domestic product (GDP) in Germany in 2010, more than two percentage points higher than the Organization for Economic Co-operation and Development (OECD) average of 9.5%. Still, health spending as a share of GDP remains much lower in Germany than in the United States (which spent 17.6% of its GDP on health in 2010) [23].

The German healthcare system is characterized by fragmented care structures with rigid financial barriers between ambulatory and hospital care. The almost 250 health insurance funds and their umbrella organizations regulate the system. In the ambulatory sector, fund members are free to choose their doctor and can consult a specialist directly [22].

The Federal Joint Committee (G-BA) regulates the healthcare system independently under the supervision of the Ministry of Health. In 2009, the G-BA established a comprehensive program for quality improvement across healthcare sectors in Germany (‘Sektorenübergreifende Qualitätssicherung im Gesundheitswesen’ or ‘SQG’) and commissioned an independent institution, the Institute for Applied Quality Improvement and Research in Health Care (AQUA-Institute) [24], to develop national cross-sectoral quality measures, data collection procedures and analytic procedures for feeding back measurement results to healthcare providers to stimulate quality improvement [25]. As the SQG program is obligatory and nationwide, healthcare providers in both sectors are required to record and transfer quality information.

Development process

The study was carried out between January 2010 and December 2011. The AQUA-Institute carried out this task in collaboration with the Department of General Practice and Health Services Research at the University Hospital Heidelberg. A ten-step [25] modified ‘RAND/UCLA Appropriateness Method’ [26] was applied to develop the quality measures. This procedure included a scoping workshop with experts, a structured literature search to identify quality indicators, two rounds of panel ratings, the design of measure specifications and the delivery of a final report to be approved by the G-BA (Table 1).

Table 1. Phases and steps in the development and testing of indicators for colorectal cancer care.

https://doi.org/10.1371/journal.pone.0060947.t001

Scoping workshop

Members of medical societies and interest groups involved in the CRC care process were openly invited to a scoping workshop by post and via a website announcement. Fifty-five experts from various clinical professions, such as surgery, gastroenterology, oncology, pathology, family medicine, human genetics, epidemiology and nursing, as well as patient representatives, participated in the meeting. Representatives of the federal association of sickness funds and other stakeholders of the German healthcare system reported on quality improvement initiatives. The aim of the workshop was to collect and synthesize the knowledge of experts across the CRC healthcare interfaces.

Structured search for indicators

The search consisted of three steps: 1) a preliminary search to obtain an overview of current colorectal cancer care and the situation in Germany (Table S2), 2) the main systematic literature search to identify internationally applied quality indicators (Tables S3, S4, S5, S6) and 3) a search of international agencies and indicator databases to identify quality indicators concerning colorectal cancer care (Table S7).

In the preliminary search, the Cochrane Database of Systematic Reviews, guideline databases, MEDLINE and oncology journals were searched for guidelines and systematic reviews on CRC, and a final search model for a systematic search in MEDLINE was developed. We identified 28 papers from the ‘Cochrane Colorectal Cancer Group’ and 45 international guidelines on CRC, including one German evidence-based guideline [3], one Health Technology Assessment (HTA) report and 25 additional papers that highlighted the German perspective.

The structured search was carried out from February to March 2010. We searched MEDLINE® (from 1998 to March 2010) systematically using a predefined search strategy (Table S3) and identified 4,942 potentially relevant abstracts (Table S4). Additionally, 41 relevant publications were found by hand searching peer-reviewed oncology journals. Paired reviewers (researchers including physicians and methodologists) screened the abstracts independently using predefined inclusion and exclusion criteria (Table S5) and ordered the full text when either reviewer selected it for inclusion. The full texts were abstracted for quality indicators using self-developed and piloted abstraction forms. Finally, 99 publications (Table S6) met the inclusion criteria, from which 289 quality indicators for the diagnosis and treatment of colorectal cancer were extracted (Figure 1).

A structured search of indicator agencies worldwide (73 previously identified agencies were searched for indicators) identified 419 quality indicators (Table S7) across various dimensions [27]. Indicators were also extracted from national grey literature, such as professional-society documents (German Cancer Society) or government reports. After removing duplicates, 52 quality indicators remained (Figure 2).

Figure 2. Number of quality indicators identified by the systematic search (allocated to the OECD quality model dimensions) [24].

https://doi.org/10.1371/journal.pone.0060947.g002

Preparing candidate indicators for expert panel ratings

The resulting 341 quality indicators, comprising 289 indicators from the systematic literature search and 52 indicators from indicator agencies, were translated into German and allocated to the clinical dimensions ‘diagnosis’, ‘therapy’, ‘management/coordination of care’, ‘patient perspective’ and ‘outcome’ as appropriate. The indicators were transferred to 210 self-developed standardized templates providing the original indicator wordings (mostly English) and German translations. Indicators that differed from each other only slightly in wording were subsumed into a single indicator template providing the various original wordings. Additionally, the templates included categories for a short description of the indicator, the definition of numerator and denominator, inclusion and exclusion criteria, indicator-level targets and the type of the quality indicator (structure, process or outcome). Sources of the indicators and evidence from the literature or guidelines were also provided. Finally, 210 templates summarized the 341 quality indicators in the clinical dimensions diagnosis (22), therapy (104), management and coordination of care (47), patient perspective (31) and outcome (6).

Expert panel ratings

The expert panel ratings were carried out between June and September 2010. All medical societies involved in the diagnosis and treatment of CRC were asked to invite their members to apply for the panel. Furthermore, an invitation was announced at the scoping workshop in Heidelberg and also published openly on the internet [24]. From 77 applications, 14 experts were selected according to predefined criteria aimed at including the most relevant disciplines in the pathway of CRC care. Where applicants were equally qualified, the panel candidate was drawn by lot. Finally, the following multidisciplinary experts from inpatient and outpatient care were chosen for the panel: one family physician, one gastroenterologist, two clinical oncologists, one psychotherapist/psycho-oncologist, three visceral surgeons, one pathologist, one representative of a regional consortium/working group for quality assurance, one expert in quality assurance in oncology, and one representative of a regional cancer registry. Additionally, two patient representatives nominated by the G-BA completed the panel. As no radiation oncologist applied for the panel, a radiation oncologist was nominated by the German Society of Radiation Oncology (DEGRO) to advise the panel. All panel members had to declare conflicts of interest in writing.

The panel rating was performed in two rounds, each consisting of a postal rating and a face-to-face panel meeting. The votes of all panel members were weighted equally.

In round one, panelists rated content validity in terms of relevance on a 9-point integer scale ranging from one (not at all relevant) to nine (very relevant). In the postal ratings, the quality indicators were rated by each panel expert at home or in the office and returned anonymously by envelope. The indicator templates offered the opportunity to comment on and suggest adaptations to the indicators if necessary. At the two-day panel meeting, the results of the postal ratings from round one were presented and discussed. If necessary, the quality indicators were modified to align them with the recommendations of the German evidence-based guideline [3] or with the German healthcare system. After discussion, each quality indicator was re-rated.

In round two, the same procedures were applied to rate the feasibility of the indicators.

Analyses of the ratings were based on the ‘RAND/UCLA Appropriateness Method’ [26]. For each quality indicator, the overall panel median score and the level of agreement within the panel were calculated. A median score of 7–9 with agreement of more than 75% was defined as “agreement”, and the quality indicator was classified as valid. A median score of 1–3 with agreement of more than 75% was defined as not valid. In the feasibility rating, a quality indicator with a median score of more than 4 was defined as feasible. All statistical analyses were performed with SPSS Statistics/PASW (Predictive Analytics SoftWare) version 18.
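As a concrete illustration, the classification rule described above can be expressed in a few lines of Python. This is a minimal sketch under the stated thresholds (median 7–9 or 1–3, agreement above 75%, feasibility median above 4); the function names are ours, and the operationalization of "agreement" as the share of ratings falling in the same tertile as the median is an assumption, since the exact computation is not spelled out here.

```python
from statistics import median

def classify_relevance(ratings):
    """Classify one indicator's relevance ratings (integers 1-9) from the panel.

    Assumption: 'agreement' is taken as the share of ratings in the same
    tertile (1-3, 4-6, 7-9) as the panel median; the exact operationalization
    is not specified in the text.
    """
    med = median(ratings)
    if med >= 7:
        tertile = range(7, 10)
    elif med <= 3:
        tertile = range(1, 4)
    else:
        tertile = range(4, 7)
    agreement = sum(r in tertile for r in ratings) / len(ratings)

    if med >= 7 and agreement > 0.75:
        return "valid"
    if med <= 3 and agreement > 0.75:
        return "not valid"
    return "uncertain / discuss in panel meeting"

def classify_feasibility(ratings):
    """An indicator with a median feasibility score above 4 was defined as feasible."""
    return "feasible" if median(ratings) > 4 else "not feasible"

# Example with hypothetical ratings from a 14-member panel:
print(classify_relevance([8, 9, 7, 8, 9, 8, 7, 9, 8, 7, 8, 9, 6, 8]))    # -> valid
print(classify_feasibility([5, 6, 4, 7, 5, 6, 5, 4, 6, 5, 7, 6, 5, 5]))  # -> feasible
```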

Designing of measure specifications

For each indicator, data sources and required data fields were specified, including trigger criteria to identify patients for the quality assurance program, data fields needed to construct the indicator and data fields required for risk adjustment where appropriate. Trigger criteria were derived from International Classification of Diseases (ICD-10-GM) codes, which were available in both the ambulatory and the hospital sector, and from process codes of the hospital (OPS codes) and ambulatory (fee schedule items) reimbursement systems. Furthermore, data fields were described that had to be recorded additionally for quality assurance purposes. Specifications for data flow and analyses were provided. To construct indicators requiring data from both the ambulatory and the hospital sector, it had to be ensured that all data would come together in a separate trusted center, which would assign this information to a particular patient and forward the data anonymously to the AQUA-Institute for analysis. To collect data for the indicators on patients' perspectives, procedures for the random selection of patients and the implementation of patient surveys during the course of therapy were described. Peer-review auditing was described as an innovative data collection concept for indicators reflecting the quality of reports that summarize important clinical findings, such as the colonoscopy report or the surgery report. The data specifications and data flow models were discussed with the G-BA and revised before being included in the final report.
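The cross-sectoral data flow described here, with hospital and ambulatory records being brought together in a trusted center under a patient pseudonym and then forwarded without identifying data, can be sketched roughly as follows. This is a simplified illustration of the general idea only; the hashing scheme, field names and record layout are our assumptions, not the SQG specification, and the codes shown are merely plausible examples.

```python
import hashlib
from collections import defaultdict

def pseudonym(insurance_number: str, salt: str = "trusted-center-secret") -> str:
    """Illustrative pseudonymization: a salted hash replaces the identifying number.
    The real trusted-center procedure is not specified here; this only shows the principle."""
    return hashlib.sha256((salt + insurance_number).encode()).hexdigest()[:16]

# Records arriving from the two sectors, each still carrying an identifier (illustrative).
hospital_records = [
    {"insurance_number": "A123", "sector": "hospital", "ops_code": "5-455.4", "icd": "C18.7"},
]
ambulatory_records = [
    {"insurance_number": "A123", "sector": "ambulatory", "fee_item": "13421", "icd": "C18.7"},
]

def link_at_trusted_center(*record_sets):
    """Group records from all sectors under a pseudonym and strip identifiers
    before forwarding them for analysis."""
    linked = defaultdict(list)
    for records in record_sets:
        for rec in records:
            pid = pseudonym(rec["insurance_number"])
            anonymous = {k: v for k, v in rec.items() if k != "insurance_number"}
            linked[pid].append(anonymous)
    return dict(linked)

forwarded = link_at_trusted_center(hospital_records, ambulatory_records)
for pid, records in forwarded.items():
    print(pid, records)
```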

Results

Final set of quality indicators

The final set included 52 quality indicators (Table 2 and Table S1) representing significant inpatient and outpatient procedures along the entire pathway of CRC care. The indicators covered pre-therapeutic diagnostic procedures (11 indicators), therapeutic procedures (28 indicators), surveillance (2 indicators) and outcomes (4 indicators). Furthermore, 7 indicators related to patient-specific issues (Table 2).

Table 2. Quality indicators of CRC care included in the final set (for detailed description see Table S1).

https://doi.org/10.1371/journal.pone.0060947.t002

Diagnostic procedures

In line with the German evidence-based guideline [3], all relevant diagnostic procedures, such as colonoscopy, biopsy and imaging techniques, were covered by the indicators. Most of these were process indicators, measuring whether a diagnostic procedure had been performed, such as a pre-therapeutic tumor biopsy (indicator 7). Additionally, the diagnostic indicator set included technical measures pertaining to specific details of a procedure, such as the availability of standardized colonoscopy reports, which indicate not only that a colonoscopy was performed but also the quality of its performance and reporting (indicator 5). The availability of multidisciplinary tumor boards for therapy decision-making, as an example of a cross-sectoral indicator, was also included (indicators 1 and 2).

Therapeutic indicators

Therapeutic procedures comprised surgical procedures, pre- and post-therapeutic management and the application of radiotherapy and chemotherapy. Most indicators concerned surgical processes, reflecting the importance of colorectal resection as a curative approach. Besides process measures, technical measures, such as the delivery and quality of a total mesorectal excision (TME) in patients with rectal cancer (RC), were also included (indicators 22 and 32). The assessment of pre- and postoperative functional bowel status (indicators 12 and 35) reflected patient-relevant issues. The quality of staging procedures was described by two indicators, including the quality of the pathology report according to the standards of the German Pathology Society. Radiotherapy was represented by two indicators (indicators 16 and 17), one of which described a technical measure providing information on the quality of radiotherapy performance (indicator 17). Three general indicators related to chemotherapy (indicators 37, 38 and 39). No technical measure for chemotherapy was identified.

Follow-up indicators related to surveillance colonoscopy (indicator 42) and other diagnostic procedures recommended for the early detection of disease recurrence (indicator 43). Outcome indicators related to mortality rates (indicators 50 and 52), disease recurrence in RC (indicator 51) and quality of life as a patient-related outcome (indicator 53).

Across these processes, 7 indicators representing patients' perspectives completed the set; these indicators related to patient information, shared decision-making, support, pain management and follow-up management.

Excluded indicators

Indicators were excluded during the panel rounds for several reasons. Some were rated as not valid because they were deemed insufficiently specific for CRC, such as indicators addressing colorectal surgery in general. Others were seen as duplicates of included indicators and therefore redundant. Furthermore, indicators were excluded if their measurement was assumed to be very resource intensive, such as the proportion of RC patients in appropriate UICC stages receiving adjuvant chemotherapy without having received neo-adjuvant radio(chemo)therapy before cancer resection. Some indicators reflecting patients' perspectives were considered difficult to assess and to feed back to providers, as the indicator results could not be unambiguously attributed to a specific healthcare provider.

Indicator 34, concerning pain management, was excluded by the AQUA-Institute after the panel rounds, as this issue was already addressed in a generic part of the patient survey used in all SQG programs [24].

Operationalization of indicators

To identify eligible patients for inclusion in the CRC quality assurance program, two tracer events were defined: 1) primary tumor resection delivered in hospital and 2) neo-adjuvant chemotherapy, which can be provided either in the ambulatory setting or in hospital. These tracer procedures were used to index eligible patients for follow-up. Diagnostic procedures and findings prior to these tracer events had to be recorded retrospectively from the medical charts.
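In practice, indexing patients by tracer events amounts to scanning the coded procedure data for the defined trigger codes and recording the first qualifying event per patient. The sketch below illustrates this idea in Python; the specific OPS and fee-schedule codes shown are placeholders, not the codes defined in the SQG specification.

```python
from datetime import date

# Placeholder trigger codes standing in for the real tracer definitions.
RESECTION_OPS_CODES = {"5-455", "5-484"}   # hospital procedure codes (illustrative)
NEOADJUVANT_FEE_ITEMS = {"86510"}          # ambulatory fee schedule items (illustrative)

procedures = [
    {"patient": "P1", "date": date(2011, 3, 2),  "code": "5-455", "source": "hospital"},
    {"patient": "P2", "date": date(2011, 4, 10), "code": "86510", "source": "ambulatory"},
    {"patient": "P1", "date": date(2011, 5, 1),  "code": "86510", "source": "ambulatory"},
]

def index_patients(records):
    """Return the first tracer event per patient, which triggers inclusion
    in the quality assurance program and anchors later follow-up."""
    tracer = {}
    for rec in sorted(records, key=lambda r: r["date"]):
        is_tracer = (rec["source"] == "hospital" and rec["code"] in RESECTION_OPS_CODES) or \
                    (rec["source"] == "ambulatory" and rec["code"] in NEOADJUVANT_FEE_ITEMS)
        if is_tracer and rec["patient"] not in tracer:
            tracer[rec["patient"]] = rec
    return tracer

print(index_patients(procedures))
```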

We identified four sources of quality information to be used in combination to construct the quality indicators: first, patient charts, requiring additional documentation of clinical patient information such as co-morbidities; second, administrative and reimbursement inpatient data (ICD codes and OPS codes) and outpatient data (ICD codes and fee schedule items) to collect information on diagnoses and procedures; third, administrative data from sickness funds, mainly to collect information on vital status; and fourth, patient survey data to assess patients' perspectives. Data abstraction protocols were developed. Furthermore, a method for conducting patient surveys, including self-developed questionnaires and validated instruments for assessing quality of life and functional bowel status, was developed.
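To give a feel for how these four sources come together in analysis, the sketch below merges pseudonymized partial records into one analysis record per patient and flags which sources are still missing. The source names, codes and fields are illustrative assumptions; the actual abstraction rules are defined in the SQG data abstraction protocols.

```python
# Illustrative labels for the four data sources described above.
REQUIRED_SOURCES = {"chart", "claims", "sickness_fund", "survey"}

# Pseudonymized partial records delivered per source (illustrative content).
deliveries = [
    {"pid": "ab12", "source": "chart",         "data": {"comorbidity": "diabetes"}},
    {"pid": "ab12", "source": "claims",        "data": {"icd": "C20", "ops": "5-484"}},
    {"pid": "ab12", "source": "sickness_fund", "data": {"vital_status": "alive"}},
    {"pid": "cd34", "source": "claims",        "data": {"icd": "C18.7"}},
]

def assemble(records):
    """Merge per-source records into one analysis record per patient and
    report which of the four required sources are still missing."""
    merged = {}
    for rec in records:
        entry = merged.setdefault(rec["pid"], {"sources": set(), "data": {}})
        entry["sources"].add(rec["source"])
        entry["data"].update(rec["data"])
    for entry in merged.values():
        entry["missing"] = sorted(REQUIRED_SOURCES - entry["sources"])
    return merged

for pid, entry in assemble(deliveries).items():
    print(pid, entry["data"], "missing:", entry["missing"])
```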

Feedback procedures

Two groups of feedback procedures were identified. First, indicator results that could be attributed unambiguously to healthcare providers or facilities were to be embedded in the established feedback procedures of the German SQG program [24]. These procedures include the provision of a benchmarking quality report and a ‘structured dialogue’ with healthcare providers achieving poor results in order to identify quality problems.

Providing feedback for the second group of indicators, which reflect cross-sectoral multidisciplinary coordination and shared responsibilities, such as the time from surgical resection to the start of chemotherapy, was more complex. For this group of indicators (area indicators), it was proposed not to address feedback to individual healthcare providers or facilities, but to define reference regions, such as hospital catchment areas, and to provide feedback within multidisciplinary quality circles to promote quality improvement.
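The sketch below shows how such an area indicator might be aggregated: each patient's time from resection to the start of chemotherapy is attributed to a reference region rather than to a single provider, and the regional results feed the quality-circle discussion. The region assignments, the 8-week threshold and the field names are illustrative assumptions, not values from the SQG specification.

```python
from collections import defaultdict
from datetime import date

# Hypothetical linked records: one row per patient with an area-indicator event.
cases = [
    {"region": "Region A", "resection": date(2011, 2, 1),  "chemo_start": date(2011, 3, 10)},
    {"region": "Region A", "resection": date(2011, 4, 5),  "chemo_start": date(2011, 6, 20)},
    {"region": "Region B", "resection": date(2011, 1, 12), "chemo_start": date(2011, 2, 25)},
]

MAX_DELAY_DAYS = 56  # illustrative threshold (8 weeks), not taken from the indicator set

def area_indicator(records):
    """Share of patients per reference region starting chemotherapy within the threshold."""
    per_region = defaultdict(lambda: [0, 0])  # region -> [within threshold, total]
    for rec in records:
        delay = (rec["chemo_start"] - rec["resection"]).days
        per_region[rec["region"]][0] += delay <= MAX_DELAY_DAYS
        per_region[rec["region"]][1] += 1
    return {region: f"{ok}/{n} ({ok / n:.0%})" for region, (ok, n) in per_region.items()}

print(area_indicator(cases))  # e.g. {'Region A': '1/2 (50%)', 'Region B': '1/1 (100%)'}
```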

Final report

The final report comprised a detailed description of the methodology, the final set of quality indicators and data abstraction forms for each indicator according to the established data sources and structures within the German healthcare system. Furthermore, alternative implementation methods, such as peer-review auditing and the introduction of specific reimbursement codes (OPS codes or fee schedule items), were proposed and discussed in order to reduce data collection time and effort. The final report was approved by the G-BA in December 2011.

Discussion

In this study, a set of 52 quality indicators was developed to reflect the entire pathway of colorectal cancer care. Data specifications for the final set of indicators were developed, including various methods of data collection and analysis and options for feeding back measurement results to healthcare providers and facilities.

The decision of the G-BA to include the clinical domain of CRC, one of the most prevalent cancer entities nationwide, in the mandatory nationwide SQG program [24] reflects the need to provide information on the quality of CRC care [28]. Large international studies of cancer survival have reported that the data delivered from Germany covered only one to four percent of the national population [29], and the International Agency for Research on Cancer assessed German cancer incidence rates as not valid [1].

Quality indicator development

The indicators were developed using the ‘RAND/UCLA Appropriateness Method’ [26], which systematically combines scientific evidence and expert opinion and has proven to be a scientifically sound method of indicator development [7]. Although there were disagreements between the various disciplines within the multidisciplinary panel, it was possible to agree on a final set of 52 indicators out of the 210 candidate indicators presented to the panel. As demonstrated in previous literature, the composition of the panel, comprising multidisciplinary medical professionals and patient representatives, stimulated interaction during the consensus meetings and led to a more comprehensive set of indicators [17], [30].

As most quality indicators identified in the systematic search were developed in other countries, they could not be transferred directly but had to be adapted to the German healthcare system and to the recommendations of the German S3 guideline on CRC during the panel process [31], [32]. As candidate indicators were presented in templates that included (where available) the underlying evidence, indicators supported by high-level evidence-based guideline recommendations were generally agreed unanimously by the panel members. The various medical disciplines involved in the CRC care process were addressed comprehensively in the final set of indicators. However, clinical oncologists complained about the imbalance between the number of indicators representing surgery and those representing chemotherapy, and called for a broader focus on chemotherapy indicators. Although chemotherapy is an essential component of multimodal therapy regimens for many patients with CRC, surgical therapy as a curative approach relates to a broader patient sample (denominator): according to a German multi-center study, over 90% of CRC patients receive surgical treatment [33]. Measuring chemotherapy indicators is more challenging because more quality information is needed to define the appropriate sample (denominator), as chemotherapy is suitable for only a portion of all patients with CRC and, additionally, some of these patients are either unable to tolerate chemotherapy or refuse it. Even more information is needed to measure the application of specific chemotherapy agents or to reflect technical measures describing the details of chemotherapy administration within a variety of chemotherapy protocols and individual responses. These limitations led to the conclusion that measuring such indicators is not feasible [34].

It has been questioned whether the participants of indicator-rating panels, usually expert clinicians, are qualified to rate the feasibility of indicators, which addresses operational issues of indicator implementation [35]. It appears difficult for panelists to assess the time and effort of the data collection procedures necessary to operationalize an indicator [35]. Assessing feasibility may be beyond the scope of clinical experts, as they are generally not experts in data collection and analysis [8]. Therefore, expert ratings can only provide a first appraisal of the feasibility of indicators, which has to be confirmed by data collection specialists and tested in practice using a validated testing protocol [20], [21].

Within the SQG program, special emphasis was placed on patients' perspectives, resulting in the participation of two patient representatives in the multidisciplinary panel and the development of seven indicators specifically reflecting patients' perspectives. This was quite innovative, as it has been shown that patient participation in indicator development is extremely uncommon [19]. However, as patients' perspectives on quality assessment and medically based measures of quality may differ [36], the inclusion of two patient representatives may not be sufficient to reflect patients' views comprehensively. As similar problems were observed in other SQG procedures, separate focus groups with patients will be established in future SQG procedures to supplement the program's methods [24], [25].

Indicator data specification

Compared with the role of panel ratings in identifying consensus and developing valid indicators, it is more challenging to specify how to measure the agreed quality indicators [17], particularly when measurement requires the combination of data sources from different healthcare sectors with variable data availability. Remuneration systems differ considerably between inpatient and outpatient settings. Inpatient data collection caused only minor problems, as quality information could be derived from routine data, including coded information on diagnoses based on the International Classification of Diseases (ICD-10-GM) and coded diagnostic and therapeutic procedures (OPS codes). In the German ambulatory sector, however, OPS codes were not available. There, information on diagnoses (ICD-10-GM) and procedures (fee schedule items) could be derived from the information systems used for clinical and administrative purposes by healthcare providers [25].

Quality measurement of follow-up procedures is resource intensive. Initial experience with follow-up measurement in Germany was gained with liver transplant recipients, whose follow-up rate was only 67.5% [37]. In our study, it was proposed to use administrative data from sickness funds as the source of information on patients' vital status. As about ninety percent of the German population is covered by the statutory health insurance (SHI) system, these data represent an innovative method for measuring risk-adjusted long-term outcomes [37], [38].

Practice testing prior to the use of quality indicators is an important step in assessing them against the required attributes, such as validity, reliability, feasibility and sensitivity to change [7]. As demonstrated in previous literature, only 10 to 20% of the quality indicators developed for different clinical conditions have been measured in practice tests [8]. Although protocols for practice testing of indicators are available [20], [21], technical specifications of measurement are sparse, even though this is an important step in preparing practice tests [39]. This also includes taking into account confounding factors due to hospital case mix and socio-demographic variables [8]. Adjustment for socio-demographic variables and case mix is very important for the reliable interpretation of indicator results; otherwise, healthcare providers may avoid treating high-risk groups [40].
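As a rough sketch of what such case-mix adjustment can look like in practice, the code below compares each provider's observed mortality with the mortality expected from its patients' individual risk profiles and reports an observed-to-expected (O/E) ratio. The per-patient expected risks are assumed to come from an external risk model (for example, a regression on age and co-morbidity fitted on claims data, in the spirit of [38]); the numbers and field names are illustrative only.

```python
from collections import defaultdict

# Hypothetical linked records: outcome plus an expected risk from a risk model.
patients = [
    {"provider": "Hospital A", "died": True,  "expected_risk": 0.30},  # high-risk case
    {"provider": "Hospital A", "died": False, "expected_risk": 0.05},
    {"provider": "Hospital B", "died": True,  "expected_risk": 0.05},  # low-risk case
    {"provider": "Hospital B", "died": False, "expected_risk": 0.05},
]

def oe_ratios(records):
    """Observed-to-expected mortality ratio per provider.
    A ratio above 1 means more deaths than the case mix would predict."""
    observed = defaultdict(int)
    expected = defaultdict(float)
    for rec in records:
        observed[rec["provider"]] += rec["died"]
        expected[rec["provider"]] += rec["expected_risk"]
    return {p: observed[p] / expected[p] for p in observed}

for provider, ratio in oe_ratios(patients).items():
    print(f"{provider}: O/E = {ratio:.2f}")
# Hospital A treats sicker patients, so its single death yields O/E of about 2.9,
# while Hospital B's death among low-risk patients yields O/E = 10.0.
```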

Although we were able to specify procedures for operationalizing all 52 indicators in the final set, with a preference for routinely collected administrative and clinical data, a large amount of data would still have to be collected through additional recording for quality assurance purposes. The AQUA-Institute considered this to reduce the feasibility of implementing the indicators in routine settings. To reduce documentation effort, alternative data collection designs, such as peer-review audits, were described. This method was also suggested to be superior in terms of reliability because, for instance, complication rates should not depend on the coding of the treating physician, which is prone to manipulation. The introduction of new specific process codes (OPS codes or fee schedule items) was proposed as another alternative data collection method, to be considered and decided on by the G-BA.

Incorporation of results in established quality improvement strategies

Measuring quality indicators is not an end in itself; it is the basis for developing and evaluating quality improvement strategies. In Germany, several initiatives have been established to improve the quality of cancer care: the Ministry of Health launched the National Cancer Plan in 2009, with its main focus on harmonizing treatment across the 16 federal states [41]. Other quality improvement initiatives for CRC care have focused mainly on the hospital sector [25] or on colorectal surgery [33]. The challenge now is to integrate the SQG program into existing initiatives, for instance the national cancer registries, to avoid redundant data entry. Therefore, working groups have been established to harmonize the interests of the various stakeholders and to discuss requirements with the G-BA.

Strengths and limitations

The strength of the ‘RAND/UCLA Appropriateness Method’ is its combination of evidence from the literature and expert opinion, which makes it possible to provide a set of well-founded quality indicators [7]. The multidisciplinary composition of the panel and the intensive discussion during the panel meetings resulted in a set of indicators reflecting the entire pathway of care [30]. The nationwide, politically supported SQG program has the potential to provide valid nationwide quality data on colorectal cancer and to link ambulatory and hospital services. Additionally, the methods developed in this study can provide case-mix-adjusted quality information, protecting physicians and hospitals from an unjust appraisal of their performance.

A limitation of this study is that harmonizing data entry and reaching agreement with the G-BA are time-consuming processes that will delay the further development of these indicators, such as feasibility field testing and the roll-out of the program.

Conclusions

In Germany, a set of 52 quality indicators covering all relevant aspects of the CRC care process, including its cross-sectoral interfaces, has been developed. Combining different healthcare sectors and data sources in quality assessment is an innovative and challenging approach, but it reflects the reality of patients' experiences of CRC better than sets of indicators that address individual sectors (ambulatory or hospital) in isolation. It reflects the interdisciplinary coordination that characterizes CRC care and will help to address quality improvement across healthcare interfaces.

Supporting Information

Table S1.

Description of quality indicators for colorectal cancer care.

https://doi.org/10.1371/journal.pone.0060947.s001

(DOCX)

Table S3.

Systematic literature search – search terms and procedures.

https://doi.org/10.1371/journal.pone.0060947.s003

(DOCX)

Table S4.

Systematic literature search – hits.

https://doi.org/10.1371/journal.pone.0060947.s004

(DOCX)

Table S5.

Systematic literature search - inclusion and exclusion criteria.

https://doi.org/10.1371/journal.pone.0060947.s005

(DOCX)

Table S6.

Systematic literature search – included papers.

https://doi.org/10.1371/journal.pone.0060947.s006

(DOCX)

Acknowledgments

The following individuals were participants of the multidisciplinary rating panel: Becker H, Engeser P, Hass M, Hermanek P, Keller M, Kleeberg U, Klinkhammer-Schalke M, Köhne CH, Merkel S, Pox C, Raab HR, Richter J, Schulz K, Tannapfel A.

Author Contributions

Conceived and designed the experiments: SL JS JE BB. Performed the experiments: SL EU SW JE AM KG KH RB DO. Analyzed the data: SL EU SW JE. Contributed reagents/materials/analysis tools: SL MW SC GH PKK BB. Wrote the paper: SL.

References

  1. Ferlay J, Parkin DM, Steliarova-Foucher E (2010) Estimates of cancer incidence and mortality in Europe in 2008. Eur J Cancer 46: 765–781.
  2. Ferlay J, Shin HR, Bray F, Forman D, Mathers C, et al. (2010) Estimates of worldwide burden of cancer in 2008: GLOBOCAN 2008. Int J Cancer 127: 2893–2917.
  3. Schmiegel W, Pox C, Adler G, Fleig W, Folsch UR, et al. (2004) [S3-Guidelines Conference “Colorectal Carcinoma” 2004] [German]. Z Gastroenterol 42: 1129–1177.
  4. Coleman EA, Berenson RA (2004) Lost in transition: challenges and opportunities for improving the quality of transitional care. Ann Intern Med 141: 533–536.
  5. Spinks T, Albright HW, Feeley TW, Walters R, Burke TW, et al. (2012) Ensuring quality cancer care. Cancer 118: 2571–2582.
  6. van Gijn W, van de Velde CJH (2010) Improving quality of cancer care through surgical audit. Eur J Surg Oncol 36 Suppl 1: S23–S26.
  7. Campbell SM, Braspenning J, Hutchinson A, Marshall M (2002) Research methods used in developing and applying quality indicators in primary care. Qual Saf Health Care 11: 358–364.
  8. Wollersheim H, Hermens R, Hulscher M, Braspenning J, Ouwens M, et al. (2007) Clinical indicators: development and applications. Neth J Med 65: 15–22.
  9. McGory ML, Shekelle PG, Ko CY (2006) Development of quality indicators for patients undergoing colorectal cancer surgery. J Natl Cancer Inst 98: 1623–1633.
  10. Malin JL, Asch SM, Kerr EA, McGlynn EA (2000) Evaluating the quality of cancer care: development of cancer quality indicators for a global quality assessment tool. Cancer 88: 701–707.
  11. Desch CE, McNiff KK, Schneider EC, Schrag D, McClure J, et al. (2008) American Society of Clinical Oncology/National Comprehensive Cancer Network Quality Measures. J Clin Oncol 26: 3631–3637.
  12. Dixon E, Armstrong C, Maddern G, Sutherland F, Hemming A, et al. (2009) Development of quality indicators of care for patients undergoing hepatic resection for metastatic colorectal cancer using a Delphi process. J Surg Res 156: 32–38.
  13. Patwardhan MB, Samsa GP, McCrory DC, Fisher DA, Mantyh CR, et al. (2006) Cancer care quality measures: diagnosis and treatment of colorectal cancer. Evid Rep Technol Assess (Full Rep): 1–116.
  14. Arden M (2011) Measuring quality in colorectal surgery. BMJ 343.
  15. Demetter P, Ceelen W, Danse E, Haustermans K, Jouret-Mourin A, et al. (2011) Quality of care indicators in rectal cancer. Acta Gastroenterol Belg 74: 445–450.
  16. Institute of Medicine (2001) Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press. 1–337.
  17. Uphoff EP, Wennekes L, Punt CJ, Grol R, Wollersheim HC, et al. (2012) Development of generic quality indicators for patient-centered cancer care by using a RAND modified Delphi method. Cancer Nurs 35: 29–37.
  18. Hermens RP, Ouwens MM, Vonk-Okhuijsen SY, van der WY, Tjan-Heijnen VC, et al. (2006) Development of quality indicators for diagnosis and treatment of patients with non-small cell lung cancer: a first step toward implementing a multidisciplinary, evidence-based guideline. Lung Cancer 54: 117–124.
  19. Kotter T, Blozik E, Scherer M (2012) Methods for the guideline-based development of quality indicators – a systematic review. Implementation Sci 7: 21.
  20. Campbell S, Kontopantelis E, Hannon K, Burke M, Barber A, et al. (2011) Framework and indicator testing protocol for developing and piloting quality indicators for the UK quality and outcomes framework. BMC Fam Pract 12: 85.
  21. American Medical Association (2010) Measure Testing Protocol for Physician Consortium for Performance Improvement (PCPI) Performance Measures. Available: http://www.ama-assn.org/resources/doc/cqi/pcpi-testing-protocol.pdf. Accessed 2012 Jun 5.
  22. Gerlinger T (2010) Health care reform in Germany. German Policy Studies 6: 107–142.
  23. The Organisation for Economic Co-operation and Development (2012) OECD Health Data 2012. Available: http://www.oecd.org/health/healthpoliciesanddata/oecdhealthdata2012.htm. Accessed 2013 Feb 5.
  24. AQUA-Institute (2012) Cross-sectoral quality in health care. Available: http://www.sqg.de/startseite. Accessed 2012 Aug 6.
  25. Szecsenyi J, Broge B, Eckhardt J, Heller G, Kaufmann-Kolle P, et al. (2012) Tearing down walls: opening the border between hospital and ambulatory care for quality improvement in Germany. Int J Qual Health Care 24: 101–104.
  26. Fitch K, Bernstein SJ, Aguilar MD (2000) The RAND/UCLA Appropriateness Method User's Manual. Santa Monica, California: RAND Corporation.
  27. Arah OA, Westert GP, Hurst J, Klazinga NS (2006) A conceptual framework for the OECD Health Care Quality Indicators Project. Int J Qual Health Care 18 Suppl 1: 5–13.
  28. der Schulenburg JM, Prenzler A, Schurer W (2010) Cancer management and reimbursement aspects in Germany: an overview demonstrated by the case of colorectal cancer. Eur J Health Econ 10: 21–26.
  29. Coleman MP, Quaresma M, Berrino F, Lutz JM, De Angelis R, et al. (2008) Cancer survival in five continents: a worldwide population-based study (CONCORD). Lancet Oncol 9: 730–756.
  30. Campbell SM, Shield T, Rogers A, Gask L (2004) How do stakeholder groups vary in a Delphi technique about primary mental health care and what factors influence their ratings? Qual Saf Health Care 13: 428–434.
  31. Marshall MN, Shekelle PG, McGlynn EA, Campbell S, Brook RH, et al. (2003) Can health care quality indicators be transferred between countries? Qual Saf Health Care 12: 8–12.
  32. Steel N, Melzer D, Shekelle PG, Wenger NS, Forsyth D, et al. (2004) Developing quality indicators for older adults: transfer from the USA to the UK is feasible. Qual Saf Health Care 13: 260–264.
  33. Kube R, Ptok H, Wolff S, Lippert H, Gastinger I (2009) Quality of medical care in colorectal cancer in Germany. Onkologie 32: 25–29.
  34. Prosnitz RG, Patwardhan MB, Samsa GP, Mantyh CR, Fisher DA, et al. (2006) Quality measures for the use of adjuvant chemotherapy and radiation therapy in patients with colorectal cancer: a systematic review. Cancer 107: 2352–2360.
  35. Pena A, Virk SS, Shewchuk RM, Allison JJ, Dale Williams O, et al. (2010) Validity versus feasibility for quality of care indicators: expert panel results from the MI-Plus study. Int J Qual Health Care 22: 201–209.
  36. Elwyn G, Buetow S, Hibbard J, Wensing M (2007) Measuring quality through performance. Respecting the subjective: quality measurement from the patient's perspective. BMJ 335: 1021–1022.
  37. Busse R, Nimptsch U, Mansky T (2009) Measuring, monitoring, and managing quality in Germany's hospitals. Health Aff (Millwood) 28: w294–w304.
  38. Heller G, Schnell R (2007) Hospital mortality risk adjustment using claims data. JAMA 297: 1983–1984.
  39. Rubin HR, Pronovost P, Diette GB (2001) Methodology matters. From a process of care to a measure: the development and testing of a quality indicator. Int J Qual Health Care 13: 489–496.
  40. Werner R, Asch D (2005) The unintended consequences of publicly reporting quality information. JAMA 293: 1239–1244.
  41. Dente K (2009) Cancer meeting in Germany highlights need for national registry. Nat Med 15: 825.