Abstract
Administrative data play an important role in the performance monitoring of healthcare providers. Nonetheless, little attention has been given so far to the evaluation of emergency departments (EDs). In addition, most existing research focuses on a single core ED function, such as treatment or triage, thus providing a limited picture of performance. The goal of this study is to harness the value of routinely produced records by proposing a framework for the multidimensional performance evaluation of EDs, able to support internal decision stakeholders in managing operations. Starting with an overview of administrative data and the definition of the desired framework characteristics from the perspective of decision stakeholders, a review of the academic literature on ED performance measures and indicators is conducted. A performance measurement framework is designed using 224 ED performance metrics (measures and indicators) satisfying established selection criteria. Real-world feedback on the framework is obtained through expert interviews. Metrics in the proposed ED performance measurement framework are arranged along three dimensions: performance (quality of care, time-efficiency, throughput), analysis unit (physician, disease, etc.), and time-period (quarter, year, etc.). The framework has been judged as “clear and intuitive”, “useful for planning”, able to “reveal inefficiencies in the care process” and “transform existing data into decision support information” by the key ED decision stakeholders of a teaching hospital. Administrative data can be a new cornerstone for healthcare operations management. A framework of ED-specific indicators based on administrative data enables multidimensional performance assessment in a timely and cost-effective manner, an essential requirement for today’s resource-constrained hospitals.
Moreover, such a framework can support different stakeholders’ decision-making, as it allows the creation of customized metric sets for performance analysis with the desired granularity.
Citation: Soldatenkova A, Calabrese A, Levialdi Ghiron N, Tiburzi L (2023) Emergency department performance assessment using administrative data: A managerial framework. PLoS ONE 18(11): e0293401. https://doi.org/10.1371/journal.pone.0293401
Editor: Gokhan Agac, Sakarya University of Applied Sciences, TURKEY
Received: May 5, 2023; Accepted: October 11, 2023; Published: November 2, 2023
Copyright: © 2023 Soldatenkova et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Data Availability: All relevant data are within the manuscript and its Supporting Information files (S1 Table and S2 Table).
Funding: The author(s) received no specific funding for this work.
Competing interests: The authors have declared that no competing interests exist.
Introduction
Overcrowding and, consequently, long length of stay (LLOS) are two of the starkest examples of poor performance in emergency departments (EDs). According to major media outlets, it is not unusual for prospective patients to wait as long as 12 hours for a bed in Italy (accessed on May 30th, 2022) and in the UK (accessed on May 30th, 2022), just to name two of the many examples that can be found online. The roots of this poor performance and its consequences have been investigated in depth. It is well known that low resource levels, a high number of non-urgent visits, and seasonality are among the most common factors behind poor ED performance [1, 2]. Regarding the consequences, it has been established that the overcrowding resulting from poor ED performance reduces the likelihood of hospitalization [3, 4]. It also negatively affects quality of care, as physicians forced to work under extreme pressure are more prone to medication errors [5, 6]. As its direst outcome, all else being equal, poor ED performance increases mortality rates [7, 8], thus reducing quality of care particularly for acutely ill patients [9].
In a recently published article, Yarmohammadian et al. [10] have reviewed the most widely adopted solutions to de-escalate the problem. Such solutions can be broadly classified into two categories, namely resource increase and resource optimization. Obviously, more resources would lead, ceteris paribus, to better outcomes. The alternative is to make the most of the available ones. One solution in this realm, for example, managed to improve ED performance, i.e., to reduce average length of stay (LOS) and overcrowding, by dividing patients into tracks with different priorities based on predefined clinical parameters [11, 12], also known as FAST tracks [13]. Along this line, more dynamic models have been proposed, which combine a patient’s condition with their LOS level [14]. Traditionally, all these remedies have focused on the processes or outcomes of core ED functions (e.g., triage, resuscitation, diagnostics, treatment and disposition of patients) and on time targets [15, 16].
A less conventional approach to improving ED performance has lately been proposed by Wachtel and Elalouf [1]. They have argued that, in addition to clinical factors, other less obvious elements may affect it. To elicit these additional factors, Wachtel and Elalouf [1] have collected various clinical and non-clinical variables and analyzed their effect on ED performance. Finally, they have developed an algorithm, based on the values of the relevant parameters, that hospital managers can use, among other things, to reduce average LOS and overcrowding. Their results indicate that, alongside intuitively reasonable clinical factors such as a patient’s blood test and heart rate upon arrival, one non-clinical variable is also highly significant in determining ED performance, namely the patient’s number of accompanying escorts.
Wachtel and Elalouf’s [1] approach suggests a paradigm change in trying to improve EDs’ performance. In this new setting, the focus shifts from the patient’s physical condition to a more holistic perspective, namely the status of the organization, the hospital, and its stakeholders [15, 17]. In Wachtel and Elalouf’s [1] case, the patient’s number of accompanying escorts affects the agility with which the ED reacts. Thus, an alternative way to frame the problem and enrich it with new insights is to examine those organizational factors that can be used to improve the management of patients.
Along the above line of enquiry, this study develops a framework that uses administrative health data to analyze patients’ flow through EDs. Administrative health data are records of service provision produced routinely by healthcare providers. They have gained popularity in research due to their numerous advantages, including availability, low cost, large sample sizes, and population coverage [18]. Administrative data allow for the investigation of a wide range of aspects, including surgical interventions, treatments, healthcare access, costs of care, and variations in resource use [19], but their crucial role in healthcare research is attributable to their use for the performance evaluation of health services [20].
Healthcare providers use their administrative data to quantify and compare the performance of selected aspects of care [21]. Evaluation procedures are based on the calculation of measures and indicators, and the results are used for quality control and improvement as well as in mandatory reporting [22]. Thus, routine data represent an important source for performance monitoring at different levels, such as macro (national, regional, local) or micro (individual provider, clinical area, etc.).
The goal of this study is to harness the value of routinely produced administrative records by developing a framework for the multidimensional performance evaluation of EDs that would support internal decision stakeholders in managing operations. This work responds to the need for a practical tool to be used by ED stakeholders to support decision-making, allowing for the prompt detection of problems in care quality and care delivery based on already available information. To the best of the authors’ knowledge, the only study offering a practical framework for ED performance management has been conducted by Núñez et al. [23]. The authors proposed a set of 75 performance indicators related to processes carried out by EDs that are relevant for monitoring purposes. However, one serious limitation of the Núñez et al. [23] framework’s application is that the data it uses are not readily available but need to be purposely collected, thus adding an additional burden to an already resource-constrained system. The framework developed in this study, also validated by a series of interviews with a panel of experts, overcomes this limitation as it uses data that are collected on a daily basis by hospitals’ information systems.
Methods
In this research, no administrative record related to any patient has been examined, cited, or referred to. The data in this article consist of theoretical indicators extracted from published scientific papers (properly cited). For the interviews evaluating the framework, i.e., its indicators, doctors and managers provided their verbal consent. In any case, the Ethics Committee expressed a favorable opinion on the dissemination of the contents of this manuscript.
In detail, the research method comprises two broad phases: 1) the design of a multidimensional framework to harness hospital administrative data; 2) an expert session to evaluate the framework. This method loosely follows Gu and Itoh [24], Mantwill, Monestel-Umaña, and Schulz [25], and Stremersch and Van Dyck [26].
In the first phase, the framework is designed bottom up using the logic suggested by Keegan, Eiler, and Jones [27], which includes a definition of the strategic objectives and a decision of what to measure to reach such objectives. In practice, the following steps have been performed:
- definition of administrative data and overview of the information they contain;
- definition of the framework’s characteristics from the perspective of ED decision stakeholders, and the corresponding key attributes of measures and indicators;
- review of the academic literature on ED performance measures and indicators;
- selection of metrics to include in the framework based on the characteristics defined in step 2);
- organization of the selected indicators into a framework.
In step 4), particular emphasis has been given to the selection of measures that are easy to calculate using information already available in ED administrative data sources. This avoids the managerial framework becoming yet another burden causing information overload and poor performance [24].
Regarding the second phase, four stakeholders have been identified for the semi-structured interviews, which took place in May 2022. They represent target users of the framework with different roles [28]. This role-wise classification reflects diverse professional figures (administrative, clinical, operational) as well as the different ways they interact with data in the hospital’s information systems. The General Hospital Manager, the Controller, the ED Clinical Director, and the COU (Complex Operating Unit) Clinical Director have been selected for the interviews. None of the interviewees mentioned anything that could be used to identify specific patients, clinical records, and/or the interviewees themselves. The interviews are therefore fully anonymous; for the correct interpretation of the assessment results, a general profile of each participant is presented in Table 1.
The semi-structured interviews have been conducted individually with each stakeholder in Table 1. Interviews started with a brief description of administrative data, an explanation of the study’s aim and methodology, and the specification of the framework. Then, the participants have been asked to evaluate different aspects of the framework. In one case, the evaluation consisted of a binary scale (see S1 Table); in another, a five-point Likert scale ranging from 1 (highly irrelevant) to 5 (highly relevant) has been used (see S2 Table). For the evaluation, the indicators of the original framework have been aggregated into groups based on their scope and dimension, such as “Quality patient-related metrics: Adverse events”. To illustrate each group, some examples of the indicators belonging to it were given together with the questionnaire (see S2 Table).
Framework’s design
This section describes the results of the five steps of phase 1) of the method.
Administrative data
Administrative data are records on care delivery that are collected for management rather than research objectives [29]. They are generated at patient discharge from the hospital or a hospital-based facility as part of standard coding procedures [30]. Although databases vary in design and coverage [31], they typically contain basic administrative information about the patient (e.g., age, gender, race, residence code, admission and discharge dates), limited clinical data regarding the hospital stay (e.g., the patient’s conditions, procedures received, vital signs, drugs prescribed), and financials (e.g., source of payment, cost of procedures) [32, 33]. Considering their nature, they present some non-trivial ethical challenges, particularly regarding their storage and accessibility [34].
In health services research, two major types of administrative data are used, namely claims data and hospital discharge abstracts [33]. The former represents billing data to the insurer generated by providers for reimbursement purposes; the latter describes hospital-based services for inpatients abstracted from medical records. The corresponding data sources are called physician billing and hospitalization databases respectively [35]. The main difference is that claims data files are limited to what is billed to the specific payer and provide information across a range of inpatient and outpatient providers as long as the person remains enrolled, while hospital discharge abstract data sets contain records on hospitalizations and services of patients within that hospital only, regardless of payer [33].
This work investigates performance evaluation of the ED setting with administrative data. It therefore considers hospital discharge abstract data; in particular, records from the hospital information system regarding services provided to patients in the ED.
Defining framework characteristics and key attributes of measures
Performance assessment tools can be designed to provide both qualitative and quantitative outcomes. The most widespread assessment approach is the calculation of performance indicators, which represent “statistical devices” designed to highlight potential problems in care, usually by demonstrating the distance between the calculated value and a benchmark [36]. The development and application of performance assessment tools differ depending on the stakeholders they aim to support [37]. For instance, clinical safety and treatment efficiency indicators may be of higher priority for clinicians, while patient satisfaction could be a main concern of a hospital administrator [38].
The framework developed in this work takes the perspective of ED stakeholders: managers and physicians, who manage operations and are the decision-makers for care delivery organization and planning. Hence, it is designed to provide care delivery performance monitoring and to enable the prompt identification of areas for improvement. Furthermore, considering how resource-constrained hospitals are, the framework is conceived so that its implementation and usage do not represent an additional financial burden and do not require extra resource utilization [24, 39].
The selected performance indicators must possess technical and value attributes (Table 2). The technical attributes mean that the indicators must be timely and simple, costless, and available. The value attributes mean the indicators are also valid, general, and scalable. Jointly, the two attribute types ensure that the indicators are easy to apply in real-life contexts and provide enough information for a holistic performance evaluation [24, 36, 40].
The use of administrative data sources guarantees that the indicators are costless and available (they are already being collected); the analysis of the academic literature, presented below, ensures validity; the remaining features (scalability, generality, timeliness, and simplicity) are assessed by the authors together with the interviewees.
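As an illustration of how such indicators operate in practice, the following Python sketch computes one hypothetical indicator (the share of visits exceeding a length-of-stay benchmark) from mock administrative records and compares it against a benchmark, in the spirit of the "statistical device" definition above. All field names, values, and the 6-hour threshold are illustrative assumptions, not an actual hospital schema or target.

```python
from datetime import datetime

# Mock ED attendance records mimicking fields typically found in
# administrative data (field names are illustrative assumptions).
records = [
    {"arrival": "2022-03-01 10:00", "discharge": "2022-03-01 13:30"},
    {"arrival": "2022-03-01 11:15", "discharge": "2022-03-01 21:40"},
    {"arrival": "2022-03-01 12:05", "discharge": "2022-03-01 12:50"},
]

def los_hours(rec):
    """Length of stay in hours, derived from arrival and discharge timestamps."""
    fmt = "%Y-%m-%d %H:%M"
    delta = datetime.strptime(rec["discharge"], fmt) - datetime.strptime(rec["arrival"], fmt)
    return delta.total_seconds() / 3600

# Indicator: share of visits whose LOS exceeds a benchmark (hypothetical 6 hours).
BENCHMARK_HOURS = 6
over = sum(1 for r in records if los_hours(r) > BENCHMARK_HOURS)
share_over_benchmark = over / len(records)
print(f"{share_over_benchmark:.0%} of visits exceed the {BENCHMARK_HOURS}h benchmark")
```

The distance between the computed share and the benchmark value is what flags a potential problem in care, as in the definition by [36].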
Literature review
This section describes the review of the academic literature to identify the most appropriate metrics (indicators and measures) used for ED performance assessment following the criteria in Table 2. The review is based on the widely accepted protocols developed by Tranfield, Denyer, and Smart [41] and Greenhalgh [42].
Scopus and Web of Science (WoS) have been used in combination to identify papers containing ED performance indicators and measures. The following search strings, “performance indicator*”, “performance measure*”, “emergency department”, and “ED”, have been searched for in the papers’ titles, abstracts, and keywords. No time restrictions have been applied. An additional search has been made to capture those metrics using administrative data. The search strings in this case included “administrative data”, “claims”, “routine data”, and “performance”. In both cases, the terms “overall”, “global”, “general”, and “whole” have been added to capture general-purpose indicators, as indicated in Table 2. Only peer-reviewed research papers and reviews written in English have been considered.
The screening process included the following steps (the number of articles at each step is provided in parentheses): citation extraction from the two databases (n = 108), elimination of duplicates (n = 22 removed), title screening (n = 86), abstract screening (n = 80), and full-text analysis (n = 40). Moreover, each article was required to include metrics meeting the following additional criteria:
- be explicitly referred to as measuring performance;
- be designed for or applied to the ED setting;
- evaluate overall performance;
- not be specific to any clinical condition (disease or presenting complaint).
Among the 40 articles that underwent full-text analysis, 24 fulfilled the additional criteria above. At this point, the references of each of the 24 articles were scanned for relevant material, and three more articles were included as a result. The final sample consists of 27 articles.
Emergency department performance framework
A total of 877 performance metrics have been identified in the 27 studies included in the full-text review. Applying the selection criteria defined above, 224 metrics (measures and indicators) have been extracted and organized into the performance measurement framework presented in Fig 1.
The framework is designed as a cube composed of metric groups within three related measurement dimensions: performance, analysis unit, and time-period. The performance dimension is represented by quality of care and time-efficiency [36, 43, 44]. This classification is drawn from the primary function of an emergency department, as highlighted in the Institute of Medicine report [45]: the provision of high-quality care in a timely way. The analysis unit comprises the entities in a performance evaluation procedure, namely physician, nurse, disease, triage category, etc., as well as the overall facility level. The time-period sets the timeframe of the performance analysis, such as a month, a quarter, or a year. Overall, the analysis unit and time-period help in data grouping and filtering, while the performance dimension reveals the evaluation domain.
Performance metrics identified in the literature have been organized into five groups within the performance dimension. They are represented in the cells on the cube’s top layer. Within the performance dimension, “temporal” and “quality” metrics have been distinguished, each in turn divided into patient-related and provider-related. This arrangement results in four performance metric groups: temporal patient-related, temporal provider-related, quality patient-related, and quality provider-related. The quality metrics are those designed for the evaluation of the care delivery process and patient outcomes, specified as numbers or proportions of favorable or unfavorable results. The temporal metrics are the waiting times concerning the patient journey and related operations, expressed in time units such as hours and minutes. The throughput metrics have been placed at the intersection of these four groups. These metrics describe the ED’s usage through patient flow analysis. They do not represent ED performance directly, as the quality or temporal metrics do, but provide information regarding the organization of the care process. By their nature, they can help reveal the causes of time inefficiencies and poor quality outcomes. In other words, they represent the connection between measurement and management [46].
The cube’s underlying idea is that a hospital stakeholder can choose a “slice” at the analysis-unit and time-period levels of interest and calculate different types of performance metrics (e.g., a yearly report of guideline adherence by individual physician). The choice depends on the goal of performance measurement and the desired detail. Therefore, the cube is a flexible instrument that makes it possible to create targeted performance reports. The selected 224 performance metrics, organized into the five performance groups defined above (Fig 1), are presented in Tables 3–7 below. Their descriptions are reported in their original form. Table 3 presents the group of temporal patient-related metrics, where all 41 extracted performance metrics can be referred to as patients’ length of stay (LOS) between various points of a care pathway, such as arrival, triage, and care provision. The 23 metrics representing the provider-related time group have been categorized into Decision-related times (12), Intervention and procedural times (8), and Triage time-efficiency (3), as shown in Table 4.
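The cube’s slicing logic can be sketched in a few lines of code. In this minimal Python illustration, each computed metric value is indexed by its three dimensions (performance group, analysis unit, time-period), and a slice selects the groups matching the requested analysis unit and time-period. All group names, metric names, and values here are invented for illustration and do not correspond to the framework’s actual tables.

```python
# A minimal sketch of the cube: each computed metric value is keyed by
# (performance group, analysis unit, time-period). All entries are illustrative.
cube = {
    ("quality provider-related", "physician", "2022"): {"guideline_adherence_rate": 0.91},
    ("temporal patient-related", "physician", "2022"): {"median_los_minutes": 212},
    ("temporal patient-related", "triage category", "2022-Q1"): {"median_door_to_doctor_minutes": 34},
}

def slice_cube(cube, analysis_unit=None, period=None):
    """Return the metric groups matching the requested slice; None matches any value."""
    return {
        key: metrics
        for key, metrics in cube.items()
        if (analysis_unit is None or key[1] == analysis_unit)
        and (period is None or key[2] == period)
    }

# E.g., a yearly report at the individual-physician level:
report = slice_cube(cube, analysis_unit="physician", period="2022")
for (group, unit, period), metrics in sorted(report.items()):
    print(group, unit, period, metrics)
```

The choice of keys mirrors the cube’s design: the analysis unit and time-period act as filters, while the performance group determines how the resulting numbers are interpreted.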
The patient-related quality group is presented in Table 5. It is composed of 42 metrics representing the three types of patient outcomes, namely patient leaves (18), unplanned return visits within different timeframes (18), and adverse events (6). Table 6 shows provider-related quality performance metrics, which represent results of the care delivery and decisions made by medical personnel. It is the largest group with a total of 63 performance metrics, which have been categorized as Treatment Decisions: Procedures (21) and drugs (6), Documentation (21), and Guideline adherence (15). The first category represents decisions regarding performed procedures and drug administration; the second evaluates the patient data collection practice; and the last one helps to reveal the clinical practice behavior of healthcare personnel.
The last group is composed of throughput metrics, which represent counts of patients using the ED system (Table 7). The selected 55 metrics have been categorized into Patient flow by disposition (25), comprising Admission (15), Transfer (8), and Discharge (2); Overall ED occupancy (24); and Patient flow by medical staff (6).
Use-case example
The following example helps visualize how the framework can be used in a real-world scenario. A hospital manager is asked by the director of one of the departments why numerous patients with a specific clinical condition have recently visited the ED and been discharged. Clearly, it is necessary to analyze the performance of the treatment of these patients in the ED. For example, one may want to study how the process was organized to understand whether there is room for implementing a procedure that schedules a follow-up visit for such patients at discharge without worsening ED overcrowding. In this case, the hospital manager may also want to avoid the repetition of some costly exams by sending the diagnostic results directly to the clinician with whom the follow-up visit will be scheduled. However, the hospital does not maintain a specific register for this group of patients; on the other hand, it has an administrative database that registers ED attendances. The starting point in the framework’s application is the study of the administrative and clinical information present in the database. The next step is the choice of performance indicators that can be calculated using the data at hand and are relevant for the investigation.
Fig 2 shows an example of the cube for the described situation: it is sliced at the time-period “single year” and analysis unit “specific diagnosis” dimensions, with the performance dimension on the front side containing the corresponding performance metric groups and the number of selected indicators in parentheses. Such a high-level representation highlights the dimensions along which performance can be monitored with the available data. At this point, the selected metrics should be calculated and analyzed, enabling the transition from performance measurement to performance management [46].
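To make the step from slicing to calculation concrete, the following Python sketch computes one quality patient-related metric for the use case: the rate of unplanned return visits within a short window, for a specific diagnosis in a single year, over mock ED attendance rows. The field names, diagnosis codes, and the 3-day windowing rule are illustrative assumptions, not the paper’s exact indicator definitions.

```python
from datetime import datetime, timedelta

# Mock rows from an administrative database of ED attendances
# (field names and codes are assumptions, not an actual hospital schema).
attendances = [
    {"patient_id": "P1", "diagnosis": "R10", "arrival": "2022-01-10", "disposition": "discharge"},
    {"patient_id": "P1", "diagnosis": "R10", "arrival": "2022-01-12", "disposition": "admission"},
    {"patient_id": "P2", "diagnosis": "R10", "arrival": "2022-05-03", "disposition": "discharge"},
    {"patient_id": "P3", "diagnosis": "J18", "arrival": "2022-02-01", "disposition": "discharge"},
]

def return_visit_rate(rows, diagnosis, year, window_days=3):
    """Share of discharges followed by another ED visit within window_days."""
    fmt = "%Y-%m-%d"
    # Slice the data at analysis unit "specific diagnosis" and time-period "single year".
    visits = [r for r in rows if r["diagnosis"] == diagnosis and r["arrival"].startswith(year)]
    discharges = [v for v in visits if v["disposition"] == "discharge"]
    returns = 0
    for d in discharges:
        d_date = datetime.strptime(d["arrival"], fmt)
        for v in visits:
            if v["patient_id"] == d["patient_id"] and v is not d:
                gap = datetime.strptime(v["arrival"], fmt) - d_date
                if timedelta(0) < gap <= timedelta(days=window_days):
                    returns += 1
                    break
    return returns / len(discharges)

rate = return_visit_rate(attendances, diagnosis="R10", year="2022")
print(f"Return-visit rate for R10 in 2022: {rate:.0%}")
```

In this toy data set, one of the two discharged R10 patients returns within the window, so the computed rate is 50%; a manager could compare such a value across years or diagnoses before redesigning the discharge procedure.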
Experts’ assessment
The framework developed in the previous section has been discussed through a number of semi-structured expert interviews. The main goal of these interviews has been to explore the views of the key ED figures of a teaching university hospital about the framework. All the interviewees are involved in the monitoring of ED performance. They are collectively referred to as decision stakeholders.
The expert interview is a method of qualitative empirical research used to elicit expert assessments about a topic, and it has been employed considerably for knowledge production. Compared to multiple experiments, it shortens time-consuming data-gathering processes, particularly if the experts possess much practical tacit knowledge [71]. It also serves as an alternative to running in situ experiments, which, particularly in the healthcare sector, might result in dire consequences in case of failure.
The results for the questions regarding the awareness of the importance of administrative data and an overall evaluation of the framework are presented in Fig 3 (S1 Table). The General Hospital Manager is the only stakeholder not fully aware that administrative data can be used for performance assessment. All stakeholders agreed with the proposed clustering of the indicators; they also share the idea that such a structure helps in evaluating the results. The framework has been judged unanimously as comprehensive for multidimensional performance evaluation and useful to support professional activity, and described as “clear and intuitive”, “useful for planning”, able to “reveal inefficiencies in the care process” and “transform existing data into decision support information”.
With respect to ease of implementation in practice, opinions differ. The Controller and the ED Clinical Director indicate that correct implementation requires effort in terms of time and bureaucratic procedures; therefore, they think the framework is not easy to implement. On the contrary, the General Hospital Manager and the COU Clinical Director have a more positive attitude toward the framework’s implementation. During the sessions, only the Controller replied that no additional collection and digitalization of information on patient visits would be required to measure performance with the proposed framework.
As a further step, to demonstrate how the framework can be adapted to the needs of the final user(s), participants have been asked to evaluate the framework’s content with respect to two dimensions, namely motivation and decision-making (S2 Table). The first refers to the impact that each indicator has on a personal system of incentives toward professional goals. The second evaluates the usefulness of the information provided by the indicators of a specific group for the management of operations (planning, monitoring, control, etc.) and for working-context analysis.
The results of the evaluation by each stakeholder are presented in Fig 4. Some indicator groups have been given contradictory evaluations, as in the case of 4.1 (Quality provider-related metrics: treatment decisions regarding procedures, such as CT scans, X-ray counts, and lab studies), which is of low importance to the COU Clinical Director and of high interest to the General Hospital Manager. This is an expected outcome, as each indicator group must be considered within its scope of application as well as its relevance to the goals of each stakeholder. In this case, the indicators in the evaluated group provide information that is relevant from the economic point of view but inconsequential for treatment decisions.
Focusing on the pattern of each stakeholder’s results, it can be noted that for most indicator groups the evaluations in the two dimensions coincide, so they are placed on the diagonal. This demonstrates that the framework is relatively balanced as a control system because it impacts equally the motivation to contribute towards corporate objectives and decision-making activities. However, in the evaluation made by the Controller, almost half of the indicator groups (7 out of 15) have been placed above the diagonal, meaning that the decision-making aspect prevails. This can be explained by the administrative-operational focus of this stakeholder’s professional activity, which is based on daily data analysis. In addition, such a bidimensional evaluation mapping can help in the identification of “core indicators” for each stakeholder. In the present analysis, with a scale from 1 to 5, the most relevant indicator groups are those evaluated 3 or higher in both dimensions (the upper right quadrant in blue in Fig 4).
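The “core indicator” selection rule described above can be expressed in a few lines of Python: a group counts as core for a stakeholder when both its motivation and decision-making scores are 3 or higher. The group names and scores below are invented for illustration and do not reproduce any interviewee’s actual ratings.

```python
# Hypothetical bidimensional evaluations for one stakeholder, on the 1-5 scale.
evaluations = {
    "Quality patient-related: Adverse events": {"motivation": 4, "decision_making": 5},
    "Quality provider-related: Treatment decisions (procedures)": {"motivation": 2, "decision_making": 4},
    "Temporal patient-related: LOS": {"motivation": 3, "decision_making": 3},
}

# Core indicators: both dimensions scored 3 or higher (the upper right quadrant).
core = [
    group
    for group, scores in evaluations.items()
    if scores["motivation"] >= 3 and scores["decision_making"] >= 3
]
print(core)
```

Applying this rule per stakeholder yields a tailored subset of the 224 metrics, which is how the framework’s content can be adapted to each professional role.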
Discussion
Little attention has been given so far to the usage of administrative data in emergency departments (EDs). On the one hand, for ED functions, performance measurement is traditionally focused on processes or outcomes (e.g., triage, diagnostics, treatment and disposition of patients) and time targets [15, 16]. On the other hand, as lamented by Madsen et al. [15], performance measurement systems in EDs tend to be narrowly focused on small sets of indicators designed for a specific target (e.g., resuscitation rate). Recently, Wachtel and Elalouf [1] have argued that for EDs too, performance must be measured in a holistic, multi-dimensional way. They have proposed to include also non-clinical variables to monitor the performance of EDs (in Wachtel and Elalouf’s [1] case, the number of accompanying escort has been found to be a significant predictor of a patient’s LOS).
This work follows Wachtel and Elalouf’s [1] approach. In particular, this research designs a framework to harness the value of routinely produced administrative records to improve the performance management of EDs and health systems in general [1, 20, 72, 73]. This is particularly useful as the adoption of this ED performance measurement framework will contribute to the use of underexploited, already available healthcare data [18]. By leveraging routinely registered records, the framework supports tailored operational decision-making at various organisational levels, thereby supporting ED management. This is relevant for EDs, which are heavily constrained in terms of time and resources [1, 2].
To be used for measuring performance, rather than merely registering events, administrative data need to be arranged in a way that makes their information relevant. This is one of the framework’s major contributions. Clearly, administrative data could be arranged in many different ways. Starting from Donabedian’s [74] classic “structure”, “process”, and “outcome” model for the assessment of the quality of care, to the multiple dimensions (e.g., appropriateness, expenditure, governance, improved care, clinical focus, efficiency, safety, sustainability, and timeliness) identified by, among others, Zaadoud, Chbab, and Chaouch [44], Grimmer et al. [75], and Arah et al. [76], many different frameworks focused on patients’ clinical data have been proposed.
The framework in this research abides by the following logic. The core dimension of the framework is evidently performance, as it defines the interpretation of the metrics’ calculation, while the other two dimensions (analysis unit and time-period) represent their selection bounds. Specifically, performance is measured by taking into account both the analysis unit and the time-period. This three-fold grouping in Fig 1 clusters unorganized administrative data in a way that makes them usable for performance measurement, because each group produces an unambiguous outcome [36]. In fact, this structure allows for slicing at the desired levels as well as for the creation of metrics sets that enable detailed performance evaluation and reporting (e.g., the monthly throughput for a certain disease). To further improve the framework’s ability to provide meaningful results, the selected performance metrics have been organized into multi-level groups, in Tables 3–7, that further narrow down their scope and make them more precise. Finally, the indicators have been selected from the literature. This ensures that each of them is relevant to measure a particular performance aspect [43]. Nonetheless, indicators have been retained only if they were also timely, simple, costless, and available [44]. The rationale is that the framework is conceived to exploit already available data (administrative data) without increasing the burden on a heavily resource-constrained system [24, 39].
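This slicing logic can be illustrated with a minimal sketch. The record fields (`disease`, `physician`, `arrival`, `discharge`) and the example values are hypothetical, introduced purely for illustration and not drawn from the paper’s data model:

```python
from datetime import datetime

# Hypothetical administrative records: one row per ED visit.
records = [
    {"disease": "AMI", "physician": "P1",
     "arrival": datetime(2023, 3, 2, 10), "discharge": datetime(2023, 3, 2, 14)},
    {"disease": "AMI", "physician": "P2",
     "arrival": datetime(2023, 3, 9, 8), "discharge": datetime(2023, 3, 9, 15)},
    {"disease": "stroke", "physician": "P1",
     "arrival": datetime(2023, 4, 1, 9), "discharge": datetime(2023, 4, 1, 12)},
]

def metric_slice(records, analysis_unit, value, year, month):
    """Select the records matching one analysis unit and one time-period,
    mirroring the framework's selection bounds."""
    return [r for r in records
            if r[analysis_unit] == value
            and (r["arrival"].year, r["arrival"].month) == (year, month)]

# Example metric set: monthly throughput (visit count) and mean length of
# stay in hours for a given disease in a given month.
subset = metric_slice(records, "disease", "AMI", 2023, 3)
throughput = len(subset)
mean_los_h = sum((r["discharge"] - r["arrival"]).total_seconds() / 3600
                 for r in subset) / throughput
```

The same `metric_slice` call with `analysis_unit="physician"` would produce a physician-level view of the identical raw data, which is the kind of granularity switching the three-fold grouping is meant to support.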
Expert interviews have been performed to obtain feedback on the framework. Expert discussion has been paramount to ensure that the framework actually fulfills its role, i.e., that it provides in-the-field decision support [28, 43]. During these interviews, the framework has been recognized as a useful tool for monitoring performance and managing operations. Moreover, the interviews showed that the stakeholders’ professional role and objectives affect the indicators’ perceived importance (Fig 4). This finding is in line with existing literature, which stresses the importance not only of including different indicators, but also of encompassing the perspective of various stakeholders on the same indicators to achieve a comprehensive performance evaluation [77].
Overall, the framework represents a practical tool designed for fast, regular, balanced, and systematic internal ED performance measurement and control, whereby professionals with different roles can select the subsets of indicators best tailored to their diverse needs [24, 78]. The framework therefore supports decision-making with respect to the specific interests, responsibilities and objectives at each level of the organization. This contribution is a timely response to the recognized difficulty decision-makers face in choosing a performance measurement system that suits their needs without increasing their organization’s financial burden [79].
Conclusions and limitations
Administrative data can be a new cornerstone for health care operation management. Using existing information systems for decision-making support is essential in the resource-constrained hospital environment. This paper proposes a practical framework for performance analysis of hospital emergency departments based on administrative health records. It aims to assist decision stakeholders in the regular and systematic control of ED performance at the desired level of detail in terms of analysis unit, time period and performance dimension. The flexible design allows the identification of the core indicators for each target user with respect to their professional role and objectives. As a means of preliminary validation, the framework has been discussed with key ED decision stakeholders of a teaching hospital, and it has been judged as comprehensive and useful for managing operations. The current work is a starting point for ED stakeholders to exploit the available administrative data sources to derive valuable performance information for decision‐making. In fact, the framework could provide a blueprint for more advanced, data-driven service design applications, such as BPMN-based approaches for the reengineering of healthcare processes [80, 81].
This research has some limitations. First, in choosing the metrics to include in the framework, only one possible type of administrative data has been considered: the records on care delivery. This implies that the selected indicators refer to care delivery processes and outcomes. At the same time, other types of administrative data sources may be used to evaluate additional performance dimensions, such as structural (attributes of the care settings: equipment, resources, etc.) or financial (costs) dimensions [15, 82]. Second, the included metrics have been selected to be computationally simple, as the framework is designed as a practical screening tool to be used by hospital decision makers for performance monitoring and control without requiring additional resources. Without this constraint, it would evidently be possible to include other relevant indicators.
Supporting information
S1 Table. Experts’ general framework evaluation.
https://doi.org/10.1371/journal.pone.0293401.s001
(DOCX)
S2 Table. Experts’ assessment of the framework’s content.
https://doi.org/10.1371/journal.pone.0293401.s002
(DOCX)
Acknowledgments
We would like to express our gratitude to Professor Massimo Federici, Dr. Tiziana Frittelli, Dr. Paolo Furnari, and Professor Jacopo Legramante of the Policlinico Tor Vergata Hospital, who shared their expertise with us during the preparation of this research. We appreciate the insights and comments they provided regarding the results proposed in the paper. The opinions expressed in this publication are those of the authors and do not necessarily represent those of the Policlinico Tor Vergata Hospital, their officers, or employees.
References
- 1. Wachtel Guy, and Elalouf Amir. 2020. “Addressing Overcrowding in an Emergency Department: An Approach for Identifying and Treating Influential Factors and a Real-Life Application.” Israel Journal of Health Policy Research 9 (1): 1–12.
- 2. Hoot Nathan R., and Aronsky Dominik. 2008. “Systematic Review of Emergency Department Crowding: Causes, Effects, and Solutions.” Annals of Emergency Medicine 52 (2): 126–136.e1. pmid:18433933
- 3. Jung Hae Min, Min Joung Kim, Ji Hoon Kim, Yoo Seok Park, Hyun Soo Chung, Sung Phil Chung, et al. 2021. “The Effect of Overcrowding in Emergency Departments on the Admission Rate According to the Emergency Triage Level.” PLOS ONE 16 (2): e0247042. pmid:33596264
- 4. Abir Mahshid, Goldstick Jason E., Malsberger Rosalie, Williams Andrew, Bauhoff Sebastian, Parekh Vikas I., et al. 2019. “Evaluating the Impact of Emergency Department Crowding on Disposition Patterns and Outcomes of Discharged Patients.” International Journal of Emergency Medicine 12 (1): 1–11.
- 5. Kulstad Erik B., Sikka Rishi, Sweis Rolla T., Kelley Ken M., and Rzechula Kathleen H. 2010. “ED Overcrowding Is Associated with an Increased Frequency of Medication Errors.” American Journal of Emergency Medicine 28 (3): 304–9. pmid:20223387
- 6. Weissman Joel S., Rothschild Jeffrey M., Bendavid Eran, Sprivulis Peter, Francis Cook E., Scott Evans R., et al. 2007. “Hospital Workload and Adverse Events.” Medical Care 45 (5): 448–55. pmid:17446831
- 7. Singer Adam J., Thode Henry C., Viccellio Peter, and Pines Jesse M. 2011. “The Association between Length of Emergency Department Boarding and Mortality.” Academic Emergency Medicine 18 (12): 1324–29. pmid:22168198
- 8. Chalfin Donald B., Trzeciak Stephen, Likourezos Antonios, Baumann Brigitte M., and Phillip Dellinger R. 2007. “Impact of Delayed Transfer of Critically Ill Patients from the Emergency Department to the Intensive Care Unit.” Critical Care Medicine 35 (6): 1477–83. pmid:17440421
- 9. Horwitz Leora I., Green Jeremy, and Bradley Elizabeth H. 2010. “US Emergency Department Performance on Wait Time and Length of Visit.” Annals of Emergency Medicine 55 (2): 133–41. pmid:19796844
- 10. Yarmohammadian Mohammad H., Rezaei Fatemeh, Haghshenas Abbas, and Tavakoli Nahid. 2017. “Overcrowding in Emergency Departments: A Review of Strategies to Decrease Future Challenges.” Journal of Research in Medical Sciences: The Official Journal of Isfahan University of Medical Sciences 22 (1). pmid:28413420
- 11. Wylie Kate, Crilly Julia, Ghasem (Sam) Toloo, Gerry Fitzgerald, John Burke, Ged Williams, et al. 2015. “Review Article: Emergency Department Models of Care in the Context of Care Quality and Cost: A Systematic Review.” Emergency Medicine Australasia: EMA 27 (2): 95–101. pmid:25752589
- 12. Oredsson Sven, Jonsson Håkan, Rognes Jon, Lind Lars, Katarina E. Göransson, Anna Ehrenberg, et al. 2011. “A Systematic Review of Triage-Related Interventions to Improve Patient Flow in Emergency Departments.” Scandinavian Journal of Trauma, Resuscitation and Emergency Medicine 19 (July). pmid:21771339
- 13. Karpiel Martin, and Williams Michael. 1988. “Developing a FAST TRACK Program.” Journal of Ambulatory Care Marketing 2 (2): 35–48. pmid:10303682
- 14. Widgren Bengt R., and Jourak Majid. 2011. “Medical Emergency Triage and Treatment System (METTS): A New Protocol in Primary Triage and Secondary Priority Decision in Emergency Medicine.” The Journal of Emergency Medicine 40 (6): 623–28. pmid:18930373
- 15. Madsen Michael, Sampsa Kiuru, Maaret Castrèn, and Lisa Kurland. 2015. “The Level of Evidence for Emergency Department Performance Indicators: Systematic Review,” 298–305. pmid:25969341
- 16. Wakai Abel, O’Sullivan Ronan, Staunton Paul, Walsh Cathal, Hickey Fergal, and Plunkett Patrick K. 2013. “Development of Key Performance Indicators for Emergency Departments in Ireland Using an Electronic Modified-Delphi Consensus Approach.” European Journal of Emergency Medicine 20: 109–14. pmid:22382650
- 17. Loban Ekaterina, Scott Cathie, Lewis Virginia, and Haggerty Jeannie. 2021. “Measuring Partnership Synergy and Functioning: Multi-Stakeholder Collaboration in Primary Health Care.” PLOS ONE 16 (5): e0252299. pmid:34048481
- 18. Cook J. A., and Collins G. S. 2015. “The Rise of Big Clinical Databases.” British Journal of Surgery 102 (2): 93–101. pmid:25627139
- 19. Ferver Kari, Burton Bryan, and Jesilow Paul. 2009. “The Use of Claims Data in Healthcare Research.” The Open Public Health Journal 2 (1): 11–24.
- 20. Roder David, and Buckley Elizabeth. 2017. “Administrative Data Provide Vital Research Evidence for Maximizing Health-System Performance and Outcomes.” pmid:28120376
- 21. Roski Joachim, and Mark McClellan. 2011. “Measuring Health Care Performance Now, Not Tomorrow: Essential Steps To Support Effective Health Reform.” Health Affairs 30 (4): 682–89. pmid:21471489
- 22. Powell A E, Davies H T O, and Thomson R G. 2003. “Using Routine Comparative Data to Assess the Quality of Health Care: Understanding and Avoiding Common Pitfalls.” BMJ Quality & Safety 12 (2): 122–28. pmid:12679509
- 23. Núñez Alicia, Neriz Liliana, Mateo Ricardo, Ramis Francisco, and Ramaprasad Arkalgud. 2018. “Emergency Departments Key Performance Indicators: A Unified Framework and Its Practice.” Int J Health Plann Mgmt, no. March: 1–19. pmid:29882383
- 24. Gu Xiuzhu, and Itoh Kenji. 2016. “Performance Indicators: Healthcare Professionals’ Views.” International Journal of Health Care Quality Assurance 29 (7): 801–15. pmid:27477935
- 25. Mantwill Sarah, Silvia Monestel-Umaña, and Peter J. Schulz. 2015. “The Relationship between Health Literacy and Health Disparities: A Systematic Review.” PLOS ONE 10 (12): e0145455. pmid:26698310
- 26. Stremersch Stefan, and Walter Van Dyck. 2009. “Marketing of the Life Sciences: A New Framework and Research Agenda for a Nascent Field.” Journal of Marketing 73 (4): 4–30.
- 27. Keegan Daniel P., Eiler Robert G., and Jones Charles P. 1989. “Are Your Performance Measures Obsolete?” Management Accounting, no. June: 45–50.
- 28. Munik Juliano, Edson Pinheiro de Lima, Fernando Deschamps, Sergio E. Gouvea Da Costa, Eileen M. Van Aken, José Marcelo Almeida Prado Cestari, et al. 2021. “Performance Measurement Systems in Nonprofit Organizations: An Authorship-Based Literature Review.” Measuring Business Excellence 25 (3): 245–70.
- 29. De Coster Carolyn, Quan Hude, Finlayson Alan, Gao Min, Halfon Patricia, Humphries Karin H, et al. 2006. “Identifying Priorities in Methodological Research Using ICD-9-CM Consortium.” BMC Health Services Research 6 (1): 77. pmid:16776836
- 30. Kahn Laura H, Jan Blustein, Raymond R Arons, Raymond Yee, and Steven Shea. 1996. “The Validity of Hospital Administrative Data in Monitoring Variations in Breast Cancer Surgery.” American Journal of Public Health 86 (2): 243–45. pmid:8633744
- 31. Cadarette Suzanne M, and Lindsay Wong. 2015. “An Introduction to Health Care Administrative Data.” The Canadian Journal of Hospital Pharmacy 68 (3): 232–37. pmid:26157185
- 32. Steiner Claudia, Elixhauser Anne, and Schnaier Jenny. 2002. “The Healthcare Cost and Utilization Project: An Overview.” Effective Clinical Practice: ECP 5 (3): 143–51. pmid:12088294
- 33. Schwartz Rachel M, David E Gagnon, Janet H Muri, Rose Q Zhao, and Russell Kellogg. 1999. “Administrative Data for Quality Improvement.” Pediatrics 103 (1 SUPPL.): 291–301. pmid:9917472
- 34. Zanetti Jasna Karacic, and Rui Nunes. 2023. “To Wallet or Not to Wallet: The Debate over Digital Health Information Storage.” Computers 12 (6): 114.
- 35. Bernatsky S, L Joseph , Pineau C A, Bélisle P, Boivin J F, Banerjee D, et al. 2009. “Estimating the Prevalence of Polymyositis and Dermatomyositis from Administrative Data: Age, Sex and Regional Differences.” Ann Rheum Dis 68: 1192–1196. pmid:18713785
- 36. Traberg Andreas, Jacobsen Peter, and Nadia Monique Duthiers. 2014. “Advancing the Use of Performance Evaluation in Health Care.” Journal of Health Organization and Management 28 (3): 422–36. pmid:25080653
- 37. Campbell SM, Braspenning J, Hutchinson A, and Marshall M. 2002. “Research Methods Used in Developing and Applying Quality Indicators in Primary Care.” Qual Saf Health Care 11 (4): 358–64. pmid:12468698
- 38. Cameron P. A., Schull M. J., and Cooke M. W. 2011. “A Framework for Measuring Quality in the Emergency Department.” Emergency Medicine Journal 28 (9): 735–740.
- 39. Furnham Adrian. 2004. “Performance Management Systems.” European Business Journal 16 (2): 83–94.
- 40. Neely Andy. 1998. “Three Modes of Measurement: Theory and Practice.” International Journal of Business Performance Management 1 (1): 47–64.
- 41. Tranfield David, Denyer David, and Smart Palminder. 2003. “Towards a Methodology for Developing Evidence‐informed Management Knowledge by Means of Systematic Review.” British Journal of Management 14 (3): 207–22.
- 42. Greenhalgh Trisha. 1997. “Papers That Summarise Other Papers (Systematic Reviews and Meta-Analyses).” BMJ: British Medical Journal 315 (7109): 672–75. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2127461/. pmid:9310574
- 43. Bréant Claudine, Succi Laurent, Cotten Michel, Grimaud Stéphane, Iavindrasana Jimison, Kindstrand Maria, et al. 2020. “Tools to Measure, Monitor, and Analyse the Performance of the Geneva University Hospitals (HUG).” Supply Chain Forum: An International Journal 21 (2): 117–31.
- 44. Zaadoud Brahim, Chbab Youness, and Chaouch Aziz. 2020. “Do Performance Measurement Models Have Any Impact on Primary Health Care? A Systematic Review.” International Journal of Health Governance 25 (4): 319–34.
- 45. Institute of Medicine. 2001. “Crossing the Quality Chasm: A New Health System for the 21st Century.” Washington DC.
- 46. Purbey Shankar, Mukherjee Kampan, and Bhar Chandan. 2007. “Performance Measurement System for Healthcare Processes.” International Journal of Productivity and Performance Management 56 (3): 241–51.
- 47. Abo-Hamad Waleed, and Arisha Amr. 2013. “Simulation-Based Framework to Improve Patient Experience in an Emergency Department.” European Journal of Operational Research 224 (1): 154–66.
- 48. Mcclelland Mark Stephen, Jones Karen, Siegel Bruce, and Jesse M Pines. 2012. “A Field Test of Time-Based Emergency Department Quality Measures.” Annals of Emergency Medicine 59 (1): 1–10. pmid:21868129
- 49. Huang I-Anne, Pao-Lan Tuan, Tang-Her Jaing, Chang-Teng Wu, Minston Chao, Hui-Hsuan Wang, et al. 2016. “Comparisons between Full-Time and Part-Time Pediatric Emergency Physicians in Pediatric Emergency Department.” Pediatrics and Neonatology 57 (5): 371–77. pmid:27178642
- 50. Alessandrini Evaline, Varadarajan Kartik, Elizabeth R Alpern, Marc H Gorelick, Kathy Shaw, Richard M Ruddy, et al. 2011. “Emergency Department Quality: An Analysis of Existing Pediatric Measures.” Academic Emergency Medicine 18 (5): 519–26. pmid:21569170
- 51. Arya Rajiv, Danielle M Salovich, Pamela Ohman-Strickland, and Mark A Merlin. 2010. “Impact of Scribes on Performance Indicators in the Emergency Department.” Academic Emergency Medicine 17 (5): 490–94. pmid:20536801
- 52. Ashour Omar M, and Gül E Okudan Kremer. 2013. “A Simulation Analysis of the Impact of FAHP–MAUT Triage Algorithm on the Emergency Department Performance Measures.” Expert Systems With Applications 40 (1): 177–87.
- 53. Jones Peter, Chalmers Linda, Wells Susan, Ameratunga Shanthi, Carswell Peter, Ashton Toni, et al. 2012. “Implementing Performance Improvement in New Zealand Emergency Departments: The Six Hour Time Target Policy National Research Project Protocol.” BMC Health Services Research 12 (1): 45. pmid:22353694
- 54. Michelson Kenneth A, Todd W Lyons, Joel D Hudgins, Jason A Levy, Michael C Monuteaux, Jonathan A Finkelstein, et al. 2018. “Use of a National Database to Assess Pediatric Emergency Care Across U.S Emergency Departments.” Academic Emergency Medicine 25 (12): 1355–64. pmid:29858524
- 55. Rupp Kyle J, Nathan J Ham, Dennis E Blankenship, Mark E Payton, and Kelly A Murray. 2017. “Pre and Post Hoc Analysis of Electronic Health Record Implementation on Emergency Department Metrics.” Baylor University Medical Center Proceedings 30 (2): 147–50. pmid:28405062
- 56. Kang Hyojung, Harriet Black Nembhard, Colleen Rafferty, and Christopher J Deflitch. 2014. “Patient Flow in the Emergency Department: A Classification and Analysis of Admission Process Policies.” Annals of Emergency Medicine 64 (4): 335–342.e8. pmid:24875896
- 57. Pines Jesse M., Hollander Judd E., Localio Russell A., and Metlay Joshua P. 2006. “The Association between Emergency Department Crowding and Hospital Performance on Antibiotic Timing for Pneumonia and Percutaneous Intervention for Myocardial Infarction.” Academic Emergency Medicine, no. 13: 873–78. pmid:16766743
- 58. Aaronson Emily L, Regan H Marsh, Moytrayee Guha, Jeremiah D Schuur, and Shada A Rouhani. 2016. “Emergency Department Quality and Safety Indicators in Resource-Limited Settings: An Environmental Survey.” International Journal of Emergency Medicine, no. 2015. pmid:26520848
- 59. White Benjamin A, David F M Brown, Julia Sinclair, Yuchiao Chang, Sarah Carignan, Joyce Mcintyre, et al. 2012. “Supplemented Triage And Rapid Treatment (Start) Improves Performance Measures In The Emergency Department.” JEM 42 (3): 322–28. pmid:20554420
- 60. Dudas Robert A, David Monroe, and Melissa Borger Mccolligan. 2011. “Community Pediatric Hospitalists Providing Care in the Emergency Department.” Pediatr Emer Care 27 (11): 1099–1103.
- 61. Sørup Christian Michel, Jacobsen Peter, and Jakob Lundager Forberg. 2013. “Evaluation of Emergency Department Performance–a Systematic Review on Recommended Performance and Quality-in-Care Measures.” Scandinavian Journal of Trauma, Resuscitation and Emergency Medicine 21 (1): 62. pmid:23938117
- 62. Green Janette, Dawber James, Masso Malcolm, and Eagar Kathy. 2014. “Emergency Department Waiting Times: Do the Raw Data Tell the Whole Story?” Australian Health Review 38 (1): 65–69. pmid:24433850
- 63. Davis Rebecca A, Michael M Dinh, Kendall J Bein, Anne-Sophie Veillard, and Timothy C Green. 2014. “Senior Work-up Assessment and Treatment Team in an Emergency Department: A Randomised Control Trial.” Emergency Medicine Australasia 26 (4): 343–49. pmid:24935075
- 64. Stang Antonia S, Lisa Hartling, Cassandra Fera, David Johnson, and Samina Ali. 2014. “Quality Indicators for the Assessment and Management of Pain in the Emergency Department: A Systematic Review.” Pain Research and Management 19 (6): 179–90. pmid:25337856
- 65. Fraser Jacqueline, Atkinson Paul, Gedmintas Audra, Howlett Michael, Rose McCloskey, and James French. 2017. “A Comparative Study of Patient Characteristics, Opinions, and Outcomes, for Patients Who Leave the Emergency Department before Medical Assessment.” Canadian Journal of Emergency Medicine 19 (5): 347–54. pmid:27692013
- 66. Shy Bradley D, Jason S Shapiro, Peter L Shearer, Nicholas G Genes, Cindy F Clesca, Reuben J Strayer, et al. 2015. “A Conceptual Framework for Improved Analyses of 72-Hour Return Cases.” American Journal of Emergency Medicine 33 (1): 104–7. pmid:25303847
- 67. Meek Robert, Braitberg George, Nicolas Caroline, and Kwok Gabriel. 2012. “Effect on Emergency Department Efficiency of an Accelerated Diagnostic Pathway for the Evaluation of Chest Pain.” Emergency Medicine Australasia 24 (1): 285–93. pmid:22672169
- 68. Mats Warmerdam, Stolwijk Frank, Boogert Anjelica, Sharma Meera, Tetteroo Lisa, Lucke Jacinta, et al. (2017) Initial disease severity and quality of care of emergency department sepsis patients who are older or younger than 70 years of age. PLoS ONE 12(9): e0185214. pmid:28945774
- 69. Ekins Kylie, and Morphet Julia. 2015. “The accuracy and consistency of rural, remote and outpost triage nurse decision making in one Western Australia Country Health Service Region.” Australasian Emergency Nursing Journal 18 (4): 227–233. pmid:26220101
- 70. Husain Nadia, Kendall J Bein, Timothy C Green, Anne-Sophie Veillard, and Michael M Dinh. 2015. “Real Time Shift Reporting by Emergency Physicians Predicts Overall ED Performance.” Emerg Med J 32 (2): 130–33. pmid:24022112
- 71. Bogner Alexander, Littig Beate, and Menz Wolfgang, eds. 2009. Interviewing Experts. Springer.
- 72. Nyawira Lizah, Mbau Rahab, Jemutai Julie, Musiega Anita, Hanson Kara, Molyneux Sassy, et al. 2021. “Examining Health Sector Stakeholder Perceptions on the Efficiency of County Health Systems in Kenya.” PLOS Global Public Health 1 (12): e0000077. pmid:36962100
- 73. Fetene Netsanet, Canavan Maureen E., Megentta Abraham, Linnander Erika, Tan Annabel X., Nadew Kidest, et al. 2019. “District-Level Health Management and Health System Performance.” PLOS ONE 14 (2): e0210624. pmid:30707704
- 74. Donabedian A. 1988. “The Quality of Care. How Can It Be Assessed?” JAMA 260 (12): 1743–48. pmid:3045356
- 75. Grimmer Karen, Lizarondo Lucylynn, Kumar Saravana, Bell Erica, Buist Michael, and Weinstein Philip. 2014. “An Evidence-Based Framework to Measure Quality of Allied Health Care.” Health Research Policy and Systems 12 (1): 1–10. pmid:24571857
- 76. Arah Onyebuchi A., Westert Gert P., Hurst Jeremy, and Klazinga Niek S. 2006. “A Conceptual Framework for the OECD Health Care Quality Indicators Project.” International Journal for Quality in Health Care: Journal of the International Society for Quality in Health Care 18 Suppl 1 (SUPPL. 1): 5–13. pmid:16954510
- 77. Harrington Charlene, Schnelle John F., Margaret McGregor, and Sandra F. Simmons. 2016. “Article commentary: The need for higher minimum staffing standards in US nursing homes.” Health services insights 9: HSI-S38994.
- 78. Weir Erica, Kurji Karim, and Robinson Victoria. 2009. “Applying the Balanced Scorecard to Local Public Health Performance Measurement: Deliberations and Decisions.” BMC Public Health 9 (1): 1–7. pmid:19426508
- 79. Ravelomanantsoa Michel Stella, Ducq Yves, and Vallespir Bruno. 2019. “A state of the art and comparison of approaches for performance measurement systems definition and design.” International Journal of Production Research 57(15–16): 5026–5046.
- 80. Antonacci Grazia, Calabrese Armando, D’Ambrogio Andrea, Giglio Andrea, Intrigila Benedetto, and Levialdi Ghiron Nathan. 2016. “A BPMN-Based Automated Approach for the Analysis of Healthcare Processes.” IEEE 25th International Conference on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE), Paris, France, 124–129. https://doi.org/10.1109/WETICE.2016.35
- 81. Calabrese Armando, and Corbò Michele. 2015. “Design and Blueprinting for Total Quality Management Implementation in Service Organisations.” Total Quality Management & Business Excellence 26 (7–8): 719–732.
- 82. Cremonesi Paolo, di Bella Enrico, Montefiori Marcello, and Persico Luca. 2015. “The Robustness and Effectiveness of the Triage System at Times of Overcrowding and the Extra Costs Due to Inappropriate Use of Emergency Departments.” Applied Health Economics and Health Policy 13 (5): 507–514. pmid:25854901