
Using the Electronic Medical Record to Identify Community-Acquired Pneumonia: Toward a Replicable Automated Strategy

  • Sylvain DeLisle ,

    sdelisle@umaryland.edu

    Affiliations Department of Medicine, Veterans Affairs Maryland Health Care System, Baltimore, Maryland, United States of America, Department of Medicine, University of Maryland, Baltimore, Maryland, United States of America

  • Bernard Kim,

    Affiliations Department of Medicine, Veterans Affairs Maryland Health Care System, Baltimore, Maryland, United States of America, Department of Medicine, University of Maryland, Baltimore, Maryland, United States of America

  • Janaki Deepak,

    Affiliations Department of Medicine, Veterans Affairs Maryland Health Care System, Baltimore, Maryland, United States of America, Department of Medicine, University of Maryland, Baltimore, Maryland, United States of America

  • Tariq Siddiqui,

    Affiliations Department of Medicine, Veterans Affairs Maryland Health Care System, Baltimore, Maryland, United States of America, Department of Medicine, University of Maryland, Baltimore, Maryland, United States of America

  • Adi Gundlapalli,

    Affiliations Department of Medicine, Veterans Affairs Salt Lake City Health Care System, Salt Lake City, Utah, United States of America, Department of Medicine, University of Utah, Salt Lake City, Utah, United States of America

  • Matthew Samore,

    Affiliations Department of Medicine, Veterans Affairs Salt Lake City Health Care System, Salt Lake City, Utah, United States of America, Department of Medicine, University of Utah, Salt Lake City, Utah, United States of America

  • Leonard D'Avolio

    Affiliations Massachusetts Veterans Epidemiology Research and Information Center, Veterans Affairs Boston Health Care System, Jamaica Plain, Massachusetts, United States of America, Department of Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts, United States of America

Abstract

Background

Timely information about disease severity can be central to the detection and management of outbreaks of acute respiratory infections (ARI), including influenza. We asked whether two resources, 1) free text and 2) structured data from an electronic medical record (EMR), could complement each other to identify patients with pneumonia, an ARI severity landmark.

Methods

A manual EMR review of 2747 outpatient ARI visits with associated chest imaging identified x-ray reports that could support the diagnosis of pneumonia (kappa score = 0.88 (95% CI 0.82–0.93)), along with attendant cases with Possible Pneumonia (adds either cough, sputum, fever/chills/night sweats, dyspnea or pleuritic chest pain) or with Pneumonia-in-Plan (adds pneumonia stated as a likely diagnosis by the provider). The x-ray reports served as a reference to develop a text classifier using machine-learning software that did not require custom coding. To identify pneumonia cases, the classifier was combined with EMR-based structured data and with text analyses aimed at ARI symptoms in clinical notes.

Results

370 reference cases with Possible Pneumonia and 250 with Pneumonia-in-Plan were identified. The x-ray report text classifier increased the positive predictive value of otherwise identical EMR-based case-detection algorithms by 20–70%, while retaining sensitivities of 58–75%. These performance gains were independent of the case definitions and of whether patients were admitted to the hospital or sent home. Text analyses seeking ARI symptoms in clinical notes did not add further value.

Conclusion

Specialized software development is not required for automated text analyses to help identify pneumonia patients. These results begin to map an efficient, replicable strategy through which EMR data can be used to stratify ARI severity.

Introduction

Effective responses to epidemics of infectious diseases hinge not only on early outbreak detection, but also on an ongoing assessment of disease severity. Indeed, the proportion of infected patients who develop severe illness often governs public perception and is a key factor in deciding whether or not to trigger interventions that can cause harm and exact significant social and financial costs.

For surveillance systems aimed at epidemics of acute respiratory infections (ARI), the rationale for incorporating information about disease severity is particularly compelling: 1) doing so could help discover outbreaks that involve only a small number of very sick patients, such as what initially occurred with SARS [1] or what could be anticipated shortly after a criminal release of plague [2] or tularemia [3]; 2) such systems could help adjust ongoing responses to seasonal or pandemic influenza, where severity can vary by orders of magnitude between epidemics [4] or even between waves of the same epidemic [5], [6]. To be useful, information about ARI severity needs to be both timely and specific [7], [8]. Current methods of monitoring influenza-related hospitalizations or deaths fall short of meeting these requirements [9].

Electronic medical records (EMR) are fast becoming commonplace and form a rich source of information that could be used secondarily for surveillance purposes. We previously initiated a project to determine how EMR data could be combined to identify outpatients with ARI [10]. In this work, we sought to develop case-detection algorithms (CDA) aimed at pneumonia, a key landmark in the severity spectrum of ARI. In particular, we asked how information retrieved from the free text of chest imaging reports and clinical notes could complement structured data to uncover pneumonia cases.

Methods

Ethics Statement

This study was approved by the Institutional Review Boards at the University of Maryland and the VA Maryland Health Care System. Research-related risks were limited to maintaining the confidentiality of data generated during routine patient care. A waiver of consent was granted because the research-related risks were minimal and did not adversely affect the rights and welfare of the participants, and because the work would not have otherwise been feasible, given the large number of participants.

Participants

We applied a previously validated ARI case-detection algorithm (CDA) [10] to EMR-derived information related to outpatient visits at the Veterans Administration Maryland Health Care System, from January 1, 2004 through December 31, 2006. This ARI CDA was chosen as a screening tool because it identifies 99% of outpatients who satisfied a broad definition of ARI: positive respiratory virus culture/antigen OR any two of the following symptoms, of no more than 7 days duration: a) cough; b) fever or chills or night sweats; c) pleuritic chest pain; d) myalgia; e) sore throat; f) headache AND illness not attributable to a non-infectious etiology [10]. The ARI CDA flagged an outpatient visit if the provider assigned it an ARI-related International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9) diagnostic code OR issued a prescription for a cough remedy OR documented at least two symptoms from the above ARI case definition in his/her clinical note, as retrieved by computerized text analysis [10]. Visits flagged by the ARI CDA were included if chest imaging was obtained within 24 hours of clinic registration time. Participants were sampled only once, at their first eligible visit during the study period.
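For readers implementing a similar screen, the sketch below renders the ARI CDA's flagging rule in Python. This is an illustrative sketch only: the study implemented the equivalent logic as database queries, and the field names (icd9_codes, new_cough_remedy_rx, note_symptoms) are hypothetical stand-ins for EMR-derived inputs.

```python
# Illustrative rendering of the ARI screening CDA (the study used SQL queries
# over VistA-derived tables); all field names below are hypothetical.

ARI_ICD9_CODES: set[str] = set()  # populate with the ARI-related ICD-9 codes from [10]
ARI_SYMPTOMS = {
    "cough", "fever/chills/night sweats", "pleuritic chest pain",
    "myalgia", "sore throat", "headache",
}

def flag_ari_visit(visit: dict) -> bool:
    """Flag a visit if it carries an ARI-related ICD-9 code, OR a new prescription
    for a cough remedy, OR at least two ARI symptoms documented in the clinical
    note (as retrieved by computerized text analysis)."""
    has_ari_code = bool(ARI_ICD9_CODES & set(visit["icd9_codes"]))
    has_cough_rx = bool(visit["new_cough_remedy_rx"])
    n_symptoms = len(ARI_SYMPTOMS & set(visit["note_symptoms"]))
    return has_ari_code or has_cough_rx or n_symptoms >= 2
```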

The methods to validate the performance of selected pneumonia CDA on a separate population are described in the next section.

Description of Procedures

Reference chest imaging report review.

A pulmonary disease physician read all eligible chest imaging reports (n = 2,861 in 2747 unique patients). Reports were labeled “Negative” if they did not support the diagnosis of pneumonia. This category included all images within normal limits or showing no evidence of active pulmonary disease. Reports with comments on shrapnel or bullet fragments, pleural plaques or other abnormalities outside the lung parenchyma, calcified granulomas, old nodules, scars or chronic emphysematous changes were put in this category. Reports were labeled “Non-Negative” if they could possibly support the diagnosis of pneumonia. These reports described a wide range of abnormalities, from ill-defined densities where the diagnosis of pneumonia could not be excluded, to frank infiltrates characteristic of pneumonia. All “Non-Negative” reports and a 10% sample of the “Negative” reports were blindly reviewed by a second pulmonary physician (n = 537). Kappa score between the two independent reviewers was 0.88 (95% CI 0.82–0.93). “Non-negative” reports containing wording typically used to describe abnormalities indicative of pneumonia were also flagged and used as an alternative training set in the development of the automated imaging report classifier (see below).

Reference clinical record review.

Reference cases with pneumonia were identified by manually reviewing all EMR entries made during the calendar day of index visits that corresponded to the reference, manually reviewed, “Non-Negative” chest imaging reports outlined above. Symptoms and diagnostic impressions were abstracted by a pulmonary physician, entered into a data collection instrument (MS Access, Microsoft Corp., Redmond WA) and recombined into two case definitions: 1) “Possible Pneumonia”: non-negative chest imaging report AND at least one of the following symptoms, new or changed within the last 7 days: a) cough; b) sputum; c) fever or chills or night sweats; d) dyspnea; e) pleuritic chest pain AND illness not clearly attributable to a non-infectious etiology; 2) “Pneumonia-in-Plan”: a non-negative chest imaging report AND pneumonia listed as one of the top two diagnostic possibilities in a physician's or nurse practitioner's note. Cases with Possible Pneumonia or Pneumonia-in-Plan were labeled “Admitted” if they gained admission to the hospital within 48 hours of index visit registration. Otherwise, they were labeled “Outpatient”.
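Expressed as boolean predicates over the abstracted fields, the two case definitions can be summarized as in the sketch below; the field names are hypothetical and stand in for items captured in the data collection instrument.

```python
# Minimal sketch of the two reference case definitions; field names are
# hypothetical stand-ins for items abstracted during manual record review.

PNEUMONIA_SYMPTOMS = {
    "cough", "sputum", "fever/chills/night sweats", "dyspnea", "pleuritic chest pain",
}

def possible_pneumonia(case: dict) -> bool:
    """Non-negative chest imaging AND >= 1 qualifying symptom (new or changed
    within 7 days) AND illness not clearly attributable to a non-infectious cause."""
    return (case["imaging_non_negative"]
            and bool(PNEUMONIA_SYMPTOMS & set(case["recent_symptoms"]))
            and not case["clearly_non_infectious"])

def pneumonia_in_plan(case: dict) -> bool:
    """Non-negative chest imaging AND pneumonia listed among the top two
    diagnostic possibilities in a physician's or nurse practitioner's note."""
    return case["imaging_non_negative"] and case["pneumonia_in_top_two_dx"]
```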

Development of chest imaging report classifier.

We used open-source automated software that couples a clinical NLP pipeline (Clinical Text Analysis and Knowledge Extraction System (cTAKES) [11]) with an implementation of a conditional random fields probabilistic classifier [12] to develop the text analyses that could separate non-negative from negative chest imaging reports (Automated Retrieval Console (ARC) software, v.2.0 [13], [14]). In a preliminary effort to improve the performance of the classifier, the reference imaging reports were presented for machine learning as four alternative training sets in which: a) the text of the reports was fed either whole or stripped of the characters preceding the string “Impression”, when the latter was found; b) the targeted reports were either all of the non-negative reports (n = 450) or only those that described abnormalities typical of pneumonia (n = 316). The text classification model with the highest F-measure was retained for each training set. The four retained models were then separately combined with other EMR-derived data, and the performance of the resulting CDAs at identifying patients who fit our case definitions was compared (see next section). The text classification models trained with reports that contained typical pneumonia descriptions and whose text was restricted to the “Impression” field led to the best performing pneumonia CDAs, and were those used for this report.
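For orientation, the sketch below pairs the "Impression"-restricted preprocessing with a generic document classifier. The study itself used the ARC software (cTAKES features feeding a conditional random fields classifier) and required no custom coding; the scikit-learn pipeline here is only an assumed stand-in to show the shape of the document-level classification task.

```python
# Illustrative preprocessing and training for the imaging-report classifier.
# NOTE: a stand-in only; the study used ARC (cTAKES + conditional random fields),
# not the scikit-learn pipeline shown here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def impression_only(report_text: str) -> str:
    """Drop everything before the string 'Impression' when present, mirroring
    the best-performing training-set preparation."""
    idx = report_text.lower().find("impression")
    return report_text[idx:] if idx >= 0 else report_text

def train_report_classifier(reports: list[str], labels: list[int]):
    """labels: 1 = 'non-negative' (could support pneumonia), 0 = 'negative'."""
    texts = [impression_only(r) for r in reports]
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        LogisticRegression(max_iter=1000),
    )
    model.fit(texts, labels)
    return model
```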

Development of pneumonia case-detection algorithms.

Data were extracted from the Veterans Integrated Service Technology Architecture (VistA) repository and transferred to a Structured Query Language (SQL) database (Mumps Data Extractor, Strategic Reporting Systems Inc., Peabody, MA). Subsequent data transformations and queries were implemented using SQL Server 2008 (v. 10.0, Microsoft Corp., Redmond, WA). Candidate components for CDAs included those previously found useful to identify patients with ARI: ARI-related ICD-9 codes (labeled as “ARI ICD-9 codes”), cough remedies [10], and clinical notes identified as positive for ARI symptoms by text analysis [10] (“Text of Clinical Notes”). We also considered the following CDA components, when related to the index outpatient visit: 1) a subset of the ARI ICD-9 codes whose narrative included the string “pneumonia” (“Pneumonia ICD-9 codes”: 480–483, 485–487); 2) a new prescription for antibiotics of a class commonly used to treat pneumonia (cephalosporins, fluoroquinolones, macrolides, penicillins); 3) admission to the hospital, for any reason, within 48 hours of the index outpatient visit (“(Not) Admitted to Hospital”); 4) chest imaging performed (“Imaging Obtained”); 5) whether at least one chest imaging report related to the index visit was labeled “non-negative” by the automated text classifier described above (“Text of Imaging Reports”).
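Once each component has been reduced to a per-visit boolean, a CDA is simply a logical expression over those booleans. The sketch below shows two assumed examples of such combinations; the actual algorithms, numbered in Tables 3–5, were implemented as SQL Server queries, and the field names here are hypothetical.

```python
# Sketch of how per-visit boolean components combine into candidate CDAs;
# the specific combinations evaluated in the study are enumerated in Tables 3-5.

def cda_pneumonia_code_and_imaging_text(v: dict) -> bool:
    """Example: pneumonia ICD-9 code at the index visit AND a chest imaging
    report flagged 'non-negative' by the automated text classifier."""
    return v["pneumonia_icd9"] and v["imaging_report_non_negative"]

def cda_broad_ari_and_imaging_text(v: dict) -> bool:
    """Example: (ARI ICD-9 code OR ARI symptoms found in the clinical-note text)
    AND a 'non-negative' imaging report per the text classifier."""
    return (v["ari_icd9"] or v["ari_symptoms_in_note"]) and v["imaging_report_non_negative"]
```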

Performance measures.

The performance of the pneumonia CDAs was summarized with standard test descriptors: sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV) and F-measure (2 * PPV * Sensitivity/(PPV + Sensitivity)). Denominators used to calculate these measures were either the whole study population (n = 2747), those patients who were hospitalized for any reason following their index visit (n = 602), or those who were not (n = 2145).
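These descriptors follow their standard definitions; as a minimal sketch, the function below computes them from per-patient CDA predictions and reference labels.

```python
# Standard test descriptors computed from paired boolean sequences:
# pred[i] = CDA flagged patient i; truth[i] = patient i is a reference case.

def performance(pred: list[bool], truth: list[bool]) -> dict:
    tp = sum(p and t for p, t in zip(pred, truth))
    fp = sum(p and not t for p, t in zip(pred, truth))
    fn = sum(not p and t for p, t in zip(pred, truth))
    tn = sum(not p and not t for p, t in zip(pred, truth))
    sens = tp / (tp + fn) if (tp + fn) else 0.0
    spec = tn / (tn + fp) if (tn + fp) else 0.0
    ppv = tp / (tp + fp) if (tp + fp) else 0.0
    npv = tn / (tn + fn) if (tn + fn) else 0.0
    f = 2 * ppv * sens / (ppv + sens) if (ppv + sens) else 0.0
    return {"sensitivity": sens, "specificity": spec, "PPV": ppv, "NPV": npv, "F-measure": f}
```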

Validation of selected CDAs.

The ARI CDA and imaging report classifier were applied to EMR-derived databases for a 5-year period following the original study period, i.e., 1/1/2007–12/31/2011. A random 50% sample of the visits flagged by the [ARI CDA AND Text of Imaging Reports] query was manually reviewed. The cases identified served as the reference to validate the PPV of selected pneumonia CDAs.

Results

Study Population

The ARI CDA flagged 22,960 first visits from unique patients during the algorithm development phase of the study period. Of these, 2,747 were associated with at least one report for chest imaging performed within 24 hours of check-in time. The study population was 93% male, predominantly older (61±15 years old, mean ± standard deviation) and 52.6% African American (Table 1).

Reference Pneumonia Cases

A manual review of EMR entries on the day of the 2,747 index visits identified 380 cases that satisfied at least one pneumonia case definition, 370 with Possible Pneumonia and 250 with Pneumonia-in-Plan. Most patients with a Pneumonia-in-Plan also had Possible Pneumonia (240/250), including nearly all (124/127) patients admitted to the hospital. Patients who satisfied either case definition were therefore merged into a common target group for the development of the “Admitted Pneumonia” CDAs. Ninety percent of all index visits occurred in urgent/same day care settings.

Patients with Possible Pneumonia and Pneumonia-in-Plan had similar demographics (Table 1) and symptoms and signs (Table 2), with the possible exception that the latter population had more febrile symptoms. Compared with their outpatient counterparts, Admitted Pneumonia patients were overrepresented in the older age groups (71–90 years old, Table 1) and appeared to have more dyspnea, fever-related symptoms, and clinical signs of lung consolidation (Table 2).

Pneumonia CDAs That Used Structured EMR Entries Only

The composition and performance of illustrative CDAs for cases with Possible Pneumonia or Pneumonia-in-Plan are shown for all locations of care in Table 3, and for those cases that remained outpatients or were admitted (Tables 4 and 5, respectively). Structured EMR information ipso facto included as components of the relevant CDAs comprised: 1) whether chest imaging was obtained (“Imaging Obtained”, Tables 3–5); 2) whether or not a case was admitted to the hospital (“(Not) Admitted”, Tables 4 and 5).

Table 4. CDA Performance: Outpatients who were not admitted to the hospital.

https://doi.org/10.1371/journal.pone.0070944.t004

Table 5. CDA Performance: Outpatients subsequently admitted to the hospital.

https://doi.org/10.1371/journal.pone.0070944.t005

An ICD-9 code set for pneumonia diagnoses (“Pneumonia ICD-9 Codes”, Tables 3–5) helped identify pneumonia with PPVs of 52.8–55.3% but had limited sensitivity (28.5–56%, CDAs 1, 8, 15, 22, and 29, Tables 3–5), even when providers had indicated that pneumonia was a likely diagnosis in their clinical notes, i.e. in Pneumonia-in-Plan or Admitted Pneumonia cases (CDAs 8, 22 and 29, Tables 3–5). A broadly inclusive ARI ICD-9 code set (“ARI ICD-9 Codes”, Tables 3–5) increased detection sensitivity to 86–97%, but degraded PPV (6.3–23.5%) and overall performance, as reflected by lower F-measures (compare CDA 3 to 1, 10 to 8 in Table 3, 17 to 15, 24 to 22 in Table 4, and 31 to 29 in Table 5).

CDAs that did not include ICD-9 diagnostic codes were not among the most successful (data not shown). Prescriptions for medications aimed at ARI symptoms and various groupings of antibiotics that could be used to treat bacterial pneumonias did not add value (data not shown).

Pneumonia CDAs That Combined Structured with Free-Text EMR Entries

We retrieved information from free-text EMR entries according to two different strategies. In the first strategy, text analysis routines were used to search for ARI symptoms in the providers' clinical notes (“Text of Clinical Notes”, Tables 3–5). Coupling positive results of Text of Clinical Notes analyses to ARI ICD-9 codes using an OR logical operator increased detection sensitivity over otherwise comparable CDAs. However, specificity and PPV decreased and overall performance either did not improve or worsened (compare CDA 6 to 3 and 13 to 10, Table 3; CDA 20 to 17 and 27 to 24, Table 4; CDA 34 to 31 and 35 to 33, Table 5). Coupling the Text of Clinical Notes analysis to ARI ICD-9 codes using an AND logical operator further increased PPV, but severely reduced sensitivities and overall performance (CDAs 4, 11, 18, 25 and 32, Tables 3–5).

In the second strategy, text analysis was used to flag chest imaging reports that could support the diagnosis of pneumonia (“AND Text of Imaging Reports” component, Tables 3–5). Adding this component increased the PPV of otherwise identical CDAs by 23–70 absolute percentage points (compare CDA 2 to 1, 5 to 3, 7 to 6 and so on, Tables 3–5). Despite attendant losses in sensitivity, results from the “Text of Imaging Reports” classifier increased the F-measure of all CDAs that included the broad ARI ICD-9 code set. With the possible exception of CDA 7, whose F-measure was the highest achieved in this study, the OR Text of Clinical Notes component did not add further value to CDAs that already included analyses of the chest imaging reports (compare CDA 7 to 5 and 14 to 12, Table 3; CDA 21 to 19 and 28 to 26, Table 4; CDA 35 to 33, Table 5).

Performance Validation

The [ARI CDA AND “Text of Imaging Report”] algorithm flagged 553 visits over the 1/2007–12/2011 validation time period. Of the 276 visits manually reviewed, there were 212 cases with Possible Pneumonia, 154 with Pneumonia-in-Plan and 147 with both diagnoses. These reference cases were used to measure the PPV of CDAs that included the “Text of Imaging Report” CDA component (“PPV Validation” performance measure, Tables 3–5). For the whole population, PPVs decreased by 5.3–18 absolute percentage points compared to values obtained in the development phase of the study (compare PPV vs. PPV Validation performance measures for CDAs 2, 4, 5, 7, Table 3). For the best performing CDAs, PPVs remained within 5.3–7.5% of our original findings, independent of case definition and admission disposition (CDAs 5, 7, 12, 14, 19, 21, 26, 28, 33, 35, Tables 3–5). The added imprecision occurred mostly in the outpatient arena (CDAs 19, 21, 26, 28, Table 4) and was in large part due to flagging of follow-up rather than initial pneumonia visits (data not shown). PPVs actually increased for patients admitted to the hospital (CDAs 33, 35, Table 5).

Discussion

Automated text analyses of chest imaging reports improved the performance of EMR-based CDAs that included structured data elements and free-text search for ARI symptoms. This contribution persisted across pneumonia case definitions, applied to outpatients and hospitalized patients alike, and helped CDAs reach precisions of 64–86% while maintaining sensitivities of 58–75%. These data support our working hypothesis that selected free text analyses can supplement structured EMR data to assess the severity of ARI outbreaks.

This work benefits from prior efforts to combine EMR data to identify patients with ARI. The ARI CDA used as an initial screen for the current study had been developed and validated against a population-based sample of over 15,000 EMR records, where it recognized 99% of cases that satisfied a broad definition of ARI [10]. This screening algorithm forms a practical starting point for an EMR data flow intent on monitoring the incidence and severity of ARIs, and is likely to have flagged most symptomatic pneumonia patients.

Pneumonia is seldom a definitive diagnosis, even when histological information is available [15]. Absent a standard, we sought clinically acceptable case definitions that could be reliably abstracted from clinical records. As is both customary and recommended by treatment guidelines [16]–[19], our case definitions required supportive chest imaging. To this common imaging requirement, the Possible-Pneumonia definition added clinical symptoms whereas Pneumonia-in-Plan relied solely on the provider's final diagnostic assessment. Despite these differences, more than 95% of patients with Pneumonia-in-Plan also satisfied the more permissive Possible Pneumonia definition in both our development and validation reference populations, indicating that the two definitions addressed related clinical conditions. Given that independent EMR abstractors could identify respiratory symptoms [10], pneumonia diagnostic impressions and supportive chest imaging with a high degree of agreement, our data suggest that the Possible Pneumonia and the Pneumonia-in-Plan case definitions can serve as useful tools to reproducibly retrieve pneumonia-related information from an EMR.

Prior attempts to automatically identify pneumonia patients through medical records have concentrated on diagnostic codes assigned after hospital discharge. Discharge codes have been found to be good markers for hospitalized pneumonia patients, whether benchmarked against retrospective record reviews [20]–[22] or prospective data acquired for clinical trials [23]–[26]. Discharge codes, however, are of limited value in epidemic surveillance because they are untimely and do not distinguish between community- and hospital-acquired pneumonia [22]. In this study, we evaluated diagnostic codes assigned by providers at the conclusion of outpatient visits, as is practiced at the Veterans Administration health system. We found these codes to represent a key component of pneumonia detection, even if they proved less accurate at finding pneumonia patients who were sent home rather than hospitalized [27]. While the utility of diagnostic codes varies when they are assigned by third parties or have reimbursement repercussions, our results nevertheless provide an impetus for diagnostic codes to be made available as soon as possible following outpatient services, so that they can be used for surveillance, decision support and quality control.

The chest imaging report has long been recognized as a fruitful context in which to mine for evidence of pneumonia. Over the last 20 years, various combinations of approaches, including natural language processing [28]–[31], expert rules [32], [33], Bayesian [32], [34] or neural networks [35] and machine learning [33], have held their own compared to physicians for their ability to find pneumonia-related concepts in report narratives. Imaging report analyses have been compared to discharge diagnostic codes [36], [37], but have seldom been evaluated for their added value against a broader reference standard for clinical pneumonia [38]–[40]. To our knowledge, only one previous publication used imaging report analyses to detect outpatients with community-acquired pneumonia [40]. Besides bolstering the evidence for the utility of these text analyses, our data illustrate the importance of targeting them properly: in the course of this study, classifying 26,581 imaging reports did more to improve detection performance than extracting ARI symptoms from almost 14 million clinical notes. Although an assessment of the significance of the performance gained through imaging report text analysis must await purpose-specific evaluations, our data nevertheless support the notion that a generalized machine learning approach can perform well across information retrieval tasks [13], [14]. Also significant, in our view, is the ease with which we could develop the classifier. Clinical users focused on the document-level classification needed to create the reference training set. Once the latter was fed to the ARC software, model development required little further user interaction, and there was no need for custom programming. Such an efficient workflow makes it possible to quickly rebuild the classifier elsewhere, should it prove less robust than our validation exercise suggests.

Our study is subject to limitations beyond those already mentioned. First, we did not evaluate CDA components that have been associated with pneumonia in the past such as abnormalities in vital signs [41], white blood cell count [42] or oxygenation [41], and microbiological results. While these data elements could be missing in some patients [43], they could provide an opportunity to further improve detection performance. Second, our work was performed in a health system whose population and health care practices may not be generalizable. Even if diffusion of our approaches was initially restricted to VA institutions, at least some automated pneumonia surveillance could nevertheless be deployed across all 50 states. Third, sampling was not random but instead based on a screening algorithm. While this algorithm has been validated using a random, population-based sample, our study sample remains subject to verification bias [44] such as the systematic exclusion of pneumonia patients for whom chest imaging was not obtained [45]. Fourth, the retrospective nature of the record review coupled with shortcomings of clinical acumen and chest imaging [46] imply that we may have missed pneumonia patients whose symptoms, signs or imaging abnormalities were absent [46], [47], missed, atypical, inadequately documented or miscoded [23]. Despite these potential failings, our results do reflect information committed to a real-world EMR, and thus represent a realistic environment in which to compare the relative performance of alternative detection approaches.

In summary, our results indicate that an EMR-based approach that couples queries of structured data with text analysis of imaging reports can be used to assess disease severity in outpatients with ARI. By identifying high-performing yet parsimonious CDAs that could be replicated without creating customized software, our results begin to map an efficient strategy by which pneumonia surveillance could be more widely implemented.

Acknowledgments

The authors would like to thank Robert Sawyer MD for technical help with the institutional databases.

Author Contributions

Conceived and designed the experiments: SD AG MS LD. Performed the experiments: BK JD TS. Analyzed the data: SD TS. Contributed reagents/materials/analysis tools: SD LD. Wrote the paper: SD AG MS LD.

References

  1. Peiris JS, Guan Y, Yuen KY (2004) Severe acute respiratory syndrome. Nat Med 10: S88–97.
  2. Inglesby TV, Dennis DT, Henderson DA, Bartlett JG, Ascher MS, et al. (2000) Plague as a biological weapon: medical and public health management. Working Group on Civilian Biodefense. JAMA 283: 2281–2290.
  3. Dennis DT, Inglesby TV, Henderson DA, Bartlett JG, Ascher MS, et al. (2001) Tularemia as a biological weapon: medical and public health management. JAMA 285: 2763–2773.
  4. Miller MA, Viboud C, Olson DR, Grais RF, Rabaa MA, et al. (2008) Prioritization of influenza pandemic vaccination to minimize years of life lost. J Inf Dis 198: 305–311.
  5. Andreasen V, Viboud C, Simonsen L (2008) Epidemiologic characterization of the 1918 influenza pandemic summer wave in Copenhagen: implications for pandemic control strategies. J Inf Dis 197: 270–278.
  6. Viboud C, Grais RF, Lafont BA, Miller MA, Simonsen L (2005) Multinational impact of the 1968 Hong Kong influenza pandemic: evidence for a smoldering pandemic. J Inf Dis 192: 233–248.
  7. Lipsitch M, Hayden FG, Cowling BJ, Leung GM (2009) How to maintain surveillance for novel influenza A H1N1 when there are too many cases to count. Lancet 374: 1209–1211.
  8. Lipsitch M, Riley S, Cauchemez S, Ghani AC, Ferguson NM (2009) Managing and reducing uncertainty in an emerging influenza pandemic. N Engl J Med 361: 112–115.
  9. Thompson WW, Weintraub E, Dhankhar P, Cheng PY, Brammer L, et al. (2009) Estimates of US influenza-associated deaths made using four different methods. Influenza Other Respi Viruses 3: 37–49.
  10. DeLisle S, South B, Anthony JA, Kalp E, Gundlapallli A, et al. (2010) Combining free text and structured electronic medical record entries to detect acute respiratory infections. PLoS ONE 5: e13377.
  11. Savova G (2008) UIMA-based clinical information system. Language Resources and Evaluation Conference. Morocco.
  12. Lafferty J, McCallum A, Pereira F (2001) Conditional random fields: probabilistic models for segmenting and labeling sequence data. International Conference on Machine Learning. San Francisco: Morgan Kaufman. 282–289.
  13. D'Avolio LW, Nguyen TM, Farwell WR, Chen Y, Fitzmeyer F, et al. (2010) Evaluation of a generalizable approach to clinical information retrieval using the automated retrieval console (ARC). J Am Med Inform Assoc 17: 375–382.
  14. D'Avolio LW, Nguyen TM, Goryachev S, Fiore LD (2011) Automated concept-level information extraction to reduce the need for custom software and rules development. J Am Med Inform Assoc 18: 607–613.
  15. Corley DE, Kirtland SH, Winterbauer RH, Hammar SP, Dail DH, et al. (1997) Reproducibility of the histologic diagnosis of pneumonia among a panel of four pathologists: analysis of a gold standard. Chest 112: 458–465.
  16. Fine MJ, Auble TE, Yealy DM, Hanusa BH, Weissfeld LA, et al. (1997) A prediction rule to identify low-risk patients with community-acquired pneumonia. N Engl J Med 336: 243–250.
  17. Niederman MS, Mandell LA, Anzueto A, Bass JB, Broughton WA, et al. (2001) Guidelines for the management of adults with community-acquired pneumonia. Diagnosis, assessment of severity, antimicrobial therapy, and prevention. Am J Respir Crit Care Med 163: 1730–1754.
  18. Bartlett JG, Dowell SF, Mandell LA, File Jr TM, Musher DM, et al. (2000) Practice guidelines for the management of community-acquired pneumonia in adults. Infectious Diseases Society of America. Clin Infect Dis 31: 347–382.
  19. Mandell LA, Wunderink RG, Anzueto A, Bartlett JG, Campbell GD, et al. (2007) Infectious Diseases Society of America/American Thoracic Society consensus guidelines on the management of community-acquired pneumonia in adults. Clin Infect Dis 44 (Suppl 2): S27–72.
  20. Whittle J, Fine MJ, Joyce DZ, Lave JR, Young WW, et al. (1997) Community-acquired pneumonia: can it be defined with claims data? Am J Med Qual 12: 187–193.
  21. Skull SA, Andrews RM, Byrnes GB, Campbell DA, Nolan TM, et al. (2008) ICD-10 codes are a valid tool for identification of pneumonia in hospitalized patients aged ≥65 years. Epidemiol Infect 136: 232–240.
  22. Yu O, Nelson JC, Bounds L, Jackson LA (2011) Classification algorithms to improve the accuracy of identifying patients hospitalized with community-acquired pneumonia using administrative data. Epidemiol Infect 139: 1296–1306.
  23. Marrie TJ, Durant H, Sealy E (1987) Pneumonia–the quality of medical records data. Med Care 25: 20–24.
  24. Guevara RE, Butler JC, Marston BJ, Plouffe JF, File TM Jr, et al. (1999) Accuracy of ICD-9-CM codes in detecting community-acquired pneumococcal pneumonia for incidence and vaccine efficacy studies. Am J Epidemiol 149: 282–289.
  25. Aronsky D, Haug PJ, Lagor C, Dean NC (2005) Accuracy of administrative data for identifying patients with pneumonia. Am J Med Qual 20: 319–328.
  26. van de Garde EM, Oosterheert JJ, Bonten M, Kaplan RC, Leufkens HG (2007) International classification of diseases codes showed modest sensitivity for detecting community-acquired pneumonia. J Clin Epidemiol 60: 834–838.
  27. Niederman MS, McCombs JS, Unger AN, Kumar A, Popovian R (1998) The cost of treating community-acquired pneumonia. Clin Ther 20: 820–837.
  28. Friedman C, Alderson PO, Austin JH, Cimino JJ, Johnson SB (1994) A general natural-language text processor for clinical radiology. J Am Med Inform Assoc 1: 161–174.
  29. Hripcsak G, Friedman C, Alderson PO, DuMouchel W, Johnson SB, et al. (1995) Unlocking clinical data from narrative reports: a study of natural language processing. Ann Intern Med 122: 681–688.
  30. Hripcsak G, Austin JH, Alderson PO, Friedman C (2002) Use of natural language processing to translate clinical information from a database of 889,921 chest radiographic reports. Radiology 224: 157–163.
  31. Fiszman M, Chapman WW, Aronsky D, Evans RS, Haug PJ (2000) Automatic detection of acute bacterial pneumonia from chest X-ray reports. J Am Med Inform Assoc 7: 593–604.
  32. Chapman WW, Fizman M, Chapman BE, Haug PJ (2001) A comparison of classification algorithms to automatically identify chest X-ray reports that support pneumonia. J Biomed Inform 34: 4–14.
  33. Wilcox AB, Hripcsak G (2003) The role of domain knowledge in automating medical text report classification. J Am Med Inform Assoc 10: 330–338.
  34. Chapman WW, Haug PJ (1998) Bayesian modeling for linking causally related observations in chest X-ray reports. Proc AMIA Annu Symp: 587–591.
  35. Lagor C, Aronsky D, Fiszman M, Haug PJ (2001) Automatic identification of patients eligible for a pneumonia guideline: comparing the diagnostic accuracy of two decision support models. Stud Health Technol Inform 84: 493–497.
  36. Hripcsak G, Knirsch C, Zhou L, Wilcox A, Melton GB (2007) Using discordance to improve classification in narrative clinical databases: an application to community-acquired pneumonia. Comput Biol Med 37: 296–304.
  37. Asatryan A, Benoit S, Ma H, English R, Elkin P, et al. (2011) Detection of pneumonia using free-text radiology reports in the BioSense system. Int J Med Inform 80: 67–73.
  38. Haas JP, Mendonca EA, Ross B, Friedman C, Larson E (2005) Use of computerized surveillance to detect nosocomial pneumonia in neonatal intensive care unit patients. Am J Infect Control 33: 439–443.
  39. Mendonca EA, Haas J, Shagina L, Larson E, Friedman C (2005) Extracting information on pneumonia in infants using natural language processing of radiology reports. J Biomed Inform 38: 314–321.
  40. Aronsky D, Fiszman M, Chapman WW, Haug PJ (2001) Combining decision support methodologies to diagnose pneumonia. Proc AMIA Annu Symp: 12–16.
  41. Nolt BR, Gonzales R, Maselli J, Aagaard E, Camargo CA Jr, et al. (2007) Vital-sign abnormalities as predictors of pneumonia in adults with acute cough illness. Am J Emerg Med 25: 631–636.
  42. Melbye H, Straume B, Brox J (1992) Laboratory tests for pneumonia in general practice: the diagnostic values depend on the duration of illness. Scand J Prim Health Care 10: 234–240.
  43. Fine MJ, Stone RA, Singer DE, Coley CM, Marrie TJ, et al. (1999) Processes and outcomes of care for patients with community-acquired pneumonia: results from the Pneumonia Patient Outcomes Research Team (PORT) cohort study. Arch Intern Med 159: 970–980.
  44. Begg CB, Greenes RA (1983) Assessment of diagnostic tests when disease verification is subject to selection bias. Biometrics 39: 207–215.
  45. Metlay JP, Fine MJ (2003) Testing strategies in the initial management of patients with community-acquired pneumonia. Ann Intern Med 138: 109–118.
  46. Syrjala H, Broas M, Suramo I, Ojala A, Lahde S (1998) High-resolution computed tomography for the diagnosis of community-acquired pneumonia. Clin Infect Dis 27: 358–363.
  47. Metlay JP, Schulz R, Li YH, Singer DE, Marrie TJ, et al. (1997) Influence of age on symptoms at presentation in patients with community-acquired pneumonia. Arch Intern Med 157: 1453–1459.