
Systematic review of emergency medicine clinical practice guidelines: Implications for research and policy

  • Arjun K. Venkatesh ,

    arjun.venkatesh@yale.edu

    Affiliations Department of Emergency Medicine, Yale University School of Medicine, New Haven, CT, United States of America, Yale New Haven Hospital Center for Outcomes Research and Evaluation, New Haven, CT, United States of America

  • Dan Savage,

    Affiliation Department of Emergency Medicine, Yale University School of Medicine, New Haven, CT, United States of America

  • Benjamin Sandefur,

    Affiliation Department of Emergency Medicine, Mayo Clinic, Rochester, MN, United States of America

  • Kenneth R. Bernard,

    Affiliation Brigham and Women’s Hospital-Massachusetts General Hospital-Harvard Affiliated Emergency Medicine Residency, Boston, MA, United States of America

  • Craig Rothenberg,

    Affiliation Department of Emergency Medicine, Yale University School of Medicine, New Haven, CT, United States of America

  • Jeremiah D. Schuur

    Affiliation Department of Emergency Medicine, Brigham and Women’s Hospital, Boston, MA, United States of America

Abstract

Introduction

Over 25 years, emergency medicine in the United States has amassed a large evidence base that has been systematically assessed and interpreted through ACEP Clinical Policies. Although this question has not previously been studied in emergency medicine, prior work in other specialties has shown that nearly half of all recommendations in medical specialty practice guidelines may be based on limited or inconclusive evidence. We sought to describe the proportion of clinical practice guideline recommendations in emergency medicine that are based upon expert opinion and low-level evidence.

Methods

Systematic review of clinical practice guidelines (Clinical Policies) published by the American College of Emergency Physicians from January 1990 to January 2016. Standardized data were abstracted from each Clinical Policy, including the number and level of recommendations as well as the reported class of evidence for each cited reference. Primary outcomes were the proportion of Level C equivalent recommendations and the proportion of Class III equivalent evidence. The primary analysis was limited to current Clinical Policies; the secondary analysis included all Clinical Policies.

Results

A total of 54 Clinical Policies comprising 421 recommendations and 2801 cited references were included, with an average of 7.8 recommendations and 52 references per guideline. Of the 141 recommendations in the 19 current Clinical Policies, 13 (9.2%) were Level A, 57 (40.4%) Level B, and 71 (50.4%) Level C. Of the 845 graded references in current Clinical Policies, 67 (7.9%) were Class I, 272 (32.3%) Class II, and 506 (59.9%) Class III equivalent. Among all Clinical Policies, 200 (47.5%) recommendations were Level C equivalent, and 1371 (48.9%) references were Class III equivalent.

Conclusions

Emergency medicine clinical practice guidelines are largely based on lower classes of evidence, and a majority of recommendations are based on expert opinion or limited evidence. Emergency medicine appears to suffer from an evidence gap that should be prioritized in the national research agenda and considered by policymakers before future quality standards are developed.

Introduction

Over the last fifty years medicine has increasingly moved away from anecdotal replication of practice patterns taught during training and embraced evidence-based medicine. This transition is most strongly embodied by the widespread publication and dissemination of clinical practice guidelines by medical specialty societies, health care institutions, and governmental bodies.[1] Such guidelines aim to advance the quality of healthcare delivery by summarizing the best available evidence in order to accelerate knowledge translation and reduce variations in practice.[2, 3]

The first emergency medicine clinical practice guideline was developed by the American College of Emergency Physicians (ACEP) in 1990 for the management of patients with chest pain.[4] While emergency medicine clinical practice guidelines, published primarily by ACEP as Clinical Policies, have generated substantial discussion and debate over the years [5], these Clinical Policies serve as the foundation of educational programs, the development of quality measures, and research and advocacy efforts.

While the clinical practice guideline development process is designed to reflect the highest quality of available evidence for clinical scenarios, prior evaluations of clinical guidelines in cardiology, obstetrics and gynecology, and infectious disease found that many recommendations rest on lower classes of evidence, with the majority based on expert opinion and low-level evidence.[6–8] ACEP Clinical Policies are designed to answer critical clinical questions in emergency medicine through systematic appraisal and interpretation of the available evidence by an expert group. Often, however, Clinical Policies may be based on limited or inconclusive evidence; the ACEP Clinical Policy recommendation of thrombolytics for hemodynamically unstable patients with pulmonary embolism and the recommendation that “pain response should not be used as the sole diagnostic indicator of the underlying etiology of an acute headache” [9] reflect these two situations, respectively. Identifying gaps in the clinical evidence underlying Clinical Policy recommendations is needed to define the future research agenda and to inform policymakers seeking to develop measures of emergency care quality. To date, despite attempts to strengthen the ACEP Clinical Policy development process and growth in emergency care research funding and output, the strength of evidence supporting recommendations in emergency medicine clinical guidelines is not known.[10–13]

Accordingly, we sought to describe the proportion of Clinical Policy recommendations that are based upon expert opinion and low-level evidence, and to describe the classes of evidence supporting ACEP Clinical Policies. We also sought to examine the emergency care evidence base over time by describing trends in both recommendation levels and evidence classification.

Methods

Study design

Systematic review of American College of Emergency Physicians Clinical Policies conducted in accordance with PRISMA guidelines (S1 Appendix).

Selection of clinical practice guidelines

We included all American College of Emergency Physicians Clinical Policies listed as either “current” or “past” on the ACEP Clinical & Practice Management website, http://www.acep.org/clinicalpolicies/. ACEP Clinical Policies are the only regularly published, medical specialty society sponsored clinical practice guidelines specific to emergency care in the United States. Each Clinical Policy is published as a peer-reviewed manuscript containing evidence-based recommendations designed to guide clinical practice. A Clinical Policy was defined as “current” if it was listed as such and available for download on the ACEP Clinical & Practice Management website as of October 20, 2015. A Clinical Policy was defined as “non-current” if it was listed as “past” on the website. We did not include clinical guidelines published by other professional organizations in emergency medicine, as they have neither a regular process nor a group charged with ongoing clinical guideline development and maintenance. We also did not include clinical guidelines published primarily by other organizations and co-signed or endorsed by ACEP, as they followed a different review and writing process (Fig 1).

Fig 1. PRISMA diagram for systematic review of clinical policies.

https://doi.org/10.1371/journal.pone.0178456.g001

Data definitions

Each Clinical Policy was reviewed for three types of data elements using a standardized data collection tool in Microsoft Excel (Microsoft Corp., 2007, Redmond, WA): bibliographic, evidence-based, and recommendation-based elements.

The bibliographic data collected included the publication date, the clinical focus of the guideline (chief complaint; disease-specific; procedure/intervention), current or non-current Clinical Policy status, whether it was a revision of a prior Clinical Policy, and whether other clinical societies (e.g., the American College of Cardiology) were involved in authorship.

The evidence-based data elements were abstracted from the bibliography of each Clinical Policy, in which each reference is graded as Class I, II, or III (S1 Table). Each Clinical Policy includes a compendium of references in a single evidentiary table, so evidence grading could only be abstracted at the Clinical Policy level and is not specific to each recommendation. References in the bibliography or evidentiary table that were not assigned a class of evidence were excluded. This was particularly applicable to the oldest Clinical Policies because evidence was not always explicitly graded between 1990 and 1995, with the first comprehensive classification of evidence appearing in Clinical Policies in 1996 [14]. Because the exact definitions of Class I, Class II, and Class III evidence have evolved only minimally, we created a single set of definitions to standardize comparisons that include older Clinical Policies. Class I equivalent evidence for therapeutic interventions constitutes randomized controlled trials or meta-analyses of randomized controlled trials, while Class I equivalent evidence for diagnostic interventions or prognostication constitutes prospective cohort studies or meta-analyses of the same. Class II equivalent evidence reflects nonrandomized trials for therapeutic interventions, or retrospective studies for diagnostic or prognostic questions. Class III equivalent evidence consists of case series, case reports, expert consensus, or other reviews. The current ACEP Clinical Policy literature classification schema can be found in S1 Table.
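
To make these equivalence rules concrete, the sketch below encodes them as a simple lookup. It is an illustration only, assuming a minimal Python representation; it is not the abstraction instrument used in the study, and the design labels and function name are our own.

```python
# Illustrative encoding of the standardized evidence-class equivalence rules
# described above; labels and function name are assumptions, not study artifacts.

CLASS_III_DESIGNS = {"case series", "case report", "expert consensus", "review"}

CLASS_I_II_EQUIVALENT = {
    # (question type, study design) -> equivalent class
    ("therapeutic", "randomized controlled trial"): "I",
    ("therapeutic", "meta-analysis of RCTs"): "I",
    ("therapeutic", "nonrandomized trial"): "II",
    ("diagnostic", "prospective cohort"): "I",
    ("diagnostic", "meta-analysis of prospective cohorts"): "I",
    ("diagnostic", "retrospective study"): "II",
}

def equivalent_class(question_type: str, design: str) -> str:
    """Map a graded reference to its Class I/II/III equivalent."""
    if design in CLASS_III_DESIGNS:
        return "III"
    # Designs that fit none of the higher-class categories fall back to Class III,
    # mirroring the "other reviews" catch-all in the definitions above.
    return CLASS_I_II_EQUIVALENT.get((question_type, design), "III")

# Example: a retrospective study addressing a diagnostic question -> Class II.
print(equivalent_class("diagnostic", "retrospective study"))
```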

Individual Clinical Policy recommendations are defined as the set of strategies for which “medical literature exists to provide support for answers to critical questions” [15] faced by physicians working in emergency departments. For each recommendation we abstracted the type of recommendation (therapeutic, diagnostic, prognostic, or procedural), the strength of the recommendation (Level A, B, or C), and whether the recommendation was focused on adult or pediatric patients. In 1998, the scoring rubric for the three-tiered strength of recommendation was changed. To standardize the scoring and tabulation of recommendation strength across all Clinical Policies, a single definition was used that aligned closely with current Clinical Policies; all Clinical Policies included in the primary analyses were rated using the current definitions without modification (S1 Text). Level A equivalent recommendations are generally accepted principles for patient management that reflect a high degree of clinical certainty, meaning that they are based on Class I evidence or overwhelming Class II evidence. Level B equivalent recommendations are those based on moderate clinical certainty, such as those based on Class II evidence, including decision analyses that directly address the issue, or a strong consensus of Class III studies. Level C equivalent recommendations were defined as those based on preliminary or inconclusive evidence, committee consensus, or limited research-based evidence.

Data abstraction

Data were abstracted by three authors (BS, KB, DS) for Clinical Policies published from the first ACEP Clinical Policy in 1990 through 2009, with perfect inter-rater reliability for the extracted data, except for the levels of recommendation in one Clinical Policy [16], no longer current, in which no distinction was made between Level B and Level C recommendations. Given this ambiguity, a consensus was reached to classify all three recommendations in this Clinical Policy (0.7% of all recommendations reviewed) as Level C equivalent, as they were based on 18 studies rated as Class II evidence and 33 studies rated as Class III evidence. A single author (DS) completed the data extraction for all Clinical Policies published between January 2010 and January 2015 without double extraction, given near perfect agreement during the initial data review.

Outcomes

The primary outcomes were the proportion of all current recommendations reported as Level C equivalent and the proportion of all current evidence reported as Class III equivalent.

Primary analysis.

We report descriptive statistics, including the mean number of recommendations and the mean number of references within each Clinical Policy. We also report each outcome by year and use the Cochran-Armitage test for trend to evaluate the change in the proportion of Level C recommendations and Class III equivalent evidence over time. We use chi-square tests to describe differences in the primary outcomes between current and non-current Clinical Policies.
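
For orientation, a minimal sketch of how these analyses could be run is shown below, assuming Python with NumPy and SciPy. The counts are illustrative placeholders rather than the study data, and the cochran_armitage_trend helper is our own implementation of the standard trend statistic, not a routine from the authors' analysis.

```python
# Hedged sketch of the descriptive and inferential analyses described above.
import numpy as np
from scipy import stats

def cochran_armitage_trend(successes, totals, scores):
    """Two-sided Cochran-Armitage test for a linear trend in proportions
    across ordered groups (e.g., publication years)."""
    successes, totals, scores = map(np.asarray, (successes, totals, scores))
    n = totals.sum()
    p_bar = successes.sum() / n                      # pooled proportion
    t_stat = np.sum(scores * (successes - totals * p_bar))
    var_t = p_bar * (1 - p_bar) * (
        np.sum(totals * scores**2) - np.sum(totals * scores)**2 / n
    )
    z = t_stat / np.sqrt(var_t)
    return z, 2 * stats.norm.sf(abs(z))              # two-sided p-value

# Proportion of Level C recommendations by year (illustrative counts).
years = np.arange(1996, 2001)            # scores = publication year
level_c = np.array([3, 4, 2, 5, 6])      # Level C recommendations per year
total = np.array([8, 9, 6, 10, 11])      # all graded recommendations per year
overall_prop_level_c = level_c.sum() / total.sum()
z, p_trend = cochran_armitage_trend(level_c, total, years)

# Current vs non-current comparison of Level C proportions (illustrative 2x2
# table: rows = current / non-current, columns = Level C / not Level C).
table = np.array([[50, 50],
                  [60, 70]])
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

print(f"Level C proportion: {overall_prop_level_c:.1%}")
print(f"Cochran-Armitage z = {z:.2f}, p = {p_trend:.3f}")
print(f"Chi-square = {chi2:.2f}, p = {p_chi2:.3f}")
```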

Secondary analysis.

Because many Clinical Policies that provide recommendations for unique clinical indications remain available in the literature or in secondary clinical knowledge tools despite no longer being listed as “current” by ACEP, and because several earlier versions of Clinical Policies contain unique recommendations not carried forward into subsequent Clinical Policies, we also conducted a secondary analysis of all Clinical Policies in the dataset. For example, “Critical Issues in the Evaluation and Management of Adult Patients Presenting to the Emergency Department With Seizures” has been published in four versions (2014[17], 2004[18], 1997[19], and 1993[20]). In the primary analysis, only the 2014 iteration is included among current Clinical Policies. However, because the current 2014 Clinical Policy addresses only three of the six topics covered in its 2004 version[18], all recommendations and graded evidence in the three prior versions are included in the secondary analysis of all Clinical Policies.

Results

Between January 1990 and January 2016, ACEP published 54 Clinical Policies at a mean rate of 2.3 Clinical Policies per year (range 0–4, median 2). These Clinical Policies cover 27 unique topics; 9 are first-version Clinical Policies with no published revision, and 45 are updates of or expansions upon prior Clinical Policies. A total of 25 Clinical Policies focus on a chief complaint (e.g., headache), 21 on a specific disease (e.g., appendicitis), and 8 on a procedure or intervention (e.g., procedural sedation and analgesia).

Of all 54 Clinical Policies, 19 (35%) were classified as current, the oldest of which was published in 2003.[21] Policies were primarily authored by ACEP alone, with several guidelines (14.3%) developed collaboratively with other specialty societies.

Among the 19 current ACEP Clinical Policies, 14 were updates or revisions of prior guidelines and 5 represented the first version on a given topic. These current Clinical Policies contain 141 recommendations (range 2–13 per guideline, mean 7.4), of which 13 (9.2%) were Level A equivalent, 57 (40.4%) Level B equivalent, and 71 (50.4%) Level C equivalent recommendations. In total, the current Clinical Policies contained 845 graded references (mean 44.5, range 6–119 per Clinical Policy), of which 67 (7.9%) were Class I equivalent, 272 (32.2%) Class II equivalent, and 506 (59.9%) Class III equivalent.

Among the 14 current Clinical Policies that are the most recent revisions of 23 prior Clinical Policies, the proportion of level C equivalent recommendations increased from 36.3% in prior iterations to 44.7% in the current 14 Clinical Policies. Similarly, the proportion of Class III equivalent evidence also increased from 42.3% to 60.0%.

Of 421 total recommendations included in all 54 Clinical Policies in the dataset, 37 (8.8%) were Level A equivalent recommendations, 184 (43.7%) Level B equivalent recommendations, and 200 (47.5%) Level C equivalent recommendations.

The proportion of Level C equivalent recommendations in current Clinical Policies was not significantly different from that in non-current Clinical Policies (47.2% vs 48.4%, p = 0.82) (Table 1). There was no statistically significant trend in the proportion of Level C recommendations in guidelines between 1996, the first year of structured grading of recommendations, and January 2016 (two-sided Cochran-Armitage trend test, p = 0.30) (Table 2 and Fig 2).

Table 1. Distribution of recommendation level and evidence class in current and non-current clinical policies.

https://doi.org/10.1371/journal.pone.0178456.t001

Table 2. Distribution of recommendation levels and evidence class by year.

https://doi.org/10.1371/journal.pone.0178456.t002

A total of 2801 references were graded across all Clinical Policies, of which 278 (9.9%) were Class I equivalent, 1154 (41.2%) Class II equivalent, and 1369 (48.9%) Class III equivalent. The proportion of Class III equivalent evidence was significantly higher in current Clinical Policies than in non-current Clinical Policies (59.8% vs 44.2%, p < 0.0001). In addition, the proportion of Class III evidence in Clinical Policies increased significantly between 1996 and 2015 (two-sided Cochran-Armitage trend test, p < 0.0001).

Discussion

The development of clinical practice guidelines has become widespread across medical specialties, including emergency medicine. We found that, as in other specialties, close to half of current recommendations in emergency medicine clinical practice guidelines are based on expert opinion or lower classes of evidence rather than high-quality clinical trials. We also found that Clinical Policies increasingly rely on Level C recommendations and Class III evidence, including among the subset of Clinical Policies that are updates or revisions of prior policies.

These evidence gaps in current emergency care clinical practice guidelines reflect both the design of the ACEP Clinical Policy development and reporting process and gaps in the emergency medicine evidence base. Some medical specialties develop clinical practice guidelines around the existing evidence base in a manner consistent with several international consensus standards for guideline development.[22] ACEP Clinical Policies, however, follow a distinct process: each policy answers a set of critical questions thought to be of key clinical importance, which are defined prior to the evidence review. As such, critical questions may address areas in which no definitive evidence exists, requiring consensus from content experts in the face of research equipoise. Such expert consensus may be of particular value when clinical recommendations are made for vulnerable populations, such as the frail elderly or pregnant women, who are commonly excluded from clinical trials. Conversely, expert consensus risks codifying, or prolonging, anecdotal clinical practices based on limited evidence through the authority of a clinical practice guideline. In fact, prior work has shown that expert opinion may on occasion produce recommendations that depart from accepted practice or even endorse potentially harmful treatments despite contrary clinical evidence.[23, 24] The Clinical Policy reporting process could be improved by creating a recommendation level reserved for expert opinion, distinct from the Level C used for lower quality empiric investigation, as proposed by the Oxford Centre for Evidence-Based Medicine and the Institute of Medicine.[25, 26] A distinct designation for expert consensus recommendations would also permit the preliminary establishment of clinical standards and guidance prior to the publication of more definitive, higher classification research in the future.

A growing proportion of emergency care clinical practice guideline content is based on lower classes of evidence despite a growing body of emergency care research and publications. This evidence deficit may be acceptable for recommendations that are unlikely to warrant a randomized clinical trial, such as the recommendation to obtain a blood glucose level in seizure patients, which is consistent with good clinical practice but unlikely ever to meet the Level A threshold. On the other hand, clinical recommendations such as the use of both a parenteral benzodiazepine and haloperidol to produce more rapid sedation than monotherapy in the acutely agitated patient [27], or the proper timing of initiation of an angiotensin-converting enzyme inhibitor in the initial management of heart failure [28], are well defined clinical scenarios that should be targets of controlled clinical trials worthy of a Class I evidence designation. Acknowledging that generating Class I evidence to support all clinical practice in the emergency department is impractical, federal agencies such as the National Institutes of Health and the Agency for Healthcare Research and Quality should use critical appraisals of clinical practice guidelines such as this one to identify clinically important topics for which an evidence gap necessitates funding future investigation. Furthermore, researchers seeking to highlight the importance of research aims or to emphasize topics for investigation should look to gaps in Clinical Policies for such support.

In addition to research gaps, we also identified several opportunities to improve the quality of each Clinical Policy based on recent recommendations of the IOM.[1] While our systematic review did not evaluate aspects of transparency, conflict of interest disclosure, writing group composition, or external review promoted by the IOM, we did find that ACEP Clinical Policies broadly meet the IOM's rating and reporting guidance. However, ACEP Clinical Policies, unlike those of other specialty societies such as the American College of Cardiology, do not meet the IOM recommendation to provide an evidence rating for each individual recommendation. The practice of providing only guideline-wide evidence summary tables may create confusion about the exact evidence base for a given recommendation: did a single observational study water down a recommendation in the face of limited prospective data, or did a single, small prospective clinical trial buoy up a recommendation? In addition, while 14 of the 19 current Clinical Policies are updates or revisions of prior Clinical Policies, we found that archived Clinical Policies contain many recommendations not subject to re-discussion or revision. As a result, some topics, or “critical questions,” still relevant to current practice may not undergo the regular re-evaluation recommended by the IOM. For example, among the Clinical Policies addressing adult patients with seizures, the 2004 version addresses the possible utility of diagnostic evaluations such as serum glucose and sodium levels in patients with a first-time seizure, no comorbidities, and a return to baseline[18], a topic no longer mentioned in the current 2014 Clinical Policy.[17] While leaving prior recommendations embedded in archived guidelines may be necessary given the publication requirements of journals and specialty society reports, this practice may underestimate the strength of evidence supporting several clinical practices and hinder the continued dissemination of important knowledge. Facing similar challenges as the speed and quantity of publications rapidly rise [29], the American Heart Association has developed a website that can be quickly updated to contain the most up-to-date recommendations. Such a process would yield clinical practice guidelines of higher relevance to practice and likely greater clinician use at the bedside.[30, 31]

Our findings also carry several implications for policymakers. As formal criteria for classes of evidence, or requirements that clinical practice guidelines support the endorsement or use of quality measures, are adopted, the content and quality of Clinical Policies become of utmost importance for clinical standards used in new pay-for-performance programs.[32–34] Policymakers must recognize the wide variability in the strength of evidence underlying Clinical Policies when selecting new targets for quality measurement, as well as when responding to quality measures developed outside of emergency medicine but with relevance to emergency care. For example, the ACEP Clinical Policy on critical issues in the evaluation and management of adult patients presenting to the emergency department with suspected pulmonary embolism includes one Level B recommendation and two Level C recommendations that serve as the basis for the quality measure “Appropriate Emergency Department Utilization of CT for Pulmonary Embolism” included in the new CMS-approved Clinical Emergency Data Registry (CEDR).[35] In contrast, after CMS proposed a quality measure for the utilization of head CT for headache, the ACEP Clinical Policy on critical issues in the evaluation and management of adult patients presenting to the emergency department with acute headache[9] was cited in an evaluation of the measure that subsequently led to its withdrawal due to lack of supporting evidence.[36] A recent survey of the Guidelines International Network indicates that emergency medicine is not alone as a specialty seeking better integration of clinical guideline and quality measure development[37], particularly given the greater attention paid to the methodology of clinical practice guideline development compared with the use of evidence in quality and accountability measurement. Future efforts to develop metrics that closely mirror the narrow specifications of higher-level recommendations based on higher classes of evidence will help ensure that quality measures do not result in “distortions to care,” or unintended consequences such as overtesting or the expansion of care processes to unintended populations.[38]

Limitations

Several limitations of our work warrant mention. First, our systematic review was limited to clinical practice guidelines promulgated by the major emergency medicine specialty society in the United States and did not include the many clinical practice guidelines and recommendations relevant to emergency care contained within guidelines published by other specialty societies or in other countries. Second, the current design of ACEP Clinical Policies, which publish a summary evidence table, precludes any assessment of the strength of evidence supporting individual recommendations; as such, our conclusions reflect a broad assessment of current Clinical Policies rather than a detailed evaluation of individual clinical questions. Finally, our review abstracted recommendations and strength designations as reported in each Clinical Policy and did not re-assess the strength of each recommendation or guideline using an independent scoring tool such as AGREE II.[39] As such, our analysis better reflects current clinical practice guidelines as presented to readers, but a detailed study of Clinical Policy components using established clinical practice guideline review tools warrants future investigation.

Conclusions

Nearly half of current emergency medicine clinical practice guideline recommendations are based on expert opinion and low-level evidence rather than clinical trial evidence. Despite a rapidly expanding body of published emergency care research, clinical practice guidelines increasingly contain consensus-based recommendations based on lower classes of evidence. These evidence gaps in clinical guidelines highlight priorities for the future emergency care research agenda. Policymakers should be aware of the low quality of evidence behind many guideline recommendations when developing future quality standards.

Supporting information

S1 Text. Current ACEP definitions of strength of recommendations.

https://doi.org/10.1371/journal.pone.0178456.s002

(DOCX)

S1 Table. ACEP literature classification schema.

https://doi.org/10.1371/journal.pone.0178456.s003

(DOCX)

Author Contributions

  1. Conceptualization: AKV DS JS.
  2. Data curation: DS KRB BS CR.
  3. Formal analysis: DS CR.
  4. Funding acquisition: AKV.
  5. Investigation: AKV DS BS KRB CR JS.
  6. Methodology: AV DS CR.
  7. Project administration: AV.
  8. Resources: AV.
  9. Software: AV DS CR.
  10. Supervision: AV JS.
  11. Validation: DS CR.
  12. Visualization: DS CR.
  13. Writing – original draft: AV DS.
  14. Writing – review & editing: AKV DS BS KRB CR JS.

References

  1. Graham R, Mancher M, Miller Wolman D, Greenfield S, Steinberg E. Clinical Practice Guidelines We Can Trust. Washington (DC): National Academies Press; 2011.
  2. Shekelle PG, Woolf SH, Eccles M, Grimshaw J. Developing clinical guidelines. West J Med. 1999;170(6):348–51. pmid:18751155; PubMed Central PMCID: PMC1305691.
  3. Shekelle PG, Ortiz E, Rhodes S, Morton SC, Eccles MP, Grimshaw JM, et al. Validity of the Agency for Healthcare Research and Quality clinical practice guidelines: how quickly do guidelines become outdated? JAMA. 2001;286(12):1461–7. pmid:11572738.
  4. Clinical Policy for Management of Adult Patients Presenting with a Chief Complaint of Chest Pain, with No History of Trauma. American College of Emergency Physicians; 1990.
  5. Frumkin K. The chest pain policy. Ann Emerg Med. 1991;20(7):832–3. pmid:2064111.
  6. Tricoci P, Allen JM, Kramer JM, Califf RM, Smith SC Jr. Scientific evidence underlying the ACC/AHA clinical practice guidelines. JAMA. 2009;301(8):831–41. pmid:19244190.
  7. Wright JD, Pawar N, Gonzalez JS, Lewin SN, Burke WM, Simpson LL, et al. Scientific evidence underlying the American College of Obstetricians and Gynecologists' practice bulletins. Obstet Gynecol. 2011;118(3):505–12. pmid:21826038.
  8. Atkins D, Eccles M, Flottorp S, Guyatt GH, Henry D, Hill S, et al. Systems for grading the quality of evidence and the strength of recommendations I: critical appraisal of existing approaches. The GRADE Working Group. BMC Health Serv Res. 2004;4(1):38. pmid:15615589; PubMed Central PMCID: PMC545647.
  9. Edlow JA, Panagos PD, Godwin SA, Thomas TL, Decker WW, American College of Emergency Physicians. Clinical policy: critical issues in the evaluation and management of adult patients presenting to the emergency department with acute headache. Ann Emerg Med. 2008;52(4):407–36. pmid:18809105.
  10. Napoli AM, Jagoda A. Clinical Policies: Their History, Future, Medical Legal Implications, and Growing Importance to Physicians. Journal of Emergency Medicine. 33(4):425–32. pmid:17976764.
  11. Schriger DL, Cantrill SV, Greene CS. The origins, benefits, harms, and implications of emergency medicine clinical policies. Ann Emerg Med. 1993;22(3):597–602. pmid:8442553.
  12. Courtney DM, Neumar RW, Venkatesh AK, Kaji AH, Cairns CB, Lavonas E, et al. Unique characteristics of emergency care research: scope, populations, and infrastructure. Acad Emerg Med. 2009;16(10):990–4. pmid:19799578.
  13. Brown J. National Institutes of Health Support for Clinical Emergency Care Research, 2011 to 2014. Annals of Emergency Medicine. 68(2):164–71. pmid:26973176.
  14. Clinical policy for the initial approach to adolescents and adults presenting to the emergency department with a chief complaint of headache. American College of Emergency Physicians. Ann Emerg Med. 1996;27(6):821–44. pmid:8644978.
  15. American College of Emergency Physicians Clinical Policies Subcommittee on Thoracic Aortic Dissection, Diercks DB, Promes SB, Schuur JD, Shah K, Valente JH, et al. Clinical policy: critical issues in the evaluation and management of adult patients with suspected acute nontraumatic thoracic aortic dissection. Ann Emerg Med. 2015;65(1):32–42.e12. pmid:25529153.
  16. Practice parameter: neuroimaging in the emergency patient presenting with seizure (summary statement). American College of Emergency Physicians, American Academy of Neurology, American Association of Neurological Surgeons, American Society of Neuroradiology. Ann Emerg Med. 1996;28(1):114–8. pmid:8669731.
  17. Huff JS, Melnick ER, Tomaszewski CA, Thiessen ME, Jagoda AS, Fesmire FM, et al. Clinical policy: critical issues in the evaluation and management of adult patients presenting to the emergency department with seizures. Ann Emerg Med. 2014;63(4):437–47.e15. pmid:24655445.
  18. American College of Emergency Physicians Clinical Policies Committee, Clinical Policies Subcommittee on Seizures. Clinical policy: critical issues in the evaluation and management of adult patients presenting to the emergency department with seizures. Ann Emerg Med. 2004;43(5):605–25. pmid:15111920.
  19. Clinical policy for the initial approach to patients presenting with a chief complaint of seizure who are not in status epilepticus. American College of Emergency Physicians. Ann Emerg Med. 1997;29(5):706–24. pmid:9140263.
  20. Clinical policy for the initial approach to patients presenting with a chief complaint of seizure, who are not in status epilepticus. American College of Emergency Physicians. Ann Emerg Med. 1993;22(5):875–83. pmid:8470849.
  21. American College of Emergency Physicians Clinical Policies Committee, Clinical Policies Subcommittee on Pediatric Fever. Clinical policy for children younger than three years presenting to the emergency department with fever. Ann Emerg Med. 2003;42(4):530–45. pmid:14520324.
  22. Turner T, Misso M, Harris C, Green S. Development of evidence-based clinical practice guidelines (CPGs): comparing approaches. Implement Sci. 2008;3:45. pmid:18954465; PubMed Central PMCID: PMC2584093.
  23. Antman EM, Lau J, Kupelnick B, Mosteller F, Chalmers TC. A comparison of results of meta-analyses of randomized control trials and recommendations of clinical experts: treatments for myocardial infarction. JAMA. 1992;268(2):240–8. pmid:1535110.
  24. Finfer S. Expert consensus: a flawed process for producing guidelines for the management of fluid therapy in the critically ill. BJA: British Journal of Anaesthesia. 2014;113(5):735–7. pmid:24893783.
  25. Oxford Centre for Evidence-based Medicine – Levels of Evidence (March 2009). Centre for Evidence-Based Medicine; 2009 [cited 2017 Feb 10]. Available from: http://www.cebm.net/oxford-centre-evidence-based-medicine-levels-evidence-march-2009/.
  26. Institute of Medicine. Clinical Practice Guidelines We Can Trust. Graham R, Mancher M, Wolman DM, Greenfield S, Steinberg E, editors. Washington, DC: The National Academies Press; 2011. 290 p.
  27. Lukens TW, Wolf SJ, Edlow JA, Shahabuddin S, Allen MH, Currier GW, et al. Clinical policy: critical issues in the diagnosis and management of the adult psychiatric patient in the emergency department. Ann Emerg Med. 2006;47(1):79–99. pmid:16387222.
  28. Silvers SM, Howell JM, Kosowsky JM, Rokos IC, Jagoda AS, American College of Emergency Physicians. Clinical policy: critical issues in the evaluation and management of adult patients presenting to the emergency department with acute heart failure syndromes. Ann Emerg Med. 2007;49(5):627–69. pmid:17408803.
  29. Druss BG, Marcus SC. Growth and decentralization of the medical literature: implications for evidence-based medicine. J Med Libr Assoc. 2005;93(4):499–501. pmid:16239948; PubMed Central PMCID: PMC1250328.
  30. Jacobs AK, Anderson JL, Halperin JL. The Evolution and Future of ACC/AHA Clinical Practice Guidelines: A 30-Year Journey. A Report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines. Journal of the American College of Cardiology. 2014;64(13):1373–84. pmid:25103073.
  31. Lang ES, Wyer PC, Haynes RB. Knowledge Translation: Closing the Evidence-to-Practice Gap. Annals of Emergency Medicine. 2007;49(3):355–63. http://dx.doi.org/10.1016/j.annemergmed.2006.08.022. pmid:17084943.
  32. The NHS Atlas of Variation in Healthcare: Reducing unwarranted variation to increase value and improve quality. RightCare; 2015.
  33. Guidance for Evaluating the Evidence Related to the Focus of Quality Measurement and Importance to Measure and Report. National Quality Forum; 2011.
  34. McGlynn EA. Selecting Common Measures of Quality and System Performance. Medical Care. 2003;41(1):I-39–I-47. pmid:00005650-200301001-00005.
  35. CEDR (Clinical Emergency Data Registry). 2014. Available from: http://www.acep.org/cedr.
  36. Schuur JD, Brown MD, Cheung DS, Graff LIV, Griffey RT, Hamedani AG, et al. Assessment of Medicare's Imaging Efficiency Measure for Emergency Department Patients With Atraumatic Headache. Annals of Emergency Medicine. 60(3):280–90.e4. pmid:22364867.
  37. Blozik E, Nothacker M, Bunk T, Szecsenyi J, Ollenschläger G, Scherer M. Simultaneous development of guidelines and quality indicators – how do guideline groups act? A worldwide survey. International Journal of Health Care Quality Assurance. 2012;25(8):712–29. pmid:23276064.
  38. Saver BG, Martin SA, Adler RN, Candib LM, Deligiannidis KE, Golding J, et al. Care that Matters: Quality Measurement and Health Care. PLoS Med. 2015;12(11):e1001902. pmid:26574742; PubMed Central PMCID: PMC4648519.
  39. Brouwers MC, Kho ME, Browman GP, Burgers JS, Cluzeau F, Feder G, et al. AGREE II: advancing guideline development, reporting and evaluation in health care. CMAJ. 2010;182(18):E839–42. pmid:20603348; PubMed Central PMCID: PMC3001530.