
The Impact of eHealth on the Quality and Safety of Health Care: A Systematic Overview

  • Ashly D. Black,

    Affiliation eHealth Unit, Department of Primary Care and Public Health, Imperial College London, London, United Kingdom

  • Josip Car,

    Affiliation eHealth Unit, Department of Primary Care and Public Health, Imperial College London, London, United Kingdom

  • Claudia Pagliari,

    Affiliation eHealth Research Group, Centre for Population Health Sciences, The University of Edinburgh, Edinburgh, United Kingdom

  • Chantelle Anandan,

    Affiliation eHealth Research Group, Centre for Population Health Sciences, The University of Edinburgh, Edinburgh, United Kingdom

  • Kathrin Cresswell,

    Affiliation eHealth Research Group, Centre for Population Health Sciences, The University of Edinburgh, Edinburgh, United Kingdom

  • Tomislav Bokun,

    Affiliation eHealth Unit, Department of Primary Care and Public Health, Imperial College London, London, United Kingdom

  • Brian McKinstry,

    Affiliation eHealth Research Group, Centre for Population Health Sciences, The University of Edinburgh, Edinburgh, United Kingdom

  • Rob Procter,

    Affiliation National Centre for e-Social Science, University of Manchester, Manchester, United Kingdom

  • Azeem Majeed,

    Affiliation Department of Primary Care and Public Health, Imperial College London, London, United Kingdom

  • Aziz Sheikh

    aziz.sheikh@ed.ac.uk

    Affiliation eHealth Research Group, Centre for Population Health Sciences, The University of Edinburgh, Edinburgh, United Kingdom

Abstract

Background

There is considerable international interest in exploiting the potential of digital solutions to enhance the quality and safety of health care. Implementations of transformative eHealth technologies are underway globally, often at very considerable cost. In order to assess the impact of eHealth solutions on the quality and safety of health care, and to inform policy decisions on eHealth deployments, we undertook a systematic review of systematic reviews assessing the effectiveness and consequences of various eHealth technologies on the quality and safety of care.

Methods and Findings

We developed novel search strategies, conceptual maps of health care quality, safety, and eHealth interventions, and then systematically identified, scrutinised, and synthesised the systematic review literature. Major biomedical databases were searched to identify systematic reviews published between 1997 and 2010. Related theoretical, methodological, and technical material was also reviewed. We identified 53 systematic reviews that focused on assessing the impact of eHealth interventions on the quality and/or safety of health care and 55 supplementary systematic reviews providing relevant supportive information. This systematic review literature was found to be generally of substandard quality with regard to methodology, reporting, and utility. We thematically categorised eHealth technologies into three main areas: (1) storing, managing, and transmitting data; (2) clinical decision support; and (3) facilitating care from a distance. We found that despite support from policymakers, there was relatively little empirical evidence to substantiate many of the claims made in relation to these technologies. Whether the relatively few solutions shown to improve quality and safety would remain successful if deployed beyond the contexts in which they were originally developed has yet to be established. Importantly, best-practice guidelines on effective development and deployment strategies are lacking.

Conclusions

There is a large gap between the postulated and empirically demonstrated benefits of eHealth technologies. In addition, there is a lack of robust research on the risks of implementing these technologies and their cost-effectiveness has yet to be demonstrated, despite being frequently promoted by policymakers and “techno-enthusiasts” as if it were a given. In the light of the paucity of evidence in relation to improvements in patient outcomes, as well as the lack of evidence on their cost-effectiveness, it is vital that future eHealth technologies are evaluated against a comprehensive set of measures, ideally throughout all stages of the technology's life cycle. Such evaluation should be characterised by careful attention to socio-technical factors to maximise the likelihood of successful implementation and adoption.

Please see later in the article for the Editors' Summary

Editors' Summary

Background

There is considerable international interest in exploiting the potential of digital health care solutions, often referred to as eHealth—the use of information and communication technologies—to enhance the quality and safety of health care. Large-scale expenditure on eHealth—such as electronic health records, picture archiving and communication systems, ePrescribing, associated computerized provider order entry systems, and computerized decision support systems—often comes at very considerable cost and has tended to be justified on the grounds that these technologies are efficient and cost-effective means for improving health care. In 2005, the World Health Assembly passed an eHealth resolution (WHA 58.28) that acknowledged, “eHealth is the cost-effective and secure use of information and communications technologies in support of health and health-related fields, including health-care services, health surveillance, health literature, and health education, knowledge and research,” and urged member states to develop and implement eHealth technologies. Since then, implementing eHealth technologies has become a main priority for many countries. For example, England has invested at least £12.8 billion in a National Programme for Information Technology for the National Health Service, and the Obama administration in the United States has committed to a US$38 billion eHealth investment in health care.

Why Was This Study Done?

Despite the wide endorsement of and support for eHealth, the scientific basis of claims about its benefits—which are repeatedly made and often uncritically accepted—remains to be firmly established. A robust evidence-based perspective on the advantages of eHealth could help to suggest priority areas that have the greatest potential for benefit to patients and also to inform international eHealth deliberations on costs. Therefore, in order to better inform the international community, the authors systematically reviewed the published systematic review literature on eHealth technologies and evaluated the impact of these technologies on the quality and safety of health care delivery.

What Did the Researchers Do and Find?

The researchers divided eHealth technologies into three main categories: (1) storing, managing, and transmitting data; (2) clinical decision support; and (3) facilitating care from a distance. Then, implementing methods based on those developed by the Cochrane Collaboration and the NHS Service Delivery and Organisation Programme, the researchers used detailed search strategies and maps of health care quality, safety, and eHealth interventions to identify relevant systematic reviews (and related theoretical, methodological, and technical material) published between 1997 and 2010. Using these techniques, the researchers retrieved a total of 46,349 references from which they identified 108 reviews. The 53 reviews that the researchers finally selected (and critically reviewed) provided the main evidence base for assessing the impact of eHealth technologies in the categories selected.

In their systematic review of systematic reviews, the researchers included electronic health records and picture archiving and communication systems in their evaluation of category 1, and computerized provider (or physician) order entry, e-prescribing, and computerized decision support systems (clinical information systems that integrate clinical and demographic patient information to support clinician decision making) in category 2; technologies in category 3 (facilitating care from a distance) are the subject of follow-on work.

The researchers found that many of the clinical claims made about the most commonly used eHealth technologies were not substantiated by empirical evidence. The evidence base in support of eHealth technologies was weak and inconsistent and, importantly, there was insubstantial evidence to support the cost-effectiveness of these technologies. For example, the researchers only found limited evidence that some of the many presumed benefits could be realized; importantly, they also found some evidence that introducing these new technologies may on occasion generate new risks, such as prescribers becoming over-reliant on clinical decision support for e-prescribing or overestimating its functionality, resulting in decreased practitioner performance.

What Do These Findings Mean?

The researchers found that, despite the wide support for eHealth technologies and the claims frequently made by policy makers when constructing business cases to raise funds for large-scale eHealth projects, there is as yet relatively little empirical evidence to substantiate many of these claims. In addition, even for the eHealth technology tools that have proven to be successful, there is little evidence to show that such tools would continue to be successful beyond the contexts in which they were originally developed. Therefore, in light of the lack of evidence in relation to improvements in patient outcomes, as well as the lack of evidence on their cost-effectiveness, the authors say that future eHealth technologies should be evaluated against a comprehensive set of measures, ideally throughout all stages of the technology's life cycle, and include socio-technical factors to maximize the likelihood of successful implementation and adoption in a given context. Furthermore, it is equally important that eHealth projects that have already been commissioned are subject to rigorous, multidisciplinary, and independent evaluation.

Additional Information

Please access these websites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.1000387.

Introduction

Implementations of potentially transformative eHealth technologies are currently underway internationally, often with significant impact on national expenditure. England has, for example, invested at least £12.8 billion in a National Programme for Information Technology (NPfIT) for the National Health Service, and the Obama administration in the United States (US) has similarly committed to a US$38 billion eHealth investment in health care [1]. Such large-scale expenditure has been justified on the grounds that electronic health records (EHRs), picture archiving and communication systems (PACS), electronic prescribing (ePrescribing) and associated computerised provider (or physician) order entry systems (CPOE), and computerised decision support systems (CDSSs) will help address the problems of variable quality and safety in modern health care. However, the scientific basis of such claims—which are repeatedly made and seemingly uncritically accepted—remains to be established [2]–[7].

Moving this agenda forward thus requires a scientifically informed perspective. However, there remains a disparity between the evidence-based principles that underpin health care generally and the political, pragmatic, and commercial drivers of decision making in the commissioning of eHealth tools and services. Obtaining an evidence-informed perspective on the current situation may serve to ground unrealistic expectations that might hinder longer-term progress within the field, help to suggest priorities by identifying areas with greatest potential for benefit, and also inform ongoing deliberations on eHealth implementations that are being considered internationally.

To inform these global deliberations, we systematically reviewed the preexisting systematic review literature on eHealth technologies and their impact on the quality and safety of health care delivery. We synthesised and contextualised our findings with the broader theoretical and methodological literature with a view to producing a comprehensive and accessible overview of the field. We present here a synopsis and updated version of a much larger recently published report covering the period 1997–2010 [8].

Methods

Overview of Methods

Systematic reviews of reviews have been particularly advocated to inform policy, clinical, and research deliberations by providing an evidence-based summary of inter-related technologies [9]. Our approach involved drawing on established systematic review methodology (i.e., those developed by The Cochrane Collaboration) to ensure rigour by minimising the risk of bias [10]; we also drew on more novel methods of evidence synthesis (i.e., those developed by the UK National Health Service [NHS] Service Delivery and Organisation Programme) with the aim of producing an overview that we hoped would prove useful to decision makers [11]. We present here a summary of the methods used.

Developmental Work

Inherent difficulties associated with systematic reviews of health care organisation and delivery interventions include the considerable effort required at the outset to facilitate their conduct [9]. Accordingly, we began with an in-depth exploration of the fields of health care quality and safety, as well as eHealth functionalities used in health care delivery. This exploration entailed conceptually mapping the fields to understand the various processes involved as well as how these relate to each other.

For quality and safety considerations, we identified existing taxonomies and frameworks to facilitate this conceptual mapping exercise, which helped to delineate the scope of our work. For the field of eHealth, we drew from existing team members' conceptual and empirical work to aid our construction of a conceptual map for eHealth technologies [12],[13]. This exercise allowed us to categorise interventions with regards to over-arching similarities. We characterised eHealth technologies as having three main overlapping functions: (1) to enable the storage, retrieval, and transmission of data; (2) to support clinical decision making; and (3) to facilitate remote care. Given the strategic focus of the English National Programme for Information Technology (NPfIT) (and other similar large-scale programmes) on electronic record and professional decision support systems [1], the first two functions were prioritised in this initial phase of our work. The current reported work thus concerns the related areas of EHRs, PACS, CPOEs, ePrescribing, and computerised systems for supporting clinical decision making. Remote care and consumer health informatics are the subjects of a subsequent 3-y research enquiry, which is currently in progress.

Search Strategy

We drew on established Cochrane-based systematic review principles to search for relevant systematic reviews. An inclusive string of MeSH and free-text terms (Text S1) was developed to query PubMed/MEDLINE, EMBASE, and the Cochrane Library for secondary research reports published from 1997 up to 2007, with no restrictions placed on language. The bibliographies of reports identified as potentially relevant were reviewed, as was a catalogue of secondary research amassed through various contributions by team members. Additional searches of key health informatics resources, namely the conference proceedings and publication databases of the American Medical Informatics Association and the Agency for Healthcare Research and Quality, were also undertaken. Finally, the Internet was searched using the Google and Google Scholar search engines. Searches were periodically updated to ensure that the most recent publications were included, with the last update occurring at the end of April 2010.

Selection and Critical Appraisal of Systematic Reviews

On the basis of the areas identified for prioritisation, we developed a detailed list of interventions that were to be included/excluded (Text S2). End users of applicable interventions were limited to health care professionals; any findings relating to patient-focused interventions were therefore excluded. Of interest were systematic reviews that focused on the assessment of patient, practitioner, or organisational outcomes. We detailed the following methodological criteria for the identification of systematic reviews: (1) reference to the study as being a systematic review by the authors within the title, abstract, or text; and/or (2) evidence from the description of the methods that systematic review principles had been utilised in searching and appraising the evidence.

All systematic reviews identified as potentially suitable were assessed for inclusion by two independent reviewers, with arbitration by a third reviewer if necessary. Systematic reviews meeting the above criteria, henceforth referred to as “reviews,” were independently critically appraised by two reviewers, and relevant data were abstracted. Systematic reviews not primarily concerned with assessing impact on patients, professionals, or the organisation, but nonetheless intervention focused, were drawn on to provide additional contextual information. These supplementary systematic reviews (henceforth referred to as “supplementary reviews”) were not subjected to formal critical appraisal.

Critical appraisal was undertaken using an adapted version of the Critical Appraisal Skills Programme (CASP) tool for systematic reviews [14]. These modifications were informed by the growing literature regarding both the methodological and reporting issues with primary research in health informatics (Table S1). The details of this process and the tool's associated properties will be the subject of a separate publication in due course.

Data Synthesis

A standard approach was taken for each of the eHealth technologies of interest. Definitions were first clarified, and then the individual use and broader scope for deployment were conceptualised. Juxtaposing this with the aforementioned conceptual maps of the fields of eHealth, quality, and safety provided a literature-based framework for delineating the principal theorised benefits and risks associated with each intervention. We used this framework to guide synthesis of the empirically demonstrated benefits and risks of implementing eHealth technologies.

The body of literature identified was too diverse to allow quantitative synthesis of empirical evidence and we therefore undertook a narrative synthesis. This synthesis involved initially describing the technologies and outcomes studied, using the above-described framework, for each of the included reviews, which was followed by developing a summary of our assessment of and the key findings from each review (Table S2). We then employed a modified version of the World Health Organization's Health Evidence Network system for appraising public health evidence, which classifies evidence into three main categories, i.e., strong, moderate, or weak; this assessment being based on a combination of the overall consistency, quality, and volume of evidence uncovered. These review-derived data were then thematically synthesised in relation to each of the technologies under consideration, drawing on key findings from the additional reviews, as appropriate [8].

Results

Our searches retrieved a total of 46,349 references, from which we selected a total of 108 reviews for inclusion (Figure 1). Our final selection of 53 reviews provided the main empirical evidence base in relation to assessing the impact of the selected eHealth technologies (see Table 1 for our critical appraisal of these studies) [15]–[67], full details of which can be found in Table S2. An additional 55 supplementary reviews provided context to the findings [68]–[122], aiding in their interpretation [123]. In the case of systematic review updates, only the most recent review in a series of updates was selected. In the case of full and summary publications, we drew on the more substantive reports. Three related reviews – an update, a fuller report, and its more concise counterpart – were an exception because the reports were complementary rather than duplicative [22],[55],[56].

Table 1. Critical appraisal of “reviews” (see legend for description of quality assessment criteria).

https://doi.org/10.1371/journal.pmed.1000387.t001

Data Storage, Management, and Retrieval Systems

Electronic health records.

The EHR is a complex construct encompassing digitised health care records and the information systems into which these are embedded [8]. Whilst there are a number of operational definitions, the US National Institute of Standards and Technology defines an EHR as “a longitudinal collection of patient-centric health care information available across providers, care settings, and time. It is a central component of an integrated health information system” [124]. EHRs can be used for the digital input, storage, display, retrieval, printing, and sharing of information contained in a patient's health record [8]. We found that these systems vary on multiple dimensions, including levels of sophistication, detail, data source, timeframe (single service encounter to complete health record), and extent of integration (across intra- and interservice boundaries). In addition to patient histories and details of recent care, these records may also incorporate digital images and scanned documents. More detailed EHRs often also include nonclinical data relevant to health care administration and/or planning, such as bed management and commissioning data. EHRs can therefore be used by a variety of end users such as clinicians, administrators, and patients themselves. EHRs can also have varying degrees of added clinical functionality, including the ability to interface with a digital PACS, enter orders electronically (i.e., CPOE), prescribe electronically (ePrescribing), and access CDSSs.

The theorised benefits and risks associated with EHRs are largely related to data storage and management functionality. These functions include increased accessibility, legibility, “searchability,” manipulation, transportation, sharing, and preservation of electronic data. Consequently, improved organisational efficiency and secondary uses of data are typically amongst the most commonly expected benefits. However, digitising health records can also introduce new risks. Paper persistence can result in threats to patient safety, unsecured networks can lead to illegitimate access, and increased time needed to document and retrieve patient data can result in organisational inefficiency. Moreover, the dynamic of the patient-provider interaction could become less personal with the intrusion of the computer as a “third person” in the consultation. If anticipated benefits are not realised, the EHR may ultimately prove cost-ineffective.

Although a number of reviews purporting to assess the impact of EHRs were found, many of these in fact investigated auxiliary systems such as CDSS, CPOE, and ePrescribing. As a result, most of the impacts assessed were more relevant to these other systems. We found only anecdotal evidence of the fundamental expected benefits and risks relating to the organisational efficiency resulting from the storage and management facilities within the EHR and thus the potential for secondary uses (Table 2). We did find, however, a small amount of secondary research relating to time efficiency for some health care professionals and administrators and to data quality (in particular legibility, completeness, and comprehensiveness), which demonstrated weak evidence of benefit for both. Risks were largely ignored, apart from anecdotal evidence of time-costs associated with recording of data due to both end-user skill and the inflexibility of structured data, increased costs of EHRs, and a decrease in patient-centeredness within the consultation (Table 3).

Picture archiving and communication systems.

PACS are clinical information systems used for the acquisition, archival, and post-processing distribution of digital images. An image must either be directly acquired using digital radiography or be digitised from a paper-based format. It can be stored using an electronic, magnetic, or optical storage device. PACS can be integrated or interface with EHRs and CDSSs, or be stand-alone systems.

Much like the digitisation of health records, certain benefits – i.e., accessibility, image (rather than data) quality, searchability, transportation, sharing, and preservation – can be expected from the digitisation of medical images, which were previously film based. Again, certain improvements to organisational efficiency should in theory follow on from this digitisation, including time-savings, continuity of care, and ability to remotely view images. Conversely, digitising medical images can lead to decreased organisational efficiency if increased time is needed for retrieval owing to the difficulties associated with navigating a new or cumbersome system or in the event of system downtime. If the potential benefits of a PACS implementation are not realised, high expenditure might render the application cost-inefficient.

Although only three reviews on PACS were located, the impacts assessed in these reviews were, in contrast to the reviews on EHRs, more congruent with the theoretically derived benefits (Table 4). This assessment involved a focus on improved organisational efficiency through time savings resulting from increased productivity of radiology services, reduced transit time, and improved access to new, recently stored, and archived images, as well as reducing physical space requirements for images; there was also an interest in the assessment of costs relating to purchasing and processing film. Worth noting, however, were the transient negative impact of implementation and issues with access due to system “loss” and downtime; access was sometimes impeded by the new workflows, which could result in a decrease in opportunistic interactions between clinicians and radiologists (Table 5). Overall, despite some promising findings, the weak evidence for the beneficial impact of digitising medical images is largely due to a low volume of research and somewhat inconsistent findings across studies. For example, the overall cost-effectiveness of systems could not be determined, as the findings from economic analyses were often contradictory and of poor quality.

Supporting Clinical Decision Making

Computerised provider (or physician) order entry.

CPOE systems are typically used by clinicians to enter, modify, review, and communicate orders; and return results for laboratory tests, radiological images, and referrals (for pharmacy see ePrescribing) [8]. These systems can be integrated within EHRs and/or integrate or interface with CDSSs. They not only integrate orders (similar to EHRs) with patient data and PACS images, but they also have the explicit purpose of electronic transfer of orders and the return of results. The electronic request of orders and return of results is expected to result in organisational efficiency gains and time savings. However, potential risks of these systems include increased time spent on computer-related activity and increased infrastructure costs, thereby decreasing overall organisational efficiency.

We found relatively few reviews on CPOE that focused on the ordering of laboratory tests and medical images; most were focused primarily on the ordering of medications. Within the reviews, we found that what had been empirically evaluated generally mirrored the theorised impacts (Tables 6 and 7). The findings from these reviews indicated weak evidence of an impact on organisational efficiency. Individual efficiency and workload varied between providers, with both increases and decreases reported. Additionally, while the speed at which orders were received led to better preparation and a modest effect on time taken to process and deliver results, it did not affect when the patient or their specimen was made available or when their results were acted upon. Findings supported moderate evidence of an impact on practitioner performance. The provision of relevant information at the time of ordering had a moderate impact on increasing cost-conscious ordering and subsequently on decreasing those orders deemed inappropriate; and following system-generated suggestions led to increased ordering of routine care as well as withdrawal of potentially injurious care. There was, however, evidence that the use of CPOE had a negative impact on practitioners because of the increased time needed to complete orders by having to enter them into the computer system, or incompatibility between professional routines and those imposed by the new system. Changes in workflows also posed an opportunity cost for collaboration, and the potential exclusion of certain providers from processes. Additionally, workload could either decrease or increase as a result of changes in workflow, which, when unaccounted for, were dealt with on an ad hoc basis and allowed for the redesignation of responsibilities.

ePrescribing.

ePrescribing refers to clinical information systems that are used by clinicians to enter, modify, review, and output or communicate medication prescriptions. This term thus includes stand-alone CDSSs for prescribing purposes [8]. ePrescribing systems can integrate or interface with EHRs or be an element of a broader CPOE system. Like systems for computerised order entry, those for prescribing also have the explicit purpose of electronic transfer between the prescriber and the pharmacy and are rarely mentioned without decision support functionality [125]. ePrescribing systems should result in benefits similar to those of CPOE systems, including improvements in organisational efficiency and practitioner performance in relation to prescribing. Furthermore, the direct relationship between the therapeutic nature of prescribing of medications and patient outcomes suggests that better prescribing should lead to improved patient outcomes. Finally, as the prescribing of medications is a potentially larger contributor to risks to patient safety than the ordering of laboratory tests or radiology images, there is greater scope for improvements in patient safety by reducing errors in the prescribing process. Conversely, a flawed or cumbersome system design (e.g., suboptimal specificity and/or sensitivity) and deployment strategies (e.g., insufficient training) may contribute to errors in prescribing and lead to workarounds, putting patients at risk and resulting in clinician dissatisfaction. Prescribers can also become over-reliant on decision support or overestimate its functionality, resulting in decreased practitioner performance.

ePrescribing was the most commonly studied intervention amongst the included reviews. Consequently, we found multiple papers covering most of the theorised impacts (Tables 8 and 9). Moderate evidence for improved organisational efficiency was indicated by the increased productivity of pharmacists, decreased turnaround time, and more accurate communication between prescribers and pharmacy. However, communications between pharmacists and prescribers, although standardised, were less information rich. Weak-to-moderate evidence was indicated for improved practitioner performance, due for the most part to increased ordering of corollary care and fewer medication errors, with more optimal prescribing to some extent translating into improved surrogate patient outcomes. There was, however, far less evidence for improvements in patient-level outcomes, as even in the case of medication errors it was unclear what proportion of these actually resulted in patient harm. There was evidence of disruptions in workflow, opportunity costs for collaboration, introduction of risks to patient safety due to “alert fatigue,” and workarounds resulting from suboptimal deployment strategies; there was also some evidence of erroneous assumptions regarding the availability of decision support functionality.

Table 8. Evidence of benefits associated with ePrescribing.

https://doi.org/10.1371/journal.pmed.1000387.t008

Computerised decision support systems.

CDSSs are, when used in the context of eHealth technologies, clinical information systems that integrate clinical and demographic patient information to provide support for decision making by clinicians [8]. These systems have highly variable levels of sophistication and configurability with regards to inputs (patient-specific data), knowledge bases, inference mechanisms (logic), and outputs. They issue certain alerts or prompts, which can take either an active (requiring the user to act on them) or passive (popping up without requiring the user to act on them) form. These decision support systems can be integrated or interface with other systems (such as those discussed above), or simply be stand alone.

In principle, the fundamental impact of CDSSs should be improved clinical decision making. This improvement should, in turn, lead to improved practitioner performance in a variety of care activities (e.g., provision of preventive care, diagnosis, disease management) and ways in which these care activities are delivered (e.g., more evidence-based or guideline-adherent decisions). These systems should also be able to help address disparities in care by facilitating standardisation, especially when part of an EHR, PACS, CPOE, or ePrescribing system. Improved practitioner performance should result in a variety of beneficial impacts depending on the care activity targeted (e.g., increased immunisation rates, reduced resource utilisation, more timely diagnosis, better disease control). In addition, if practitioners' performance is directly related to patient outcomes, then these too should improve. The main theorised risks relating to the use of CDSSs include a potential decline in practitioner performance due to deskilling or flawed system design, and related threats to patient safety.

Only weak evidence supported actual improvements in practitioner performance, as opposed to behaviour change in general (Tables 10 and 11). While most findings demonstrated some degree of behaviour change, this did not always translate into the provision of higher quality care. Although some subgroups seemed to fare better than others, the evidence was still only modest at best. The most notable findings were marked by relative consistency across studies and thus provided moderate evidence. These included increased provision of preventive care measures, disease-specific examinations or measurements, corollary orders to monitor side effects, and the decreased use of unnecessary or redundant care. Efforts at influencing practitioners to change practice patterns to adhere to a certain model of care were, however, less successful. No evidence was indicated for an impact on patient outcomes outside prescribing; while surrogate outcomes were modestly improved in some cases, findings were inconsistent across studies.

Discussion

Our systematic review of systematic reviews on the impact of eHealth has demonstrated that many of the clinical claims made about the most commonly deployed eHealth technologies cannot be substantiated by the empirical evidence. Overall, the evidence base in support of these technologies is weak and inconsistent, which highlights the need for more considered claims, particularly in relation to the patient-level benefits associated with these technologies. Also of note is that we found virtually no evidence in support of the cost-effectiveness claims (Tables 2–11) that are frequently being made by policy makers when constructing business cases to raise funding for the large-scale eHealth deployments that are now taking place in many parts of the world [1].

This work is characterised by a number of strengths and limitations, which need to be considered when interpreting our findings. Strengths include the multifaceted approach to the identification of systematic reviews and the synthesis of this body of evidence. Juxtaposing the conceptual maps of the fields of quality, safety, and eHealth permitted us to produce a comprehensive framework for assessing the impact of these technologies in an otherwise poorly ordered discipline. In addition, reflecting on methodological considerations and socio-technical factors enabled us to produce an overview that is sensitive to the intricacies of the discipline.

Given the poor indexing of this literature and the fact that our searches were centred on English-language databases, there is the possibility that we may have missed some systematic reviews. Our use of a novel, multimethod approach may be criticised as being less rigorous than a conventional systematic review in that we were not in a position to appraise individual primary studies. These more novel methods of synthesis are less well developed and employed, and therefore less evaluated [126]. The fact that we needed to adapt the instrument used for critical appraisal is another potential limitation. Further, our assumptions about the expected theoretical benefits presume that the eHealth technologies considered are capable of delivering these benefits and are used in a manner that allows them to do so. Likewise, it could be argued that some of the expected benefits outlined in this overview are assured and perhaps do not therefore require formal evaluation. It is our view, based on the prevailing climate surrounding EHRs and the large-scale implementations underway globally, that the claims made about these technologies should be subjected to critical review in the light of the empirical evidence. The overlap in reviews and inconsistent use of terminology required us to make judgment calls regarding which reviews, and indeed which included primary studies, pertained to which interventions. Our focus on clinician-orientated information systems used in predominantly economically developed country settings is a further limitation. More patient-oriented technologies such as telehealth care are no less important than those oriented towards professionals. We are currently engaged in follow-on work, which broadens our field of enquiry along these lines [127]–[131]. Finally, our synthesis was limited by critical deficits within the literature, which undermined our efforts to generate a fully reproducible quantitative summary of findings [132].

At the most elementary level, the literature that constitutes the evidence base is poorly referenced within bibliographic databases, reflecting the nonstandard usage of terminology and lack of consensus on a taxonomy relating to eHealth technologies [133]–[135]. There were, furthermore, varying degrees of overlap between individual reviews and contradictory findings even amongst reviews of the same primary studies. In addition, we found considerable heterogeneity in the ways in which findings and other aspects relating to the fundamental features of reviews (motivation, objectives, methods, presentation of findings, etc.) from individual papers were presented. This imprecision and nonstandard usage of terminology, as well as the poor quality of reviews, posed additional challenges, both with respect to interpretation of findings from individual reviews and in relation to synthesising the overall body of evidence.

Our greatest cause for concern was the weakness of the evidence base itself. A strong evidence base is characterised by quantity, quality, and consistency. Unfortunately, we found that the eHealth evidence base falls short in all of these respects. In addition, relative to the number of eHealth implementations that have taken place, the number of evaluations is comparatively small. Apart from several barriers and challenges that impede the evaluation of eHealth interventions per se [136]–[141], a number of factors might contribute to evaluative findings going unpublished [142]. Conflicts of interest can, in particular, make it difficult to publish negative findings [142], which means that the potential for publication bias should not be underestimated in this discipline [102],[143]. Moreover, published primary research has been repeatedly found to be of poor quality – particularly with regard to outcome measurement and analysis [73],[74],[80],[86],[118]. The highly heterogeneous and complex nature of these interventions makes consistency of findings, even across very similar scenarios, difficult to detect. Our critical appraisal exercise found the same to be true for secondary research. How the included reviews fared in our critical appraisal merits further comment and will be the subject of a further publication.

Another commonly criticised element of the existing evidence base is its utility [144]. Evaluations have to date largely favoured simplistic approaches, which have provided little insight into why a particular outcome has occurred [145]. Understanding the underlying mechanisms, typically by studying the particular context of the evaluation, is critical for drawing conclusions in relation to causal pathways and effectiveness of eHealth interventions [146]. In addition, evaluations have tended to focus on the benefits with little attention to the risks and costs, which are rarely assessed or rigorously appraised [73],[74],[80],[86],[118]. Consequently, the existing evidence base is often of little utility to decision making in relation to the strategic direction of implementation efforts [144].

A handful of high-profile primary studies demonstrating the greatest evidence of benefit often serve as exemplars of the transformative power of clinical information systems [22]. These often include advanced multifunctional clinical information systems incorporating storage, retrieval, management, decision support, order and results communication, and viewing functionality. Evidence of the beneficial impact of such systems is, however, limited to a few academic clinical centres of excellence where the systems were developed in house, underwent extensive evaluation with continual improvement, and were supported by a strong sense of local ownership amongst their clinical users [31],[56]. The contrast between the success of these systems and the relative failure of much of the wider body of evidence is striking. Clearly, there are important lessons to be learned from these centres of excellence, but the extent to which the results of these primary studies can be generalised beyond their local environment to those institutions procuring “off-the-shelf” systems is questionable. It is encouraging, however, to see evaluations of commercial systems increasingly taking place [55]. A range of factors tend to contribute to the lack of successful implementations of these off-the-shelf systems. In particular, these commercial systems typically have assumptions about work practices embedded within them, which are often not easily transferable to different contexts of use. Additionally, it is not unusual for insufficient time and effort to be devoted to the all-important customisation process [147]. NHS Connecting for Health's difficulties with the implementation of EHRs into hospitals in England are a prime example of the challenges that can ensue if such socio-technical factors are given insufficient attention [148].

Keeping in mind the above, the maturation of evaluation is vital to the success of eHealth [149],[150]. There is some indication that the quality of evaluations is beginning to improve with regard to methodological rigour [74], but there is clearly still considerable scope for improvement [118]. Most of the reviews we included in our work made calls for more rigorous research to establish impact, with some calling for more randomised controlled trials (RCTs) in particular [61],[151]. A growing number of authors have, however, argued for trials of eHealth interventions to employ guidance developed specifically for complex interventions [152]. However, there are a number of challenges to conducting RCTs of eHealth [153], and many calls have also been made for using other complementary methodologies [24],[146]. Strategies for improving the quality of research should include building the capacity and competency of researchers. In the shorter term, developing resources, tool-kits, frameworks, and the like for researchers and consumers of research should be prioritised [154]–[156]. Such developments are pivotal to furthering the science of evaluation in eHealth and the use of evidence-based principles in health informatics [157]. Another important development that is needed is collaboration between different disciplines in evaluation [158],[159].

We found an important literature pertaining to the design and deployment aspects of eHealth technologies. This literature is central to understanding why some interventions succeed and others fail (or are judged as such). At the individual level, “human factors” play an important role in the design of an intervention, determining usability and ultimately adoption [160]. At the aggregate level, “organisational issues” are critical in strategising deployment, which ultimately influences adoption [160]. Although enablers of and barriers to success in design, development, and deployment have been elicited retrospectively from the literature, these inter-related findings have largely gone untested prospectively. Although greater attention is being paid to socio-technical aspects in formal evaluations than ever before, there is still much that needs to be understood [161].

Conclusions

It is clear that there is now a large volume of work studying the impact of eHealth on the quality and safety of health care. This might be seen as setting a firm foundation for realising the potential benefits of eHealth. However, although seminal reports on quality and safety of health care invariably point to eHealth as one of the main vehicles for driving forward sweeping improvements [2]–[7], our work indicates that realising these benefits is not guaranteed and, if it is to be achieved, will require substantial research resources and effort.

Our major finding from reviewing the literature is that empirical evidence for the beneficial impact of most eHealth technologies is often absent or, at best, only modest. While absence of evidence does not equate with evidence of ineffectiveness, reports of negative consequences indicate that evaluation of risks – anticipated or otherwise – is essential. Clinical informatics should be no less concerned with safety and efficacy than the pharmaceutical industry. Given this, there is a pressing need for further evaluations before substantial sums of money are committed to large-scale national deployments under the auspices of improving health care quality and/or safety.

Promising technologies, unless properly evaluated with results fed back into development, might not “mature” to the extent that is needed to realise their potential when deployed in everyday clinical settings. The paradox is that while the number of eHealth technologies in health care is growing, we still have insufficient understanding of how and why such interventions do or do not work [123]. To resolve this, it is essential to not only devote more effort to evaluation, but to ensure that the methodology adopted is multidisciplinary and thus capable of untangling the often complex web of factors that may influence the results. Moreover, a fuller description of the rationale for the choice of methodological approach employed to evaluate eHealth technologies in health care would facilitate synthesis and comparison.

Finally, it is equally important that deployments already commissioned are subject to rigorous, multidisciplinary, and independent evaluations. In particular, we should take every opportunity to learn from the largest eHealth commissioning and deployment project in health care in the world – the £12.8 billion NPfIT – and the at least equally ambitious national programme that has recently begun in the US [162]–[166]. These and similar initiatives being pursued in other parts of the world offer an unparalleled opportunity not just for improving health care systems, but also for learning how to (or how not to) implement eHealth systems and for refining these further once introduced.

Supporting Information

Table S2.

Characteristics and main findings of “reviews.”

https://doi.org/10.1371/journal.pmed.1000387.s002

(0.42 MB DOC)

Text S1.

Search strategy (databases, string, and filters).

https://doi.org/10.1371/journal.pmed.1000387.s003

(0.05 MB DOC)

Text S2.

Intervention inclusion and exclusion criteria.

https://doi.org/10.1371/journal.pmed.1000387.s004

(0.03 MB DOC)

Acknowledgments

We are grateful to the Independent Project Steering Committee comprising Denis Protti (chair), David Bates, Richard Lilford, Maureen Baker, Antony Chuter, and Jo Foster for their valuable guidance and support. Our many thanks to Ulugbek Nurmatov for his work in quality assessment as well as to Ann Hansen for her work in running the searches. This work draws on a report published by the NHS Connecting for Health Evaluation programme, the full text of which is available from: http://www.pcpoh.bham.ac.uk/publichealth/cfhep/documents/NHS_CFHEP_001_Final_Report.pdf

Author Contributions

ICMJE criteria for authorship read and met: ADB JC CP CA KC TB BM RP AM AS. Agree with the manuscript's results and conclusions: ADB JC CP CA KC TB BM RP AM AS. Designed the experiments/the study: JC CP RP AM AS. Analyzed the data: ADB JC CA BM. Collected data/did experiments for the study: ADB JC CP CA TB. Wrote the first draft of the paper: ADB. Contributed to the writing of the paper: JC CP CA KC TB BM RP AM AS. Coapplicant on the grant that enabled this research to proceed; authored or coauthored key sections of the report (e.g., developed the conceptual maps, building on a previous analysis for NHS SDO): CP.

References

  1. 1. Catwell L, Sheikh A (2009) Evaluating eHealth interventions. PLoS Med 6: e1000126.
  2. 2. Department of Health, Chairman, (2000) An organisation with a memory: report of an expert group of learning from adverse events in the NHS. London: The Stationary Office.
  3. 3. Department of Health, Chief Pharmaceutical Officer (2004) Building a safer NHS for patients: improving medication safety. London: The Stationary Office.
  4. 4. Institute of Medicine (2000) To err is human: building a safer health system. Washington (D.C.): National Academy Press.
  5. 5. Institute of Medicine, Committee on Quality Health Care in America (2001) Crossing the quality chasm: a new health system for the 21st century. Washington (D.C.): National Academy Press.
  6. 6. Institute of Medicine (2003) Patient safety: achieving a new standard for care. Washington (D.C.): National Academy Press.
  7. 7. Institute of Medicine (2007) Preventing medication errors. Washington (D.C.): National Academy Press.
  8. 8. Car J, Black A, Anandan C, Cresswell K, Pagliari C, et al. (2008) The impact of eHealth on the quality and safety of healthcare. Available: http://www.haps.bham.ac.uk/publichealth/cfhep/001.shtml. Accessed 3 December 2010.
  9. 9. Bravata DM, McDonald KM, Shojania KG, Sundaram V, Owens DK (2005) Challenges in systematic reviews: synthesis of topics related to the delivery, organization, and financing of health care. Ann Intern Med 142: 1056–1065.
  10. 10. Higgins JPT, Green S (2009) Cochrane handbook for systematic reviews of interventions 5.0.2. The Cochrane Library. Available: http://www.cochrane-handbook.org. Accessed 3 December 2010.
  11. 11. Popay P, Roberts H, Sowden A, Petticrew M, Arai L, Rodgers M (2006) Guidance on the conduct of narrative synthesis in systematic reviews. A product from the ESRC Methods Programme. Lancaster: Institute of Health Research.
  12. 12. Pagliari C, Sloan D, Gregor P, Sullivan F, Kahan JP, et al. (2004) EH1 E-Health scoping exercise. Review of wider Web-based information sources. Dundee: University of Dundee.
  13. 13. Pagliari C, Sloan D, Gregor P, Sullivan F, Kahan JP, et al. (2004) EH1 E-Health scoping exercise. Review of the traditional research literature. Dundee: University of Dundee.
  14. 14. Critical Appraisal Skills Programme. Available: http://www.phru.nhs.uk/doc_links/s.reviews%20appraisal%20tool.pdf. Accessed 6 July 2010.
  15. 15. Ammenwerth E, Schnell-Inderst P, Machan C, Siebert U (2008) The effect of electronic prescribing on medication errors and adverse drug events: a systematic review. J Am Med Inform Assoc 15: 585–600.
  16. 16. Anderson D, Flynn K (1997) Picture archiving and communication systems: a systematic review of published studies of diagnostic accuracy, radiology work processes, outcomes of care, and cost. Available: http://www.research.va.gov/resources/pubs/docs/pacs.pdf Last accessed 3 December 2010.
  17. 17. Balas EA, Krishna S, Kretschmer RA, Cheek TR, Lobach DF, et al. (2004) Computerized knowledge management in diabetes care. Med Care 42: 610–621.
  18. 18. Bennett JW, Glasziou PP (2003) Computerised reminders and feedback in medication management: a systematic review of randomised controlled trials. Med J Aust 178: 217–222.
  19. 19. Bryan C, Boren SA (2008) The use and effectiveness of electronic clinical decision support tools in the ambulatory/primary care setting: a systematic review of the literature. Inform Prim Care 16: 79–91.
  20. 20. Charvet-Protat S, Thoral F (1998) Economic and organizational evaluation of an imaging network (PACS). J Radiol 79: 1453–1459.
  21. 21. Chatellier G, Colombet I, Degoulet P (1998) An overview of the effect of computer-assisted management of anticoagulant therapy on the quality of anticoagulation. Int J Med Inform 49: 311–320.
  22. 22. Chaudhry B, Wang J, Wu S, Maglione M, Mojica W, et al. (2006) Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med 144: 742–752.
  23. 23. Clamp S, Keen S (2005) The value of electronic health records. Leeds: Yorkshire Centre for Health Informatics, University of Leeds.
  24. 24. Delpierre C, Cuzin L, Fillaux J, Alvarez M, Massip P, et al. (2004) A systematic review of computer-based patient record systems and quality of care: more randomized clinical trials or a broader approach? Int J Qual Health Care 16: 407–416.
  25. 25. Dexheimer JW, Talbot TR, Sanders DL, Rosenbloom ST, Aronsky D (2008) Prompting clinicians about preventive care measures: a systematic review of randomized controlled trials. J Am Med Inform Assoc 15: 311–320.
  26. 26. Durieux P, Trinquart L, Colombet I, Nies J, Walton R, et al. (2008) Computerized advice on drug dosage to improve prescribing practice. Cochrane Database Syst Rev CD002894.
  27. 27. Eslami S, Abu-Hanna A, de Jonge E, de Keizer NF (2009) Tight glycemic control and computerized decision-support systems: a systematic review. Intensive Care Med 35: 1505–1517.
  28. 28. Eslami S, Abu-Hanna A, de Keizer NF (2007) Evaluation of outpatient computerized physician medication order entry systems: a systematic review. J Am Med Inform Assoc 14: 400–406.
  29. 29. Eslami S, Keizer NF, Abu-Hanna A (2008) The impact of computerized physician medication order entry in hospitalized patients-A systematic review. Int J Med Inform 77: 365–376.
  30. 30. Fitzmaurice DA, Hobbs FD, Delaney BC, Wilson S, McManus R (1998) Review of computerized decision support systems for oral anticoagulation management. Br J Haematol 102: 907–909.
  31. 31. Garg AX, Adhikari NK, McDonald H, Rosas-Arellano MP, Devereaux PJ, et al. (2005) Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA 293: 1223–1238.
  32. 32. Georgiou A, Williamson M, Westbrook JI, Ray S (2007) The impact of computerised physician order entry systems on pathology services: a systematic review. Int J Med Inform 76: 514–529.
  33. 33. Hayward GL, Parnes AJ, Simon SR (2009) Using health information technology to improve drug monitoring: a systematic review. Pharmacoepidemiol Drug Saf 18: 1232–1237.
34. Hender K (2000) How effective are computer assisted decision support systems (CADSS) in improving clinical outcomes of patients? Available: cce@southernhealth.org.au. Accessed 3 December 2010.
35. Heselmans A, Van Der Meijden MJ, Donceel P, Aertgeerts B, Ramaekers D (2009) Effectiveness of electronic guideline-based implementation systems in ambulatory care settings - a systematic review. Implement Sci 4: 82.
36. Hider P (2002) Electronic prescribing: a critical appraisal of the literature. Available: http://nzhta.chmeds.ac.nz/publications/elpresc.pdf. Accessed 3 December 2010.
37. Irani JS, Middleton JL, Marfatia R, Omana ET, D'Amico F (2009) The use of electronic health records in the exam room and patient satisfaction: a systematic review. J Am Board Fam Med 22: 553–562.
38. Jamal A, McKenzie K, Clark M (2009) The impact of health information technology on the quality of medical and health care: a systematic review. HIM J 38: 26–37.
39. Jerant AF, Hill DB (2000) Does the use of electronic medical records improve surrogate patient outcomes in outpatient settings? J Fam Pract 49: 349–357.
40. Kaushal R, Shojania KG, Bates DW (2003) Effects of computerized physician order entry and clinical decision support systems on medication safety: a systematic review. Arch Intern Med 163: 1409–1416.
41. Mador RL, Shaw NT (2009) The impact of a Critical Care Information System (CCIS) on time spent charting and in direct patient care by staff in the ICU: a review of the literature. Int J Med Inform 78: 435–445.
42. Mitchell E, Sullivan F (2001) A descriptive feast but an evaluative famine: systematic review of published articles on primary care computing during 1980-97. BMJ 322: 279–282.
43. Montgomery AA, Fahey T (1998) A systematic review of the use of computers in the management of hypertension. J Epidemiol Community Health 52: 520–525.
44. Niazkhani Z, Pirnejad H, Berg M, Aarts J (2009) The impact of computerized provider order entry systems on inpatient clinical workflow: a literature review. J Am Med Inform Assoc 16: 539–549.
45. Oren E, Shaffer ER, Guglielmo BJ (2003) Impact of emerging technologies on medication errors and adverse drug events. Am J Health Syst Pharm 60: 1447–1458.
46. Pearson SA, Moxey A, Robertson J, Hains I, Williamson M, et al. (2009) Do computerised clinical decision support systems for prescribing change practice? A systematic review of the literature (1990-2007). BMC Health Serv Res 9: 154.
47. Poissant L, Pereira J, Tamblyn R, Kawasumi Y (2005) The impact of electronic health records on time efficiency of physicians and nurses: a systematic review. J Am Med Inform Assoc 12: 505–516.
48. Randell R, Mitchell N, Dowding D, Cullum N, Thompson C (2007) Effects of computerized decision support systems on nursing performance and patient outcomes: a systematic review. J Health Serv Res Policy 12: 242–249.
49. Reckmann MH, Westbrook JI, Koh Y, Lo C, Day RO (2009) Does computerized provider order entry reduce prescribing errors for hospital inpatients? A systematic review. J Am Med Inform Assoc 16: 613–623.
50. Rothschild J (2004) Computerized physician order entry in the critical care and general inpatient setting: a narrative review. J Crit Care 4: 271–278.
51. Schedlbauer A, Prasad V, Mulvaney C, Phansalkar S, Stanton W, et al. (2009) What evidence supports the use of computerized alerts and prompts to improve clinicians' prescribing behavior? J Am Med Inform Assoc 16: 531–538.
52. Shachak A, Reis S (2009) The impact of electronic medical records on patient-doctor communication during consultation: a narrative literature review. J Eval Clin Pract 15: 641–649.
53. Shamliyan TA, Duval S, Du J, Kane RL (2008) Just what the doctor ordered. Review of the evidence of the impact of computerized physician order entry system on medication errors. Health Serv Res 43: 32–53.
54. Shebl NA, Franklin BD, Barber N (2007) Clinical decision support systems and antibiotic use. Pharm World Sci 29: 342–349.
55. Shekelle PG, Goldzweig CL (2009) Costs and benefits of health information technology: an updated systematic review. Available: http://www.health.org.uk/public/cms/75/76/313/564/Costs%20and%20benefits%20of%20health%20information%20technology.pdf?realName=urByVX.pdf. Accessed 3 December 2010.
56. Shekelle PG, Morton SC, Keeler EB (2006) Costs and benefits of health information technology. Available: http://www.ahrq.gov/downloads/pub/evidence/pdf/hitsyscosts/hitsys.pdf. Accessed 3 December 2010.
57. Shiffman RN, Liaw Y, Brandt CA, Corb GJ (1999) Computer-based guideline implementation systems: a systematic review of functionality and effectiveness. J Am Med Inform Assoc 6: 104–114.
58. Shojania KG, Jennings A, Mayhew A, Ramsay CR, Eccles MP, Grimshaw J (2009) The effects of on-screen, point of care computer reminders on processes and outcomes of care. Cochrane Database Syst Rev CD001096.
59. Sintchenko V, Magrabi F, Tipper S (2007) Are we measuring the right end-points? Variables that affect the impact of computerised decision support on patient outcomes: a systematic review. Med Inform Internet Med 32: 225–240.
60. Smith MY, Depue JD, Rini C (2007) Computerized decision-support systems for chronic pain management in primary care. Pain Medicine S3: S155–S166.
61. Tan K, Dear PR, Newell SJ (2005) Clinical decision support systems for neonatal care. Cochrane Database Syst Rev CD004211.
62. Thompson D, Johnston P, Spurr C (2009) The impact of electronic medical records on nursing efficiency. J Nurs Adm 39: 444–451.
63. Uslu AM, Stausberg J (2008) Value of the electronic patient record: an analysis of the literature. J Biomed Inform 41: 675–682.
64. van Rosse F, Maat B, Rademaker CM, van Vught AJ, Egberts AC, et al. (2009) The effect of computerized physician order entry on medication prescription errors and clinical outcome in pediatric and intensive care: a systematic review. Pediatrics 123: 1184–1190.
65. Wolfstadt JI, Gurwitz JH, Field TS, Lee M, Kalkar S, et al. (2008) The effect of computerized physician order entry with clinical decision support on the rates of adverse drug events: a systematic review. J Gen Intern Med 23: 451–458.
66. Wong K, Yu SKh, Holbrook A (2010) A systematic review of medication safety outcomes related to drug interaction software. J Popul Ther Clin Pharmacol 17: e243–255.
67. Yourman L, Concato J, Agostini JV (2008) Use of computer decision support interventions to improve medication prescribing in older adults: a systematic review. Am J Geriatr Pharmacother 6: 119–129.
68. Alexander G, Staggers N (2009) A systematic review of the designs of clinical technology: findings and recommendations for future research. ANS Adv Nurs Sci 32: 252–279.
69. Berlin A, Sorani M, Sim I (2006) A taxonomic description of computer-based clinical decision support systems. J Biomed Inform 39: 656–667.
70. Campion TR Jr, Waitman LR, May AK, Ozdas A, Lorenzi NM, et al. (2010) Social, organizational, and contextual characteristics of clinical decision support systems for intensive insulin therapy: a literature review and case study. Int J Med Inform 79: 31–43.
71. Carvalho CJ, Borycki EM, Kushniruk A (2009) Ensuring the safety of health information systems: using heuristics for patient safety. Healthc Q 12: 49–54.
72. Chan KS, Fowles JB, Weiner JP (2010) Electronic health records and reliability and validity of quality measures: a review of the literature. Med Care Res Rev 67: 503–527.
73. Chuang JH, Hripcsak G, Jenders RA (2000) Considering clustering: a methodological review of clinical decision support system studies. Proc AMIA Symp 146–150.
74. de Keizer NF, Ammenwerth E (2008) The quality of evidence in health informatics: how did the quality of healthcare IT evaluation publications develop from 1982 to 2005? Int J Med Inform 77: 41–49.
75. Damiani G, Pinnarelli L, Colosimo SC, Almiento R, Sicuro L, et al. (2010) The effectiveness of computerized clinical guidelines in the process of care: a systematic review. BMC Health Serv Res 10: 1472–6963.
76. Dorr D, Bonner LM, Cohen AN, Shoai RS, Perrin R, et al. (2007) Informatics systems to promote improved care for chronic illness: a literature review. J Am Med Inform Assoc 14: 156–163.
77. Eisenstein EL, Ortiz M, Anstrom KJ, Crosslin DR, Lobach DF (2006) Assessing the quality of medical information technology economic evaluations: room for improvement. AMIA Annu Symp Proc 234–238.
78. Fitzpatrick LA, Melnikas AJ, Weathers M, Kachnowski SW (2008) Understanding communication capacity. Communication patterns and ICT usage in clinical settings. HIM J 22: 34–41.
79. Ford EW, Menachemi N, Peterson LT, Huerta TR (2009) Resistance is futile: but it is slowing the pace of EHR adoption nonetheless. J Am Med Inform Assoc 16: 274–281.
80. Friedman CP, Abbas UL (2003) Is medical informatics a mature science? A review of measurement practice in outcome studies of clinical systems. Int J Med Inform 69: 261–272.
81. Gagnon MP, Legare F, Labrecque M, Fremont P, Pluye P, et al. (2009) Interventions for promoting information and communication technologies adoption in healthcare professionals. Cochrane Database Syst Rev CD006093.
82. Greenhalgh T, Potts HW, Wong G, Bark P, Swinglehurst D (2009) Tensions and paradoxes in electronic patient record research: a systematic literature review using the meta-narrative method. Milbank Q 87: 729–788.
83. Gruber D, Cummings GG, LeBlanc L, Smith DL (2009) Factors influencing outcomes of clinical information systems implementation: a systematic review. Comput Inform Nurs 27: 151–163.
84. Gurses AP, Xiao Y (2006) A systematic review of the literature on multidisciplinary rounds to design information technology. J Am Med Inform Assoc 13: 267–276.
85. Handler SM, Altman RL, Perera S, Hanlon JT, Studenski SA, et al. (2007) A systematic review of the performance characteristics of clinical event monitor signals used to detect adverse drug events in the hospital setting. J Am Med Inform Assoc 14: 451–458.
86. Harris AD, McGregor JC, Perencevich EN, Furuno JP, Zhu J, et al. (2006) The use and interpretation of quasi-experimental studies in medical informatics. J Am Med Inform Assoc 13: 16–23.
87. Hart MD (2008) Informatics competency and development within the US nursing population workforce: a systematic literature review. Comput Inform Nurs 26: 320–329.
88. Hayrinen K, Saranto K, Nykanen P (2008) Definition, structure, content, use and impacts of electronic health records: a review of the research literature. Int J Med Inform 77: 291–304.
89. Hoerbst A, Ammenwerth E (2010) Electronic health records. A systematic review of quality requirements. Methods Inf Med 49: 320–336.
90. Hogan WR, Wagner MM (1997) Accuracy of data in computer-based patient records. J Am Med Inform Assoc 4: 342–355.
91. Holden RJ, Karsh BT (2010) The technology acceptance model: its past and its future in health care. J Biomed Inform 43: 159–172.
92. Huryk LA (2010) Factors influencing nurses' attitudes towards healthcare information technology. J Nurs Manag 18: 606–612.
93. Johnson KB (2001) Barriers that impede the adoption of pediatric information technology. Arch Pediatr Adolesc Med 155: 1374–1379.
94. Jordan K, Porcheret M, Croft P (2004) Quality of morbidity coding in general practice computerized medical records: a systematic review. Fam Pract 21: 396–412.
95. Kaplan B (2001) Evaluating informatics applications–clinical decision support systems literature review. Int J Med Inform 64: 15–37.
96. Kawamoto K, Houlihan CA, Balas EA, Lobach DF (2005) Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. BMJ 330: 765.
97. Kawamoto K, Lobach DF (2003) Clinical decision support provided within physician order entry systems: a systematic review of features effective for changing clinician behavior. AMIA Annu Symp Proc 361–365.
98. Keshavjee K, Bosomworth J, Copen J, Lai J, Kucukyazici B, et al. (2006) Best practices in EMR implementation: a systematic review. AMIA Annu Symp Proc 982.
99. Khajouei R, Jaspers MW (2008) CPOE system design aspects and their qualitative effect on usability. Stud Health Technol Inform 136: 309–314.
100. Kukafka R, Johnson SB, Linfante A, Allegrante JP (2003) Grounding a new information technology implementation framework in behavioral science: a systematic analysis of the literature on IT use. J Biomed Inform 36: 218–227.
101. Ludwick DA, Doucette J (2009) Adopting electronic medical records in primary care: lessons learned from health information systems implementation experience in seven countries. Int J Med Inform 78: 22–31.
102. Machan C, Ammenwerth E, Bodner T (2006) Publication bias in medical informatics evaluation research: is it an issue or not? Stud Health Technol Inform 124: 957–962.
103. Mair FS, May C, Finch T, Murray E, Anderson G, et al. (2007) Understanding the implementation and integration of e-health services. J Telemed Telecare 13: 36–37.
104. Mollon B, Chong J Jr, Holbrook AM, Sung M, Thabane L, et al. (2009) Features predicting the success of computerized decision support for prescribing: a systematic review of randomized controlled trials. BMC Med Inform Decis Mak 9: 11.
105. Moxey A, Robertson J, Newby D, Hains I, Williamson M, et al. (2010) Computerized clinical decision support for prescribing: provision does not guarantee uptake. J Am Med Inform Assoc 17: 25–33.
106. Nies J, Colombet I, Degoulet P, Durieux P (2006) Determinants of success for computerized clinical decision support systems integrated in CPOE Systems: a systematic review. AMIA Annu Symp Proc 594–598.
107. Pirnejad H, Niazkhani Z, Berg M, Bal R (2008) Intra-organizational communication in healthcare–considerations for standardization and ICT application. Methods Inf Med 47: 336–345.
108. Poe SS (2010) Building nursing intellectual capital for safe use of information technology: a systematic review. J Nurs Care Qual. In press.
109. Rahimi B, Vimarlund V (2007) Methods to evaluate health information systems in healthcare settings: a literature review. J Med Syst 31: 397–432.
110. Rahimi B, Vimarlund V, Timpka T (2009) Health information system implementation: a qualitative meta-analysis. J Med Syst 33: 359–368.
111. Roth CP, Lim YW, Pevnick JM, Asch SM, McGlynn EA (2009) The challenge of measuring quality of care from the electronic health record. Am J Med Qual 24: 385–394.
112. Saboor S, Ammenwerth E (2009) Categorizing communication errors in integrated hospital information systems. Methods Inf Med 48: 203–210.
113. Thiru K, Hassey A, Sullivan F (2003) Systematic review of scope and quality of electronic patient record data in primary care. BMJ 326: 1070.
114. van de Wetering R, Batenburg R (2009) A PACS maturity model: a systematic meta-analytic review on maturation and evolvability of PACS in the hospital enterprise. Int J Med Inform 78: 127–140.
115. van der Meijden MJ, Tange HJ, Troost J, Hasman A (2003) Determinants of success of inpatient clinical information systems: a literature review. J Am Med Inform Assoc 10: 235–243.
116. van der Sijs H, Aarts J, Vulto A, Berg M (2006) Overriding of drug safety alerts in computerized physician order entry. J Am Med Inform Assoc 13: 138–147.
117. Ward R, Stevens C, Brentnall P, Briddon J (2008) The attitudes of health care staff to information technology: a comprehensive review of the research literature. Health Info Libr J 25: 81–97.
118. Weir CR, Staggers N, Phansalkar S (2009) The state of the evidence for computerized provider order entry: a systematic review and analysis of the quality of the literature. Int J Med Inform 78: 365–374.
119. Wen HC, Ho YS, Jian WS, Li HC, Hsu YH (2007) Scientific production of electronic health record research, 1991-2005. Comput Methods Programs Biomed 86: 191–196.
120. Wollersheim D, Sari A, Rahayu W (2009) Archetype-based electronic health records: a literature review and evaluation of their applicability to health data interoperability and access. HIM J 38: 7–17.
121. Yarbrough AK, Smith TB (2007) Technology acceptance among physicians: a new take on TAM. Med Care Res Rev 64: 650–672.
122. Yusof MM, Stergioulas L, Zugic J (2007) Health information systems adoption: findings from a systematic review. Medinfo 12: 262–266.
123. Shepperd S, Lewin S, Straus S, Clarke M, Eccles MP, et al. (2009) Can we systematically review studies that evaluate complex interventions? PLoS Med 6: e1000086.
124. US Institute of Standards & Technology. Available: http://www.itl.nist.gov/div897/docs/EHR.htm. Accessed 3 December 2010.
125. eHealth Initiative (2004) Electronic prescribing: towards maximum value and rapid adoption. Available: http://www.ehealthinitiative.org/uploads/file/eRx%202004%20Report%20Exec%20Summary.pdf. Accessed 3 December 2010.
126. Dixon-Woods M, Agarwal S, Jones D, Young B, Sutton A (2005) Synthesising qualitative and quantitative evidence: a review of possible methods. J Health Serv Res Policy 10: 45–53.
127. McKinstry B, Pinnock H, Sheikh A (2009) Telemedicine for management of patients with COPD? Lancet 374: 672–673.
128. McKinstry B, Hammersley V, Burton C, Pinnock H, Elton R, et al. (2010) The quality, safety and content of telephone and face-to-face consultations: a comparative study. Qual Saf Health Care 19: 298–303.
129. Pinnock H, Hanley J, Lewis S, MacNee W, Pagliari C, et al. (2009) The impact of a telemetric chronic obstructive pulmonary disease monitoring service: randomised controlled trial with economic evaluation and nested qualitative study. Prim Care Respir J 18: 233–235.
130. McKinstry B, Watson P, Pinnock H, Heaney D, Sheikh A (2009) Telephone consulting in primary care: a triangulated qualitative study of patients and providers. Br J Gen Pract 59: e209–218.
131. McKinstry B, Watson P, Pinnock H, Heaney D, Sheikh A (2009) Confidentiality and the telephone in family practice: a qualitative study of the views of patients, clinicians and administrative staff. Fam Pract 26: 344–350.
132. Black A, Car J, Majeed A, Sheikh A (2008) Strategic considerations for improving the quality of eHealth research: we need to improve the quality and capacity of academia to undertake informatics research. Inform Prim Care 16: 175–177.
133. Dixon BE, Zafar A, McGowan JJ (2007) Development of a taxonomy for health information technology. Stud Health Technol Inform 129: 616–620.
134. Pagliari C, Sloan D, Gregor P, Sullivan F, Detmer D, et al. (2005) What is eHealth (4): a scoping exercise to map the field. J Med Internet Res 7: e1.
135. Oh H, Rizo C, Enkin M, Jadad A (2005) What is eHealth (3): a systematic review of published definitions. J Med Internet Res 7: e1.
136. Ahern DK (2007) Challenges and opportunities of eHealth research. Am J Prev Med 32: S75–S82.
137. Bowling MJ, Rimer BK, Lyons EJ, Golin CE, Frydman G, et al. (2006) Methodologic challenges of e-health research. Eval Program Plann 29: 390–396.
138. Heathfield H, Pitty D, Hanka R (1998) Evaluating information technology in health care: barriers and challenges. BMJ 316: 1959–1961.
139. Friedman CF, Haug P (2003) Report on conference track 5: evaluation metrics and outcome. Int J Med Inform 69: 307–309.
140. Ammenwerth E, de Keizer N (2007) A viewpoint on evidence-based health informatics, based on a pilot survey on evaluation studies in health care informatics. J Am Med Inform Assoc 14: 368–371.
141. Ammenwerth E, Schnell-Inderst P, Siebert U (2010) Vision and challenges of Evidence-Based Health Informatics: a case study of a CPOE meta-analysis. Int J Med Inform 79: e83–e88.
142. Ammenwerth E, Graber S, Herrmann G, Burkle T, Konig J (2003) Evaluation of health information systems-problems and challenges. Int J Med Inform 71: 125–135.
143. Friedman CP, Wyatt JC (2001) Publication bias in medical informatics. J Am Med Inform Assoc 8: 189–191.
144. Clamp S, Keen J (2007) Electronic health records: is the evidence base any use? Med Inform Internet Med 32: 5–10.
145. Kaplan B (2001) Evaluating informatics applications–some alternative approaches: theory, social interactionism, and call for methodological pluralism. Int J Med Inform 64: 39–56.
146. Ammenwerth E, Talmon J, Ash JS, Bates DW, Beuscart-Zephir MC, et al. (2006) Impact of CPOE on mortality rates–contradictory findings, important messages. Methods Inf Med 45: 586–593.
147. Pollock N, Williams R, Procter R (2003) Fitting standard software packages to non-standard organizations: the 'biography' of an enterprise-wide system. Tech Anal Strat Manag 15: 317.
148. Robertson AR, Cresswell K, Takian A, Petrakaki D, et al. (2010) Prospective evaluation of the implementation and adoption of NHS Connecting for Health's electronic health record in secondary care in England: interim results from a national evaluation. BMJ 341: c4564.
149. Ammenwerth E, Brender J, Nykanen P, Prokosch HU, Rigby M, et al. (2004) Visions and strategies to improve evaluation of health information systems. Reflections and lessons based on the HIS-EVAL workshop in Innsbruck. Int J Med Inform 73: 479–491.
150. Ammenwerth E, Aarts J, Berghold A, Beuscart-Zephir MC, Brender J, et al. (2006) Declaration of Innsbruck. Results from the European Science Foundation Sponsored Workshop on Systematic Evaluation of Health Information Systems (HIS-EVAL). Methods Inf Med 45 (Suppl 1): 121–123.
151. Tierney WM, Overhage JM, McDonald CJ (1994) A plea for controlled trials in medical informatics. J Am Med Inform Assoc 1: 353–355.
152. Holbrook AM, Thabane L, Shcherbatykh IY, O'Reilly D (2006) E-health interventions as complex interventions: improving the quality of methods of assessment. AMIA Annu Symp Proc 952.
153. Shcherbatykh I, Holbrook A, Thabane L, Dolovich L (2008) Methodologic issues in health informatics trials: the complexities of complex interventions. J Am Med Inform Assoc 15: 575–580.
154. Chuang JH, Hripcsak G, Heitjan DF (2002) Design and analysis of controlled trials in naturally clustered environments: implications for medical informatics. J Am Med Inform Assoc 9: 230–238.
155. Talmon J, Ammenwerth E, Brender J, de Keizer N, Nykänen P, et al. (2010) Statement on reporting of evaluation studies in health informatics (STARE-HI). Available: http://www.imia.org/endorsed/Stare-HI_as_published.pdf. Accessed 3 December 2010.
156. Ammenwerth E, Brender J, Nykanen P, Talmon J, Wessel C (2010) Guidelines for best evaluation practices in health informatics (GEP-HI). Available: http://iig.umit.at/efmi/. Accessed 3 December 2010.
157. Wyatt J (2010) Assessing and improving evidence based health informatics research. Stud Health Technol Inform 151: 435–445.
158. Pagliari C (2007) Design and evaluation in eHealth: challenges and implications for an interdisciplinary field. J Med Internet Res 9: e15.
159. Brender J (2006) Overview of assessment methods. Handbook of evaluation methods for health informatics (first edition). Burlington: Academic Press. pp. 61–72.
160. Greenhalgh T, Robert G, Bate P, Kyriakidou O, Macfarlane F, et al. (2004) How to spread good ideas: a systematic review of the literature on diffusion, dissemination and sustainability of innovations in health service delivery and organisation. Available: http://www.sdo.nihr.ac.uk/files/project/38-final-report.pdf. Accessed 3 December 2010.
161. Goldzweig CL, Towfigh A, Maglione M, Shekelle PG (2009) Costs and benefits of health information technology: new trends from the literature. Health Aff 28: w282–w293.
162. Collin S, Reeves BC, Hendy J, Fulop N, Hutchings A, et al. (2008) Implementation of computerised physician order entry (CPOE) and picture archiving and communication systems (PACS) in the NHS: quantitative before and after study. BMJ 337: a939.
163. Greenhalgh T, Stramer K, Bratan T, Byrne E, Mohammad Y, et al. (2008) Introduction of shared electronic records: multi-site case study using diffusion of innovation theory. BMJ 337: a1786.
164. Eminovic N, Wyatt JC, Tarpey AM, Murray G, Ingrams GJ (2004) First evaluation of the NHS direct online clinical enquiry service: a nurse-led web chat triage service for the public. J Med Internet Res 6: e17.
165. Hendy J, Reeves BC, Fulop N, Hutchings A, Masseria C (2005) Challenges to implementing the national programme for information technology (NPfIT): a qualitative study. BMJ 331: 331–336.
166. Hendy J, Fulop N, Reeves BC, Hutchings A, Collin S (2007) Implementing the NHS information technology programme: qualitative study of progress in acute trusts. BMJ 334: 1360.