
Timeliness of notification systems for infectious diseases: A systematic literature review

  • Corien Swaan ,

    Roles Conceptualization, Formal analysis, Investigation, Methodology, Writing – original draft, Writing – review & editing

    Corien.swaan@rivm.nl

    Affiliation Centre for Infectious Disease Control, National Institute for Public Health and the Environment (RIVM), Bilthoven, the Netherlands

  • Anouk van den Broek,

    Roles Investigation, Resources

    Current address: Medicines Evaluation Board Netherlands, Utrecht, the Netherlands

    Affiliation Centre for Infectious Disease Control, National Institute for Public Health and the Environment (RIVM), Bilthoven, the Netherlands

  • Mirjam Kretzschmar,

    Roles Conceptualization, Writing – review & editing

    Affiliations Centre for Infectious Disease Control, National Institute for Public Health and the Environment (RIVM), Bilthoven, the Netherlands, University Medical Centre Utrecht, Utrecht, the Netherlands

  • Jan Hendrik Richardus

    Roles Conceptualization, Supervision, Writing – review & editing

    Affiliation Department of Public Health, Erasmus MC, University Medical Center Rotterdam, Rotterdam, the Netherlands

Abstract

Introduction

Timely notification of infectious diseases is crucial for prompt response by public health services. Adequate notification systems facilitate timely notification. A systematic literature review was performed to assess outcomes of studies on notification timeliness and to determine which aspects of notification systems are associated with timely notification.

Methodology

Articles reviewing timeliness of notifications, published between 2000 and 2017, were searched in Pubmed and Scopus. Using a standardized notification chain, the timeliness of the reporting system in each article was classified as sufficient (≥ 80% of notifications in time), partly sufficient (≥ 50% and < 80%), or insufficient (< 50%), according to the article’s predefined timeframe, a standardized timeframe for all articles, and a disease specific timeframe. Electronic notification systems were compared with conventional methods (postal mail, fax, telephone, email) and mobile phone reporting.

Results

48 articles were identified. Among the 39 studies with a predefined timeframe, timeliness of notification systems was sufficient in almost one third (11/39, 28%) and insufficient in almost one third (12/39, 31%). Applying the standardized timeframe (45 studies) revealed similar outcomes (13/45, 29% sufficient vs 15/45, 33% insufficient). The disease specific timeframe was not met by any study. Systems involving reporting by laboratories most often complied sufficiently with predefined or standardized timeframes. Outcomes were not related to the use of electronic, conventional or mobile phone reporting systems. Electronic systems were faster in comparative studies (10/13), but this rarely resulted in sufficient timeliness according to either predefined or standardized timeframes.

Conclusion

A minority of notification systems meets predefined, standardized or disease specific timeframes. Systems including laboratory reporting are associated with timely notification. Electronic systems reduce reporting delay, but implementation requires considerable effort to comply with notification timeframes. During outbreak threats, patient, doctor’s and laboratory testing delays need to be reduced to achieve timely detection and notification. Public health authorities should incorporate procedures for this in their preparedness plans.

Introduction

Monitoring infectious diseases is essential for detecting outbreaks that demand a public health response and control measures. Efficient and reliable surveillance and notification systems are therefore vital for monitoring public health trends and for early detection of disease outbreaks [1]. Timeliness is an important indicator in the evaluation of surveillance systems and is defined as ‘reflecting the speed between steps in a public health surveillance system’ [2].

Public health response relies, among other things, on notification of infectious diseases; a notifiable disease is a disease that is reportable either by law or by regulation [3]. Notification is the result of a chain of events from infection until report at the public health services, whether local, regional or national [4]. Fig 1 illustrates the reporting timeline of infectious diseases. Delays in this chain are disease specific and the result of 1) patient delay, i.e. the time elapsed from onset of disease until consultation of a physician (DOC), 2) doctor’s delay, i.e. the time elapsed between consultation and ordering a laboratory confirmation test (DCL), and 3) laboratory delay, i.e. the time elapsed until the confirmation test result, depending on the duration and frequency of testing (DLX). Lastly, there is a notification delay, from either laboratory or physician to the local health department (D3X and D3P, respectively), and a reporting delay to regional and/or national health institutes (D4 and D5, respectively). Most countries have established legal obligations for physicians and diagnosing laboratories to notify certain infectious diseases to public health authorities within a designated timeframe, to ensure timely response and to comply with international regulations [5, 6].

Fig 1. Notification timeline.

D: delay; T: time point; D1: delay between onset of disease and notification at the local health department (LHD); D2: delay between ordering a laboratory confirmation test and notification at the LHD; D3X and D3P: delays between the laboratory confirmation test result and notification at the LHD by the laboratory and by the physician, respectively; D4: delay between notification at the LHD and reporting at the regional health department (RHD); D5: delay between reporting at the RHD and the national health department (NHD). Arrows: delays used in this article.

https://doi.org/10.1371/journal.pone.0198845.g001
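
To make the arithmetic of this chain concrete, the short Python sketch below models the successive delays of Fig 1 and derives D1 as their sum, under the simplifying assumption that the steps occur strictly in sequence. It is illustrative only; the class name and example values are ours, not taken from the reviewed studies.

```python
from dataclasses import dataclass

@dataclass
class NotificationDelays:
    """Delays (in days) along the notification chain of Fig 1.

    Attribute names follow the article's labels; the values used in the
    example below are hypothetical.
    """
    patient: float       # DOC: onset of disease -> physician consultation
    doctor: float        # DCL: consultation -> laboratory confirmation test ordered
    laboratory: float    # DLX: test ordered -> confirmed test result
    notification: float  # D3:  test result -> report at the local health department

    def d1(self) -> float:
        """D1: onset of disease -> notification at the local health department,
        assuming the steps occur strictly in sequence."""
        return self.patient + self.doctor + self.laboratory + self.notification

# Hypothetical case: consultation after 5 days, test ordered after 1 day,
# confirmed after 2 days, reported the next day -> D1 = 9 days.
case = NotificationDelays(patient=5, doctor=1, laboratory=2, notification=1)
print(case.d1())  # 9
```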

Notification systems traditionally involved conventional methods using postal mail, telephone, fax and/or electronic mail. Over the last two decades, electronic software systems for laboratory test recording and patient file records have facilitated the development of electronic reporting systems, such as electronic laboratory reporting (ELR) and automated ELR [7]. These electronic software systems have improved timeliness of notification to public health services, both at the local level and at the regional or national level [8–13]. Nowadays, interoperable, interconnected, electronic real-time reporting systems have become the standard, and are included as an indicator for real-time surveillance in the 2016 Joint External Evaluation (JEE) Tool of the WHO [14]. These systems, however, are costly, and evaluations of surveillance systems reveal that even electronic reporting systems do not always meet the designated (‘predefined’) notification timeframes [15, 16].

International reviews of which factors of notification systems influence the timeliness of infectious disease reporting are lacking. In this study, a systematic review of peer-reviewed literature was performed to assess the timeliness of notification systems. In order to determine factors associated with timely notification, we compared timeliness of notification systems in three ways: firstly using the ‘predefined timeframe’, i.e. the timeliness criteria designated by the study itself; secondly using a ‘standardized timeframe’, i.e. identical timeliness criteria for all studies, designated for this review; and thirdly using ‘disease specific timeframes’, i.e. timeliness criteria differentiated between specific diseases.

Methodology

Search strategy and selection criteria

A systematic review was conducted using the PRISMA framework (Preferred Reporting Items for Systematic Reviews and Meta-analyses). Articles reviewing the timeliness of infectious disease notification systems, published between January 1st 2000 and January 1st 2017, were included. Earlier articles were excluded to avoid information on outdated notification systems. A detailed search of the biomedical and public health literature was conducted in two electronic databases, Pubmed and Scopus, using a combination of free-text search terms and medical subject headings. The search included terms related to infectious disease reporting (‘disease notification’, ‘notification system’, ‘infectious disease reporting’, ‘exposure notification’, ‘communicable disease control’) and reporting timeliness (‘reporting time’, ‘notification time’, ‘reporting delay’, ‘time factor’). The date last searched was January 30th 2017. The full electronic search strategy for Pubmed is depicted in Fig 2.

The identified articles from each literature search were screened on title and abstract. Studies published in English, during the period 2000–2017, and concerning human infectious diseases (in general or disease specific) were included. Excluded were studies that only described notification completeness, only described the timeliness between symptom onset and diagnosis, focused on notification compliance of healthcare professionals, or described timeliness of reporting from national to international health organizations. In addition, studies describing a surveillance algorithm, and studies that did not provide information about the designated criteria for timeliness of notification, i.e. the ‘predefined timeframe’, were excluded. Studies without a predefined timeframe that compared the timeliness of different notification systems were included. Systematic reviews were excluded; their conclusions are, however, reflected upon in the discussion.

One researcher (AB) reviewed all titles and abstracts. In case of doubt about inclusion or exclusion, another researcher (CS) was consulted and a decision was reached through discussion.

Subsequently, references were imported into an EndNote bibliographic library, where duplicates were identified and removed. The remaining articles were reviewed in full text to determine their inclusion for data extraction. Reference lists of the included articles and reviews were searched for additional literature.

Data extraction

Information extracted included the country or region of the study setting, year of publication, infectious disease(s), general or disease specific reporting system, study design (comparison study when two or more reporting methodologies were compared, evaluation study when one system was evaluated), level and methodology of reporting, legislation (mandatory or voluntary reporting), reporting delay studied, predefined timeframe for reporting, and the outcomes of the reporting delay(s). The following categorizations were made (a minimal encoding of these categories is sketched after the list):

  1. Level(s) of reporting:
    - level 1 (L1): physician and/or laboratory to local public health department (LHD);
    - level 2 (L2): LHD to regional health department (RHD);
    - level 3 (L3): RHD to national health authority (NHA).
  2. Method of reporting:
    - conventional reporting (postal mail, fax, telephone or e-mail);
    - electronic reporting (including web-based reporting systems, such as electronic laboratory reporting (ELR) and electronic automated laboratory reporting (EALR));
    - mobile phone reporting (using short message services with mobile telephones).
  3. Reporting delay (see Fig 1):
    - D1: delay between onset of disease and notification at the local health department (LHD);
    - D2: delay between ordering a laboratory confirmation test and notification at the LHD;
    - D3X and D3P: delays between the laboratory confirmation test result and notification at the LHD by the laboratory and by the physician, respectively; when a study did not differentiate between reporting by laboratory or by physician, the delay was defined as D3P/X;
    - D4: delay between notification at the LHD and reporting at the regional health department (RHD);
    - D5: delay between reporting at the RHD and the national health department (NHD).
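
As a compact restatement of these extraction categories, the Python enums below encode the reporting levels, reporting methods and delay types defined above. The sketch is purely illustrative; the identifier names are ours and no such code was used in the data extraction.

```python
from enum import Enum

class ReportingLevel(Enum):
    L1 = "physician and/or laboratory -> local public health department (LHD)"
    L2 = "LHD -> regional health department (RHD)"
    L3 = "RHD -> national health authority (NHA)"

class ReportingMethod(Enum):
    CONVENTIONAL = "postal mail, fax, telephone or e-mail"
    ELECTRONIC = "web-based systems, ELR, EALR"
    MOBILE_PHONE = "short message services with mobile telephones"

class Delay(Enum):
    D1 = "onset of disease -> notification at LHD"
    D2 = "laboratory test ordered -> notification at LHD"
    D3X = "laboratory confirmation -> notification at LHD by laboratory"
    D3P = "laboratory confirmation -> notification at LHD by physician"
    D3PX = "laboratory confirmation -> notification at LHD, reporter not specified"
    D4 = "notification at LHD -> report at RHD"
    D5 = "report at RHD -> report at NHD"
```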

For each selected study, one researcher extracted the relevant data.

Timeframes and classification of study outcomes

WHO defines reporting timeliness as the proportion of all expected reports in a reporting system received by a given date [17]. We evaluated the timeliness results of the notification system in each study according to the following timeframes:

  1. The predefined timeframe: the timeliness criteria designated by the study itself. These are defined through legislation, local rules, or by the authors of that specific study. If the authors used a different timeframe for their analysis than the mandatory timeframe, we followed the authors’ choice.
  2. The standardized timeframe: in order to analyze the relation between the timeliness outcomes and notification systems of the different studies on an equal footing, we defined the following standardized timeframes: D1 ≤ 14 days, D2 ≤ 7 days, D3 (including D3P, D3X and D3P/X) ≤ 1 day, D4 + D5 (D4/5) ≤ 5 days and D1-5 ≤ 21 days. We chose rather strict delays for D3 and D4/5, as these can reasonably be achieved by a well-functioning notification system. Less strict delays were chosen for D1 and D2, as they are related to patient and doctor’s delay and to the availability and duration of laboratory tests, which differ per infectious disease.
  3. The disease specific timeframe: as timely intervention to prevent or control an outbreak is disease specific, we defined disease specific median reporting delays between onset of disease and notification at the local health department (D1). These were calculated for timely control measures to reduce the proportion of infections caused by secondary cases to outbreak control levels (‘optimal’ and ‘suboptimal’ conditions), as determined by Bonacic Marinovic et al [4]: for hepatitis A median ≤ 8 or ≤ 17 days, hepatitis B ≤ 1 or ≤ 42 days, measles ≤ 2 or ≤ 5 days, mumps ≤ 3 or ≤ 8 days, pertussis ≤ 4.5 days (only the criterion for suboptimal conditions is available) and for shigellosis ≤ 1 or ≤ 3 days. Both the standardized and the disease specific thresholds are sketched in code after this list.
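
The sketch below collects the standardized and disease specific thresholds defined above as plain Python data structures and adds a trivial check against the standardized timeframe. It restates the values from this section for illustration only; the variable and function names are ours.

```python
# Standardized timeframes (days) used in this review, per delay (see Fig 1).
STANDARDIZED_TIMEFRAME_DAYS = {
    "D1": 14, "D2": 7, "D3": 1, "D4/5": 5, "D1-5": 21,
}

# Disease specific median D1 timeframes (days) for optimal / suboptimal
# outbreak control, after Bonacic Marinovic et al. [4].
DISEASE_SPECIFIC_D1_DAYS = {
    "hepatitis A": (8, 17),
    "hepatitis B": (1, 42),
    "measles": (2, 5),
    "mumps": (3, 8),
    "pertussis": (None, 4.5),  # only the suboptimal criterion is available
    "shigellosis": (1, 3),
}

def meets_standardized(delay: str, observed_days: float) -> bool:
    """True if an observed delay falls within the standardized timeframe."""
    return observed_days <= STANDARDIZED_TIMEFRAME_DAYS[delay]

print(meets_standardized("D3", 0.8))  # True: within the 1-day threshold
print(meets_standardized("D1", 20))   # False: exceeds the 14-day threshold
```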

Timeliness outcomes of the reporting system in each study were classified as follows: a score of ≥ 80% of notifications in time was classified as ‘sufficient’, in line with the WHO JEE Tool, which recommends timely reporting by at least 80% of all reporting units [14]. Scores of ≥ 50% and < 80% of notifications in time were classified as ‘partly sufficient’, and scores of < 50% of notifications in time as ‘insufficient’, as we consider a system to function improperly when more than half of all notifications fall outside the timeframe.
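
As a compact restatement of this scoring rule, the following illustrative Python function maps the percentage of notifications reported in time to the three classes used throughout the Results; the function name is ours.

```python
def classify_timeliness(pct_in_time: float) -> str:
    """Classify a notification system by the percentage of notifications
    reported within the applicable timeframe (predefined, standardized
    or disease specific)."""
    if pct_in_time >= 80:
        return "sufficient"        # in line with the WHO JEE 80% criterion [14]
    if pct_in_time >= 50:
        return "partly sufficient"
    return "insufficient"

print(classify_timeliness(85))  # sufficient
print(classify_timeliness(63))  # partly sufficient
print(classify_timeliness(40))  # insufficient
```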

Several included studies presented timeliness outcomes for different delays in the notification system, for different (groups of) diseases, or for different notification systems within the same study. When these outcomes involved different scores, a mixed score was given: ‘sufficient/partly sufficient’, ‘sufficient/insufficient’, or ‘partly sufficient/insufficient’. When different outcomes over time were reported in a follow-up study, we chose the most recent outcome for scoring, as this was usually the best and final result of a notification system. In intervention studies, we chose the outcome of the most successful intervention for scoring. When a study presented outcomes of multiple reporters in different geographic areas, the outcome of ≥ 80% of the reporters was used for scoring.

Subsequently, factors associated with timely notification systems were assessed. In studies comparing different notification systems, these outcomes were assessed separately, and in intervention studies the timeliness of the different reporting systems was compared to identify factors related to timeliness.

Results

An overview of the search process is depicted in the flowchart in Fig 3. In total, 48 articles were included in the review [3, 9–11, 13, 15, 16, 18–58]. An overview of study characteristics and results is shown in Table 1. The articles involve notification systems in 17 countries, mainly in North America (United States, 20 studies), Europe (14 studies) and East Asia (6). The majority of the studies (27) analyzed the timeliness of notification of one specific infectious disease, either in a disease specific notification system (13) or in a generic notification system (14). Groups of infectious diseases were analyzed in 21 studies; one study analyzed the timeliness of reporting of several syndromes.

There were 40 evaluation studies, of which 19 included a comparison of notification methods, and 8 intervention studies. Mandatory reporting was most common in notification systems (42 studies), next to voluntary reporting (3 studies) or a combination of both (3 studies). Most studies described reporting at the local level (L1, 31 studies), followed by a combination of local and regional/national level (L1-L2 and/or L1-L2-L3, 13 studies). Four studies reported on the regional and/or national level (L2 and/or L2-L3). The studies analyzed conventional reporting methods (13 studies), electronic reporting (10), a combination of both (20), or mobile phone reporting (2). Three studies did not provide information on the reporting methodology and were excluded from the analyses of timeliness related to reporting systems. Reporting delay at the local level, including the delay between physician or laboratory and the local health department after laboratory confirmation (D3), was studied most often (43 studies). Only 5 studies focused on delay towards the regional or national level. An overview of the delays reported in the 48 articles is given in the supporting information (S1 Table).

Timeliness

Out of 48 studies, 39 provided a predefined timeframe. Nine studies without a predefined timeframe provided a comparison between outcomes of different notification systems. In total, 35 of the 39 studies with a predefined timeframe referred to a quantitative timeframe and 4 studies to a qualitative timeframe (‘immediate’, ‘as soon as possible’), see Table 1. Quantitative timeframes involved numbers of days/weeks/months, incubation periods per infectious disease (3 studies), or the period for effective post exposure prophylaxis for contacts (1 study). The most common predefined timeframe for D3P/X was reporting ≤ 1 day (12 studies) or ≤ 32–48 hours (3 studies). Predefined timeframes for notification at the local level varied considerably, between ≤ 1 day and ≤ 3 weeks, and at the regional/national level between ≤ 1 day and ≤ 2 months.

In 11 of the 39 studies (28%), notification delays met the predefined timeframe; in 12 (31%) they did not, and in the other 16 studies the outcomes were partly sufficient (8, 21%) or a mixed score (8, 21%). In Fig 4 these outcomes are visualized according to the delay described in each study, including information on the notification system. Notification systems involving the laboratory (D3X or D3P/X) showed the best results: 3 out of 4 (D3X) and 5 out of 7 (D3P/X) studies had sufficient or mixed sufficient/partly sufficient timeliness according to their predefined timeframe. Notification systems only involving physicians (D3P) showed the least favourable results: in 5 out of 10 studies the timeliness was insufficient according to their predefined timeframes.

Fig 4. Overview of scores according to predefined and standardized timeframes.

https://doi.org/10.1371/journal.pone.0198845.g004

In 34 of these 39 studies, information on the notification system(s) was provided: 13 studies used conventional methods, 10 electronic methods, 9 a combination, and 2 mobile phone reporting. As shown in Fig 4, there appeared to be no relation between notification system and score. Of the eleven studies where the notification system scored sufficient, four used a longer predefined timeframe for delays D3 and D4-5 [29, 34, 37, 45], and four studies with a strict predefined timeframe (D3 or D4 < 1 day) used an electronic notification system. Three of the latter studies were conducted in East Asian countries; both the Chinese and the Taiwanese studies revealed sufficient notification [34, 35, 53, 56]. Of the 12 studies with an insufficient notification system, three of the four studies with a strict timeframe (D3 < 1 day) used either conventional or electronic reporting [9, 23, 41]. Three of these 12 studies with insufficient scores (25%) described D1 [3, 39, 47], while in total only 5/38 studies (13%) included D1. Eight of the twelve studies were from Europe (Italy, UK, Sweden and Switzerland) [9, 20, 23, 24, 41, 47, 48, 54].

For the analysis of notification systems according to the standardized timeframes, 45 studies were included (Fig 4). In 13 studies (29%) the system scored sufficient, in 15 studies (33%) it did not, and in the other 17 studies the outcomes were partly sufficient (13, 29%) or a mixed score (4, 9%). Eight studies scored better with the standardized timeframe and eight scored worse. Sufficient notification systems frequently involved D1, D2 and D3X (8/13). Insufficient notification systems frequently involved physicians (D3P) (7/15) and public health authorities, D4-5 (5/15). In parallel with the outcomes of the predefined timeframes, no clear relation between scoring result and notification system could be observed. Although the distribution of outcomes in both timeframes was comparable (24/38), some studies did score differently according to the predefined or standardized timeframe: 3/12 studies scoring an insufficient notification system for the predefined timeframe improved in score for the standardized timeframe, while 3/11 studies changed from sufficient to partly sufficient or insufficient.

With regard to the disease specific timeframe, 8 studies provided information regarding delay D1 for one or more specific diseases. In none of them was the notification system timely enough for optimal outbreak control. Timeliness sufficient for suboptimal outbreak control was shown for notification systems for hepatitis A [10, 33, 46], hepatitis B [25] and measles [11]. However, the system was insufficient for outbreak control in most studies: hepatitis A [25], measles [21, 25, 46, 47], pertussis [10] and shigellosis [15, 46, 52].

Comparison and intervention studies

In 13 studies, the timeliness of electronic systems was compared with conventional systems. In the majority (10/13), electronic reporting was faster than conventional reporting, improving timeliness by days (range 0–11) [9–11, 13, 33, 40, 44, 49] up to months [20, 30]. However, none of these studies fulfilled the predefined timeframe, and only 2 met the standardized timeframe [10, 44]. In 3 studies, the conventional reporting method was as fast as, or faster than, the electronic system [26, 32, 50].

Six studies analyzed a variety of interventions in the notification systems: increased reporting frequency (daily reporting [18]), sentinel laboratory surveillance [21], legal adjustments [24], training [25] and better facilities (fax), SMS text messages [43], and systematic monitoring of delayed reports (conventional reporting) [51]. In all studies timeliness improved (by several days); however, none of the interventions resulted in sufficient timeliness according to predefined or standardized timeframes.

Discussion

To our knowledge, this is the first systematic review assessing the timeliness of notification systems. Thirty-nine of the 48 identified studies, from 17 different countries, provided quantitative data including a predefined timeframe. Timeliness of almost one third of the systems was sufficient, of one third insufficient, and of the others partly sufficient, for both the predefined and the standardized timeframes. Reporting by laboratories, alone or combined with reporting by physicians, was timelier than other delays in the notification chain in both timeframes. Outcomes were not related to the type of notification system. Although electronic systems were faster in comparative studies (10/13), this rarely resulted in sufficient scores for these systems, according to either predefined or standardized timeframes. The disease specific timeframe for optimal outbreak control was not met by any study.

Notification systems for infectious diseases are country, or even state/province, specific and therefore difficult to compare [3]. However, the studies in this review demonstrate that many components of the notification chain (Fig 1) are generic, including indicator based reporting at the local and regional/national level, reporting by the treating physician and/or diagnosing laboratory at the local level, and, in most cases, legally mandatory notification according to quantitative timeframes (hours, days, weeks). Remarkably, 29 of the 48 studies involved the delays from physician and/or laboratory to the local health authorities (D3P, D3X or D3P/X). The predefined timeframes for this delay, whether mandatory or chosen by the authors, were also quite comparable; for example, 13 studies used a timeframe of ≤ 1 day. Nevertheless, differences in predefined timeframes do exist; we therefore introduced in this review a standardized timeframe per delay in order to compare notification timeliness between studies. We chose standardized timeframe delays that were achievable. Eight studies had no timeframe. Although the overall outcome with the predefined timeframe and with the standardized timeframe was comparable, as shown in Fig 4, the outcomes of over one third of the studies (14/38) changed when the standardized timeframe was applied. In our opinion, the outcomes of applying a standardized timeframe are the most representative in the appraisal of the timeliness of a notification system.

It is remarkable that studies provide little background explanation for the designated timeframes, except when incubation periods are used, which are considered to be related to communicability and therefore critical when considering control measures [3, 46], or when timeframes related to measures such as post exposure prophylaxis are used [37]. The general purpose of notification systems is to serve as an early warning system to identify outbreaks, to enable public health authorities to take corrective action through effective preventive and/or control measures, and to monitor the effect of implemented measures [48]. Timely notification for this purpose is disease specific, as we have demonstrated earlier [4]. Using available data on notification delays in the Netherlands for six person-to-person communicable diseases, over 80% of secondary cases had already been infected by the index case at the time of reporting to the local public health services. Timely notification will therefore mainly prevent tertiary, and further, cases. In none of the 8 studies in this review that provided relevant information on D1 medians for these 6 diseases was the notification system timely enough for effective outbreak control. This might be one of the reasons why infectious diseases such as measles are difficult to control and are still endemic in many industrialized countries.

Another aspect is that only certain parts of the notification chain can be influenced through the notification system: mainly the reporting of a confirmed infectious disease from laboratory and/or physician to the local health department (D3), and from there to the regional and/or national level (D4-D5). Timeliness outcomes for these delays were less sufficient than for D1-D2 in the standardized system. This review therefore shows that many notification systems can be improved to minimize delays D3 and D4-5. However, the patient delay in consulting a physician is not related to the notification system, nor is the doctor’s delay in recognizing a disease. As patient, doctor’s and laboratory delays (D1, D2) take longer than notification delays D3, D4 and D5 (S1 Table), optimizing notification systems will only partly optimize the timeliness of the entire notification chain. Reduction of patient, doctor’s and laboratory delays, through increased awareness and enhanced availability of laboratory tests, is essential to substantially improve the timeliness of the notification chain. This is certainly indicated in situations of increased threat. In such situations, temporary conventional notification methods, such as telephone calls to the local health department, also have added value. Decisions on investments in notification systems should therefore weigh the reduction in delays D3 and D4-D5 against the potential reduction of D1-D2 and D3 (telephone) in case of specific health threats.

Although this was not the primary aim of the study, we identified the following facilitators and barriers related to timeliness outcomes of notification systems:

  1. Concerning reporters (physicians, laboratories): facilitating factors were motivation, communication (between public health services and reporters), awareness raising, acceptance and simplicity of procedures and clinical guidelines, knowledge, training, phone call reminders and regular feedback [3, 9, 16, 25, 31, 32, 36, 38, 45, 54]. Barriers were lack of knowledge, lack of communication and uncertainty about notification procedures [39, 45].
  2. Available resources: availability of staff, technical facilities (e.g. fax) and rapid laboratory transport [25, 27, 42]. Barriers were differing laboratory software among laboratories and the use of out-of-state laboratory facilities [38, 53, 57].
  3. Notification procedures: unification of reporting times, legal adjustments of notification time (e.g. regarding frequency of reporting), a centralized database, periodic evaluation of the system and analysis of delayed reports [15, 16, 18, 23, 24, 42, 47]. Barriers were administrative procedures and a high volume of cases [39].
  4. Others: a higher number of notifiable cases during an epidemic was reported as a barrier in one study [28], but was considered a facilitating factor in others because extra supporting staff was made available [27, 47]. Public education is a facilitator to reduce patient delay [16].

Although we cannot draw conclusions about the extent to which these barriers and facilitators influence the timeliness of notification systems, addressing these aspects clearly contributes to an optimally functioning system.

Over the last two decades, several studies have demonstrated the value of electronic reporting systems in reducing notification delays [7]. In recent years, however, implementation of electronic reporting has also revealed challenges. Gluskin et al. summarize in their systematic literature review that ELR reduces reporting time by 8.5 days on average (range 4–17 days), comparable with the results of our study [59]. Besides increased volumes of incomplete notifications, the coding of infectious diseases can be a challenge for laboratories when adjusting diagnostic tests, and for public health authorities whose computer systems have to keep up with the ELR codes. Considerable information technology infrastructure, expertise and workforce also need to be available for a well-operating system, requiring substantial financial investments. The next step forward would be notification through Electronic Medical Records (EMR), also requiring technical and financial investments, but addressing the physician reporting delay (D3P), which had the lowest timeliness scores in our review. Such a system can also combine clinical systems and several laboratory tests, resulting in notifications that comply with case definitions, which will considerably reduce the workload for both public health services and physicians [60]. Another interesting development in rural, resource-poor settings is the use of mobile phone reporting. The studies of Quan et al and Rosewell et al showed that mobile phone reporting using SMS shortened reporting time, compared with conventional paper-based reporting and follow-up, from 37 to 7 days (medians) and from 84 to 2.4 days (averages) in South Africa and Papua New Guinea, respectively [43, 58]. This methodology is simple, user friendly, reliable, and technically feasible in rural areas. It might be interesting to consider the use of mobile phone texting in addition to existing sophisticated notification systems in situations of newly emerging diseases or enhanced surveillance in high income countries as well.

Limitations

Studies used different parameters to calculate the timeliness of their notification systems. When medians, percentiles or means were used, we had to classify the score according to the percentage of notifications within the timeframes. In case of doubt, or when the score was close to the cut-off of 50% or 80%, a second author was consulted to reach a decision. The opinion of the study’s authors, as reflected in the paper, was also used to arrive at a score.

Some studies used the delay between specimen collection at the laboratory and notification at the local health department. These delays were also included as D2, even though the test result was not yet available, in order to limit the number of different delays used in this study. It is noteworthy that 8 studies, while presenting the delays of their notification system, did not include a predefined timeframe, either mandatory or chosen by the authors. In several studies there was also a difference between the mandatory timeframe and the timeframe chosen by the authors, without explanation. A realistic mandatory timeframe should be developed. It might be good to add a standardized timeframe, at least for D3 and D4-5, which are most affected by the notification system, to the WHO Joint External Evaluation tool.

The cut-offs in the scoring and the delays in the standardized timeframe were chosen on the grounds described above, but are still based on the opinions of the authors of this study. We consider 80% timely notifications to demonstrate a sufficient system, in line with the WHO standard for an indicator based surveillance system [14]; however, an early warning system in which one in five notifications is not timely can considerably affect the effectiveness of control measures. When applying a 90% cut-off as sufficient, the number of studies with sufficiently timely notification systems almost halved, from 11 (29%) to 6 (16%). We therefore adhered to the WHO standard.

In several articles, different notification systems were mentioned, both conventional and electronic, without clarifying which notifying organization used which system. In those cases we classified the system as combined conventional/electronic (C/E). Given the limited number of selected articles, this review might therefore not have detected an existing difference between conventional and electronic systems.

Lastly, we did not include the completeness of notifications (percentage of notified diseases) or the completeness of the information provided in the notification. We are aware that certain aspects of notification systems facilitate completeness; for example, ELR facilitates notification completeness and physician reporting facilitates completeness of the information provided. We refer readers to the many articles and reviews written on this subject.

Conclusion

This systematic review shows that a minority of notification systems meet predefined, standardized or disease specific timeframes. Systems that include laboratory reporting, alone or combined with reporting by physicians, are more often associated with timely notification. Electronic reporting systems are not associated with sufficient timeliness of notifications, while they require considerable investment; even when fully implemented, they will only reduce part of the notification chain, excluding D1-D2. Therefore, during outbreak threats, patient, doctor’s and laboratory testing delays need to be reduced to achieve timely detection and notification. Conventional reporting methods, such as phone calls, and mobile phone texting can still play an important role, alongside alerting potential patients and physicians and providing appropriate laboratory tests. Public health authorities should be aware of these aspects and incorporate contingency systems for enhanced notification in their preparedness plans.

Supporting information

S1 Table. Delays of the notification system per study and timeliness of the notification system according to the authors’ predefined timeframe.

https://doi.org/10.1371/journal.pone.0198845.s001

(TIFF)

Acknowledgments

The authors thank Dr. Susan Hahné (RIVM), for critical reading of the manuscript.

References

  1. Gibbons CL, Mangen MJ, Plass D, Havelaar AH, Brooke RJ, Kramarz P, et al. Measuring underreporting and under-ascertainment in infectious disease datasets: a comparison of methods. BMC Public Health. 2014;14:147. pmid:24517715; PubMed Central PMCID: PMC4015559.
  2. German RR, Lee LM, Horan JM, Milstein RL, Pertowski CA, Waller MN, et al. Updated guidelines for evaluating public health surveillance systems: recommendations from the Guidelines Working Group. MMWR Recomm Rep. 2001;50(RR-13):1–35; quiz CE1-7. pmid:18634202.
  3. Jajosky RA, Groseclose SL. Evaluation of reporting timeliness of public health surveillance systems for infectious diseases. BMC Public Health. 2004;4:29. pmid:15274746.
  4. Bonacic Marinovic A, Swaan C, van Steenbergen J, Kretzschmar M. Quantifying reporting timeliness to improve outbreak control. Emerg Infect Dis. 2015;21(2):209–16. pmid:25625374; PubMed Central PMCID: PMC4313625.
  5. WHO. International Health Regulations 2005. http://apps.who.int/iris/bitstream/10665/246107/1/9789241580496-eng.pdf?ua=1p
  6. EU. EU-Decision_1082 on cross-border health threats. http://eceuropaeu/health/sites/health/files/preparedness_response/docs/decision_serious_crossborder_threats_22102013_enpdf.2013.
  7. Wurtz R, Cameron BJ. Electronic laboratory reporting for the infectious diseases physician and clinical microbiologist. Clin Infect Dis. 2005;40(11):1638–43. pmid:15889362.
  8. Effler P, Ching-Lee M, Bogard A, Ieong MC, Nekomoto T, Jernigan D. Statewide system of electronic notifiable disease reporting from clinical laboratories: comparing automated reporting with conventional methods. [Erratum appears in JAMA 2000 Jun 14;283(22):2937]. JAMA. 1999;282(19):1845–50. pmid:10573276.
  9. Jansson A, Arneborn M, Skärlund K, Ekdahl K. Timeliness of case reporting in the Swedish statutory surveillance of communicable diseases 1998–2002. Scandinavian journal of infectious diseases. 2004;36(11–12):865–72. pmid:15764175.
  10. Ward M, Brandsema P, Van Straten E, Bosman A. Electronic reporting improves timeliness and completeness of infectious disease notification, The Netherlands, 2003. Euro surveillance: bulletin européen sur les maladies transmissibles = European communicable disease bulletin. 2005;10(1):27–30.
  11. Overhage JM, Grannis S, McDonald CJ. A comparison of the completeness and timeliness of automated electronic laboratory reporting and spontaneous reporting of notifiable conditions. American journal of public health. 2008;98(2):344. pmid:18172157.
  12. Steele L, Orefuwa E, Dickmann P. Drivers of earlier infectious disease outbreak detection: a systematic literature review. Int J Infect Dis. 2016;53:15–20. pmid:27777092.
  13. Panackal AA, M'Ikanatha NM, Tsui FC, McMahon J, Wagner MM, Dixon BW, et al. Automatic electronic laboratory-based reporting of notifiable infectious diseases at a large health system. Emerg Infect Dis. 2002;8(7):685–91. pmid:12095435; PubMed Central PMCID: PMC2730325.
  14. WHO. Joint External Evaluation Tool. http://appswhoint/iris/bitstream/10665/204368/1/9789241510172_engpdf?ua=1.2016
  15. Vogt RL, Spittle R, Cronquist A, Patnaik JL. Evaluation of the timeliness and completeness of a Web-based notifiable disease reporting system by a local health department. J Public Health Manag Pract. 2006;12(6):540–4. pmid:17041302.
  16. Yoo H-S, Park O, Park H-K, Lee E-G, Jeong E-K, Lee J-K, et al. Timeliness of national notifiable diseases surveillance system in Korea: a cross-sectional study. BMC Public Health. 2009;9(1):93.
  17. WHO. Protocol for the Assessment of National Communicable Disease Surveillance and Response Systems. WHO/CDS/CSR/ISR/2001.2. 2001.
  18. Altmann M, Spode A, Altmann D, Wadl M, Benzler J, Eckmanns T, et al. Timeliness of surveillance during outbreak of Shiga Toxin-producing Escherichia coli infection, Germany, 2011. Emerg Infect Dis. 2011;17(10):1906–9. pmid:22000368; PubMed Central PMCID: PMC3310688.
  19. Begier EM, Barrett NL, Mshar PA, Johnson DG, Hadler JL, Team FER. Gram-positive rod surveillance for early anthrax detection. Emerg Infect Dis. 2005;11(9):1483–6. pmid:16229790.
  20. Carrieri M, Salmaso S, Bella A, D'Ancona F, Demicheli V, Marongiu C, et al. Evaluation of the SIMI system, an experimental computerised network for the surveillance of communicable diseases in Italy. European journal of epidemiology. 2000;16(10):941–7. pmid:11338126.
  21. Choe YJ, Eom HS, Bae GR, Cho SI. Timely measles surveillance in the Republic of Korea, 2002–2009: Impact of sentinel laboratory surveillance. Journal of medical virology. 2014;86(2):322–8. pmid:24027198.
  22. Curtis AB, McCray E, McKenna M, Onorato IM. Completeness and timeliness of tuberculosis case reporting. A multistate study. Am J Prev Med. 2001;20(2):108–12. pmid:11165451.
  23. Day F, Sutton G. General practitioner notifications of gastroenteritis and food poisoning: cause for concern. Journal of public health. 2007;29(3):288–91. pmid:17622646.
  24. Freeman R, Charlett A, Hopkins S, O'Connell A, Andrews N, Freed J, et al. Evaluation of a national microbiological surveillance system to inform automated outbreak detection. Journal of Infection. 2013;67(5):378–84. pmid:23876330.
  25. Garcell HG, Hernandez TMF, Abdo EAB, Arias AV. Evaluation of the timeliness and completeness of communicable disease reporting: Surveillance in The Cuban Hospital, Qatar. Qatar medical journal. 2014;2014(1):50. pmid:25320693.
  26. Ghosh TS, Vogt RL. Active influenza surveillance at the local level: a model for local health agencies. American journal of public health. 2008;98(2):213. pmid:18172149.
  27. Goto DYK LL, Felix JVC, Kobayashi VL, Chaves MMN. Assessment of the timeliness for notification of dengue in the state of Paraná. Acta Paul Enferm. 2016;29(3):355–62.
  28. Grills NJ, Rowe SL, Gregory JE, Lester RA, Fielding JE. Evaluation of 'Campylobacter' Infection Surveillance in Victoria. 2010.
  29. Haller S, Eckmanns T, Benzler J, Tolksdorf K, Claus H, Gilsdorf A, et al. Results from the first 12 months of the national surveillance of healthcare associated outbreaks in Germany, 2011/2012. PloS one. 2014;9(5):e98100. pmid:24875674.
  30. Heisey-Grove DM, Church DR, Haney GA, DeMaria A Jr. Enhancing surveillance for hepatitis C through public health informatics. Public health reports. 2011;126(1):13. pmid:21337927.
  31. Huaman MA, Araujo-Castillo RV, Soto G, Neyra JM, Quispe JA, Fernandez MF, et al. Impact of two interventions on timeliness and data quality of an electronic disease surveillance system in a resource limited setting (Peru): a prospective evaluation. BMC medical informatics and decision making. 2009;9(1):16.
  32. Johnson MG, Williams J, Lee A, Bradley KK. Completeness and timeliness of electronic vs. conventional laboratory reporting for communicable disease surveillance—Oklahoma, 2011. Public Health Rep. 2014;129(3):261–6. pmid:24791024; PubMed Central PMCID: PMC3982541.
  33. Kite-Powell A. HJ, Hopkins RS, DePasquale JM. Potential Effects of Electronic Laboratory Reporting on Improving Timeliness of Infectious Disease Notification—Florida, 2002–2006. CDC MMWR. 2008;57(49):1325–8. pmid:19078921.
  34. Lo H-Y, Yang S-L, Chou P, Chuang J-H, Chiang C-Y. Completeness and timeliness of tuberculosis notification in Taiwan. BMC public health. 2011;11(1):915.
  35. McKerr C, Lo YC, Edeghere O, Bracebridge S. Evaluation of the national Notifiable Diseases Surveillance System for dengue fever in Taiwan, 2010–2012. PLoS Negl Trop Dis. 2015;9(3):e0003639. pmid:25794177; PubMed Central PMCID: PMC4368052.
  36. Mlynarski D, Rabatsky-Ehr T, Petit S, Purviance K, Mshar P, Begier E, et al. Evaluation of Gram-positive rod surveillance for early anthrax detection. Epidemiology and infection. 2009;137(11):1623–30. pmid:19397835.
  37. Moore KM, Reddy V, Kapell D, Balter S. Impact of electronic laboratory reporting on hepatitis A surveillance in New York City. J Public Health Manag Pract. 2008;14(5):437–41. pmid:18708886.
  38. Murray EL, Samuel MC, Brodsky J, Akiba CF, King C, Li M, et al. Neisseria gonorrhoeae Outbreak: Unintended Consequences of Electronic Medical Records and Using an Out-of-State Laboratory—California, July 2009–February 2010. Sexually transmitted diseases. 2013;40(7):556–8. pmid:23965770.
  39. Nazzal ZA, Said H, Horeesh NA. Measles surveillance in Qatar, 2008: quality of surveillance data and timeliness of notification. East Mediterr Health J. 2011;17(11):813–7. pmid:22276487.
  40. Nguyen TQ, Thorpe L, Makki HA, Mostashari F. Benefits and barriers to electronic laboratory results reporting for notifiable diseases: the New York City Department of Health and Mental Hygiene experience. American journal of public health. 2007;97(Supplement_1):S142–S5.
  41. Paranthaman K, Kent L, McCarthy N, Gray S. Invasive meningococcal disease: Completeness and timeliness of reporting of confirmed cases in Thames Valley, 2006–2007. Public health. 2009;123(12):805–8. pmid:19958917.
  42. Pascopella L, Kellam S, Ridderhof J, Chin DP, Reingold A, Desmond E, et al. Laboratory reporting of tuberculosis test results and patient treatment initiation in California. Journal of clinical microbiology. 2004;42(9):4209–13. pmid:15365013.
  43. Quan V, Hulth A, Kok G, Blumberg L. Timelier notification and action with mobile phones—towards malaria elimination in South Africa. Malaria journal. 2014;13(1):1–8.
  44. Rajeev D, Staes C, Evans RS, Price A, Hill M, Mottice S, et al., editors. Evaluation of HL7 v2.5.1 electronic case reports transmitted from a healthcare enterprise to public health. AMIA Annual Symposium Proceedings; 2011: American Medical Informatics Association.
  45. Ratnayake R, Allard R. Challenges to the Surveillance of Meningococcal Disease in an Era of Declining Incidence in Montréal, Québec. Can J Public Health. 2013;104(4):e335–e9. pmid:24044476.
  46. Reijn E, Swaan CM, Kretzschmar ME, van Steenbergen JE. Analysis of timeliness of infectious disease reporting in the Netherlands. BMC public health. 2011;11(1):409.
  47. Richard J-L, Vidondo B, Mäusezahl M. A 5-year comparison of performance of sentinel and mandatory notification surveillance systems for measles in Switzerland. European journal of epidemiology. 2008;23(1):55–65. pmid:17899399.
  48. Riera-Montes M, Velicko I. The Chlamydia surveillance system in Sweden delivers relevant and accurate data: results from the system evaluation, 1997–2008. Euro Surveill. 2011;16(27):19907. pmid:21794217.
  49. Samoff E, Fangman MT, Fleischauer AT, Waller AE, MacDonald PD. Improvements in timeliness resulting from implementation of electronic laboratory reporting and an electronic disease surveillance system. Public Health Reports. 2013;128(5):393. pmid:23997286.
  50. Severi E, Dabrera G, Boxall N, Harvey-Vince L, Booth L, Balasegaram S. Timeliness of Electronic Reporting and Acceptability of Public Health Follow-Up of Routine Nonparatyphoidal and Nontyphoidal Salmonella Infections, London and South East England, 2010 to 2011. Journal of Food Protection. 2014;77(1):94–9.
  51. Silin M, Laraque F, Munsiff SS, Crossa A, Harris TG. The impact of monitoring tuberculosis reporting delays in New York City. J Public Health Manag Pract. 2010;16(5):E09–17. pmid:20689383.
  52. Stachel AG, Waechter H, Bornschlegel K, Reddy V, Hanson H, Wen T, et al. Reassessing provider reporting in the age of electronic surveillance. Journal of Public Health Management and Practice. 2014;20(2):240–5. pmid:24458313.
  53. Sun JL, Zhou S, Geng QB, Zhang Q, Zhang ZK, Zheng CJ, et al. Comparative evaluation of the diagnosis, reporting and investigation of malaria cases in China, 2005–2014: transition from control to elimination for the national malaria programme. Infect Dis Poverty. 2016;5(1):65. pmid:27349745; PubMed Central PMCID: PMC4924285.
  54. Tosti M, Longhi S, de Waure C, Mele A, Franco E, Ricciardi W, et al. Assessment of timeliness, representativeness and quality of data reported to Italy's national integrated surveillance system for acute viral hepatitis (SEIEVA). Public health. 2015;129(5):561–8. pmid:25795017.
  55. Troppy S, Haney G, Cocoros N, Cranston K, DeMaria A Jr. Infectious disease surveillance in the 21st century: an integrated web-based surveillance and case management system. Public Health Reports. 2014;129(2):132. pmid:24587547.
  56. Xiaoqiang L, Chongsuvivatwong V, Jiraphongsa C, Lin L, Jun Y, Qiongfen L, et al. Evaluation of hepatitis A surveillance data and outbreak detection in Yunnan Province, China, from 2004 through 2009. Southeast Asian J Trop Med Public Health. 2011;42:839–50. pmid:22299466.
  57. Zucs A, Benzler J, Krause G. Mandatory disease reporting by German laboratories: a survey of attitudes, practices and needs. Euro Surveill. 2005;10(1):26–7. pmid:15701938.
  58. Rosewell A, Ropa B, Randall H, Dagina R, Hurim S, Bieb S, et al. Mobile phone-based syndromic surveillance system, Papua New Guinea. Emerg Infect Dis. 2013;19(11):1811–8. pmid:24188144; PubMed Central PMCID: PMC3837650.
  59. Gluskin RT, Mavinkurve M, Varma JK. Government leadership in addressing public health priorities: strides and delays in electronic laboratory reporting in the United States. Am J Public Health. 2014;104(3):e16–21. pmid:24432922; PubMed Central PMCID: PMC3953791.
  60. Klompas M, Haney G, Church D, Lazarus R, Hou X, Platt R. Automated identification of acute hepatitis B using electronic medical record data to facilitate public health surveillance. PLoS One. 2008;3(7):e2626. pmid:18612462.