
Estimating force of infection from serologic surveys with imperfect tests

  • Neal Alexander,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Resources, Software, Validation, Visualization, Writing – original draft, Writing – review & editing

    neal.alexander@lshtm.ac.uk

    Affiliation MRC International Statistics and Epidemiology Group, London School of Hygiene and Tropical Medicine, London, United Kingdom

  • Mabel Carabali,

    Roles Data curation, Investigation, Resources, Validation, Writing – review & editing

    Affiliation Department of Epidemiology, Biostatistics and Occupational Health, McGill University, Montreal, Quebec, Canada

  • Jacqueline K. Lim

    Roles Conceptualization, Investigation, Validation, Writing – review & editing

    Affiliation Global Dengue and Aedes-transmitted Diseases Consortium (GDAC), International Vaccine Institute, Seoul, Korea

Abstract

Background

The force of infection, or the rate at which susceptible individuals become infected, is an important public health measure for assessing the extent of outbreaks and the impact of control programs.

Methods and findings

We present Bayesian methods for estimating force of infection using serological surveys of infections which produce a lasting immune response, accounting for imperfections of the test, and uncertainty in such imperfections. In this estimation, the sensitivity and specificity can either be fixed, or belief distributions of their values can be elicited to allow for uncertainty. We analyse data from two published serological studies of dengue, one in Colombo, Sri Lanka, with a single survey and one in Medellin, Colombia, with repeated surveys in the same individuals. For the Colombo study, we illustrate how the inferred force of infection increases as the sensitivity decreases, and the reverse for specificity. When 100% sensitivity and specificity are assumed, the results are very similar to those from a standard analysis with binomial regression. For the Medellin study, the elicited distribution for sensitivity had a lower mean and higher variance than the one for specificity. Consequently, taking uncertainty in sensitivity into account resulted in a wide credible interval for the force of infection.

Conclusions

These methods can provide more realistic estimates of the force of infection and help inform the choice of serological tests for future serosurveys.

Introduction

The force of infection, or the rate at which susceptible individuals become infected, is an important public health measure of the speed and extent of an epidemic. It can be used to quantify the impact of disease control programs, and to prioritize and identify geographical regions requiring further measures, such as vaccine implementation [1–5]. For infections inducing a lasting immune response, the force of infection is usually estimated via serological surveys (‘serosurveys’) of immunological status. Ideally, assays used in such surveys should be highly sensitive and specific, while also suitable for high throughput in terms of cost and personnel requirements [6–11]. In practice, however, available assays may not completely meet all these criteria, as is currently evident with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the virus responsible for coronavirus disease (COVID-19) [12]. The greater the imperfections in sensitivity and specificity, the less accurate the corresponding estimates of the force of infection will be, unless such imperfections are taken into account.

The force of infection may be estimated from single or repeated serosurveys. In the former case, the simplest analysis is to assume that the force of infection was constant over calendar time and age, and to consider each person’s age to be their duration of exposure [13]. More sophisticated models allow for changing force of infection over time, or over age, or even allow for the presence of maternal antibodies if infants are included [4, 14]. Carrying out more than one survey in the same individuals provides more robust estimates of the force of infection during a given study period [4, 7]. Using repeated surveys, rate ratios can be obtained from binomial regression with a complementary log-log link and the logarithm of the time between surveys as an offset [13], as sketched below. While age is used as the time at risk in the analysis of a single survey, in repeated surveys it can be considered a risk factor like any other. However, whether one or more surveys are analysed, errors in test status are usually ignored. In particular, for repeated surveys, individuals testing positive at baseline are usually considered no longer at risk [1, 4, 7], ignoring the possibility that they were false positives.
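As a point of reference for this standard analysis (a minimal sketch with hypothetical aggregated data, not code from the present study), a single-survey fit in R might look as follows; for repeated surveys, the offset would instead be the logarithm of the time between surveys, restricted to those seronegative at baseline.

```r
# Standard single-survey analysis: assuming a constant force of infection lambda,
# P(seropositive by age a) = 1 - exp(-lambda * a), so
# cloglog(p) = log(lambda) + log(a), i.e. an intercept-only binomial regression
# with a complementary log-log link and log(age) as an offset.

# hypothetical aggregated data: positives out of n tested, by age-group midpoint (years)
dat <- data.frame(age = c(3, 6, 9, 12),
                  n   = c(120, 110, 105, 95),
                  pos = c(35, 60, 75, 80))

fit <- glm(cbind(pos, n - pos) ~ 1 + offset(log(age)),
           family = binomial(link = "cloglog"), data = dat)

exp(coef(fit)[1])           # estimated force of infection per year
exp(confint.default(fit))   # Wald 95% confidence interval
```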

The choice of assay may substantially affect a study’s interpretation [15]. Various methods have taken certain kinds of test imperfection into account, for either single or repeated surveys. In particular, Trotter & Gay [16] developed a compartmental model of multiple surveys, in which the force of infection and imperfect sensitivity were estimated for Neisseria meningitidis. For a single survey, Alleman et al. [17] and Hachiya et al. [18] estimated the force of infection, and simultaneously test sensitivity, for rubella and measles, by assuming that imperfect sensitivity was the reason seroprevalence did not necessarily reach 100% at the highest ages. Tan et al. [19] used a model for dengue in which sensitivity declines over time as antibody levels decrease, applied to two independent population serosurveys of blood donors. Olive et al. [20] estimated the force of infection for Rift Valley fever based on fixed values of sensitivity and specificity for a single survey.

Here we provide methods to estimate force of infection, from a single serosurvey or two serosurveys in the same individuals, accounting for imperfections in sensitivity and/or specificity, and for uncertainty in these parameters.

Methods

We start from methods for estimating prevalence based on an imperfect diagnostic test, as reviewed by Lewis & Torgerson [21], and use similar notation. Estimation is done using a Bayesian framework and Markov chain Monte Carlo (MCMC) [22]. We assume that the immune response being measured is long-lasting so that, for example, apparent seroreversions, i.e. changes over time from positive to negative, are due to test errors, rather than loss of immunity. We use “seroprevalence” to mean the proportion of individuals with the underlying immune response, which the diagnostic tests measure with error.

Model for single serosurvey

For a diagnostic test, the sensitivity is the proportion of true positives that are correctly identified by the test, and the specificity is the proportion of true negatives that are correctly identified by the test [23]. The probability of testing positive (T+) is specified as a function of the unobserved true status (π), and the assumed values for sensitivity (Se) and specificity (Sp):

Prob(T+) = Se × π + (1 − Sp) × (1 − π)

Then, to represent a constant seroconversion rate, a binomial regression is specified with a complementary log-log link and the logarithm of age as an offset. The only other term in the model is an intercept, which is the logarithm of the force of infection [13]. As a rate, the force of infection can take non-negative values, possibly greater than 1 [24]. A vague prior distribution (Gaussian with mean zero and standard deviation 1,000) is specified for the logarithm of the force of infection. Example data are from a single serosurvey of dengue in Colombo, Sri Lanka, which used a capture enzyme-linked immunosorbent assay (ELISA) to detect immunoglobulin G (IgG) [14, 25]. Here we omit individuals aged less than six months in order to limit the influence of maternal antibodies.
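A minimal JAGS sketch of this single-survey model is given below. It is an illustration rather than the authors' S2 File: the aggregated data values, and the fixed sensitivity and specificity of 85% and 100%, are hypothetical.

```r
library(rjags)

model_string <- "
model {
  for (i in 1:N) {
    # true seroprevalence at age[i] under a constant force of infection
    cloglog(pi[i]) <- log.lambda + log(age[i])
    # probability of a positive test, allowing for imperfect sensitivity/specificity
    p[i] <- Se * pi[i] + (1 - Sp) * (1 - pi[i])
    pos[i] ~ dbin(p[i], n[i])
  }
  log.lambda ~ dnorm(0, 1.0E-6)   # vague prior: mean 0, standard deviation 1,000
  lambda <- exp(log.lambda)       # force of infection per year
}
"

# hypothetical aggregated data, with assumed (fixed) sensitivity and specificity
data_list <- list(N = 4,
                  age = c(3, 6, 9, 12),
                  n   = c(120, 110, 105, 95),
                  pos = c(35, 60, 75, 80),
                  Se = 0.85, Sp = 1.00)

jm  <- jags.model(textConnection(model_string), data = data_list, n.chains = 2)
update(jm, 1000)                                    # burn-in
sam <- coda.samples(jm, "lambda", n.iter = 50000, thin = 10)
summary(sam)
```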

Model for two consecutive serosurveys in the same individuals

For two repeated serosurveys, priors are placed on the seroprevalences, and the values of interest are related via standard identities. The baseline seroprevalence is assigned a beta distribution with both parameters equal to 1, i.e. uniform on the interval [0, 1]. The prior for the second seroprevalence is the same except that, consistent with the above assumptions, it is constrained to be at least as high as the baseline seroprevalence. For each survey, the positive and negative predictive values (PPV and NPV, respectively) are defined in terms of the seroprevalence (π) and the assumed sensitivity and specificity [26]:

PPV = Se × π / [Se × π + (1 − Sp) × (1 − π)]

NPV = Sp × (1 − π) / [Sp × (1 − π) + (1 − Se) × π]

The probability of each person being truly seropositive, Prob(D+), is then PPV if the test is positive, and 1 − NPV if the test is negative. The probabilities of testing positive or negative are functions of sensitivity and specificity, in the same way as for a single survey. Finally, the numerator of the force of infection is estimated as the increase in expected number of true positives from the first to the second survey, and the denominator is estimated as the expected person-time at risk, calculated as the sum of the individual times between the surveys, weighted by each individual’s probability of being seronegative at baseline. This is shown in the following equation, where the sums are over all individuals in both surveys, the subscripts on D indicate the first or second survey, and t is the time between surveys for each individual:

force of infection = Σ [Prob(D2+) − Prob(D1+)] / Σ [t × (1 − Prob(D1+))]

This is shown schematically, as a Directed Acyclic Graph [27], in Fig 1. Example data are from a community-based study of dengue in Medellin, Colombia, using a commercially available IgG indirect ELISA test [25] (S5 File). Residents were randomly selected, and tested in up to five surveys over time. For the current purpose, we use only the first survey, done in 2011, and the last one, done in 2014, approximately 26 months later.

Fig 1. Directed Acyclic Graph (DAG) for the model for repeated serosurveys.

The large rectangles show individuals nested within surveys. Both surveys and individuals have multiple stacked rectangles to show that there is more than one of each. The smaller rectangles represent data (results and times of tests) or model inputs (sensitivity and specificity). The other nodes are functions of the data, or of the unobserved seroprevalence, which is given a beta(1,1), i.e. uniform, prior. For individuals, Prob(T+) indicates the probability of a positive test result and Prob(D+) indicates the probability of being truly seropositive.

https://doi.org/10.1371/journal.pone.0247255.g001

In the standard binomial regression model for seroconversion across paired surveys, individuals positive at baseline are assumed to be no longer at risk, i.e. there is no allowance for measurement error in serostatus. By contrast, the current model allows seroreversion as well as seroconversion, i.e. individuals may change from seropositive to seronegative status.
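The repeated-survey model can be sketched in JAGS as follows. This is again an illustration rather than the authors' S3 File: the test results are simulated, and the sensitivity and specificity values are assumed for the example.

```r
library(rjags)

model_string <- "
model {
  pi[1] ~ dbeta(1, 1)          # baseline seroprevalence: uniform prior
  pi[2] ~ dunif(pi[1], 1)      # second seroprevalence, constrained >= baseline

  for (s in 1:2) {
    pT[s]  <- Se * pi[s] + (1 - Sp) * (1 - pi[s])                       # Prob(T+)
    PPV[s] <- Se * pi[s] / pT[s]
    NPV[s] <- Sp * (1 - pi[s]) / (Sp * (1 - pi[s]) + (1 - Se) * pi[s])
  }

  for (i in 1:N) {
    for (s in 1:2) {
      y[i, s] ~ dbern(pT[s])                                          # observed test result
      probD[i, s] <- y[i, s] * PPV[s] + (1 - y[i, s]) * (1 - NPV[s])  # Prob(D+)
    }
    num[i] <- probD[i, 2] - probD[i, 1]     # expected new true positives
    den[i] <- t[i] * (1 - probD[i, 1])      # expected person-time at risk
  }
  lambda <- sum(num[]) / sum(den[])         # force of infection
}
"

# simulated illustrative data: N people tested twice, about 26 months apart
set.seed(1)
N  <- 705
y1 <- rbinom(N, 1, 0.6)
y2 <- pmax(y1, rbinom(N, 1, 0.1))   # positives carried forward; some negatives convert
y  <- cbind(y1, y2)
t  <- rep(26 / 12, N)               # years between surveys

jm  <- jags.model(textConnection(model_string),
                  data = list(N = N, y = y, t = t, Se = 0.85, Sp = 0.95),
                  n.chains = 2)
update(jm, 2000)
sam <- coda.samples(jm, "lambda", n.iter = 5000, thin = 50)
summary(sam)
```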

Uncertainty in sensitivity and specificity

Fixed values for sensitivity and specificity can be used for the repeated surveys, as described above for a single one. However, there may be reasonable doubt as to the exact values of sensitivity and specificity, e.g. because of cross-reacting pathogens circulating to an unknown extent. This uncertainty may have been quantified by systematic reviews, although their generalizability to a given setting may be doubtful. Another way to quantify uncertainty is in terms of expert opinion, e.g. via the Delphi technique [28]. Here we follow the elicitation method of Johnson et al. [29]. For each parameter, each expert is presented with a range of values. For the current purpose, the parameters are sensitivity and specificity, each with a range of 0 to 100%, in intervals (“bins”) of 5%. Each expert is invited to i) make a point or “average” estimate of the parameter in question, then ii) indicate the upper and lower limit of their estimate, then iii) indicate their weight of belief by allocating a total of 100% over the bins, between the upper and lower limits, in units of 5%. There are thus six questions in total: three each for sensitivity and specificity. Johnson et al. used paper questionnaires and stickers for the units of 5% weight of belief. We adapted this to a spreadsheet in Microsoft Excel (S1 File). This approach could also be applied to the analysis of a single survey.

For the current study, beliefs were elicited from three dengue researchers whose published work includes results of diagnostic tests. Two of these (JKL & MC) were also investigators of the serological study in Medellin [25]. In the case of dengue, one important consideration is whether the test in question may cross-react with other flaviviruses [30], or have lower specificity in those who have been vaccinated against them [31]. The elicited distributions for sensitivity and specificity are used here to illustrate the current method and are not conclusive in terms of the performance of the test in question. Also, the belief distributions for other diagnostic tests and other settings will vary.

The beliefs of the three experts were summarized as a single distribution using linear pooling [32], and a smooth distribution between 0 and 1 was fitted to the result. Both beta and logistic-normal distribution families were used: each has two parameters, which were fitted by the method of moments [33]. The beta distribution was used for the estimation of the force of infection.
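As an illustration of this step, with hypothetical elicited weights rather than the experts' actual responses, linear pooling and the method-of-moments beta fit might be coded as follows.

```r
# rows = three experts; columns = 5% bins, here spanning 70% to 100%
weights <- rbind(c(0.00, 0.05, 0.15, 0.30, 0.35, 0.15),
                 c(0.05, 0.10, 0.20, 0.30, 0.25, 0.10),
                 c(0.00, 0.00, 0.10, 0.25, 0.40, 0.25))
bins <- seq(0.70, 0.95, by = 0.05) + 0.025   # bin midpoints

pooled <- colMeans(weights)                  # equal-weight linear pool
m <- sum(bins * pooled)                      # pooled mean
v <- sum(bins^2 * pooled) - m^2              # pooled variance

# method of moments for a beta(a, b) distribution with mean m and variance v
k <- m * (1 - m) / v - 1
a <- m * k
b <- (1 - m) * k
c(shape1 = a, shape2 = b)
```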

More broadly, some models for sensitivity and specificity are unidentifiable [34, 35], i.e. not all the parameters can be estimated independently. For the current purpose, the estimated force of infection is evidently strongly associated with the sensitivity and specificity. Although posterior likelihoods of sensitivity, specificity and force of infection could be obtained from a formally consistent Bayesian model, the identifiability of such a model would need to be demonstrated. Our interest here is in information on sensitivity and specificity as inputs, not outputs. Hence, we have not referred to the elicited beliefs for sensitivity and specificity as “priors”. Although these beliefs are used in Monte Carlo simulation, posterior likelihoods are not obtained for them. Rather, values are repeatedly sampled from the fitted beta distributions of sensitivity and specificity, then MCMC estimation is done based on those values.

Credible intervals for a parameter are estimated as quantiles of samples drawn, via MCMC, from its Bayesian posterior distribution. Confidence intervals (as opposed to credible intervals) are quoted from frequentist analyses which were carried out for comparison.

Software

We use the “rjags” package in R (version 3.6.3; The R Foundation for Statistical Computing). This package requires a separate installation of JAGS [36]. R code is provided in S2–S4 Files. For the analysis of the Colombo survey, for each value of sensitivity and specificity used, a burn-in of 1,000 iterations was run, with estimates of the force of infection based on 50,000 iterations thinned by 10 (i.e. keeping every 10th result). For the analysis of the repeated surveys in Medellin, the following was done separately for sensitivity and specificity: 2,000 draws were made from the fitted beta distribution then, for each draw, there was a burn-in of 2,000 iterations and the force of infection was estimated from 5,000 iterations thinned by 50. This was done separately for three scenarios: i) sensitivity varying while holding specificity at 100%, ii) the reverse, and iii) both parameters varying. Hence the distribution of the force of infection was estimated from 20,000 values. Assessment of convergence was done visually. For all MCMC models, the point estimate is taken to be the median of the iterative values and the 95% credible interval runs from the 2.5th to the 97.5th percentile.
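A sketch of this two-level scheme, for scenario i) with sensitivity drawn from a fitted beta distribution and specificity held at 100%, is shown below. It reuses the hypothetical repeated-survey model string, data and beta shape parameters (a, b) from the earlier sketches, and the numbers of draws and iterations are illustrative rather than prescriptive.

```r
n_draws <- 2000
lambda_draws <- vector("list", n_draws)

for (d in seq_len(n_draws)) {
  Se_d <- rbeta(1, a, b)                     # one draw of sensitivity
  jm <- jags.model(textConnection(model_string),
                   data = list(N = N, y = y, t = t, Se = Se_d, Sp = 1.0),
                   n.chains = 1, quiet = TRUE)
  update(jm, 2000)                           # burn-in for this draw
  sam <- coda.samples(jm, "lambda", n.iter = 5000, thin = 50)
  lambda_draws[[d]] <- as.numeric(sam[[1]])  # kept iterations for this draw
}

lambda_all <- unlist(lambda_draws)           # pool across all draws
quantile(lambda_all, c(0.025, 0.5, 0.975))   # point estimate and 95% credible interval
```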

Ethical considerations

The Medellin study obtained ethical approval from the Ethics Committee of the University of Antioquia (reference 11-5-362) and from the International Vaccine Institute (IVI, reference 2011-011). Data were made available without identifying information. Data from the Colombo study are taken from a table in the previously published report [14].

Results

Single serosurvey

For the dengue study in Colombo [14], Fig 2 shows the fitted proportions seropositive by age. Values of 85% sensitivity or specificity have been chosen to illustrate the method rather than on the basis of expert opinion or of comparison against a gold standard. However, they are in the range found for other dengue IgG ELISAs [37, 38]. As expected, imperfect sensitivity implies higher seroprevalence, and imperfect specificity the reverse. The consequences of other values of sensitivity and specificity are shown in Fig 3. The confidence bands reflect sampling variability in the data rather than uncertainty in the values of sensitivity and specificity. From a standard frequentist analysis with binomial regression, the estimated force of infection is 13.7% per year (95% confidence interval 12.4-15.2%). Fig 3 shows that, as expected, the results from the current model approach those from the standard analysis as sensitivity and specificity tend to 100%. For 100% sensitivity, the results from the current model are the same, and for 100% specificity, the point estimate is the same and the credible interval is 0.1 percentage points lower (12.3-15.1%).

Fig 2. Proportion seropositive for dengue by age in Colombo [14].

The data point for the six-month-old children in the published table is included, but not those for children aged less than six months, because of maternal antibodies in the younger group. The solid line is the fit from a standard analysis assuming a perfectly sensitive and specific test. The upper dashed line is from an analysis assuming 85% sensitivity and 100% specificity, and the lower dashed line from one with these values exchanged.

https://doi.org/10.1371/journal.pone.0247255.g002

Fig 3. Relation between force of infection, sensitivity and specificity in the Colombo data.

The force of infection is estimated for each value of sensitivity or specificity, considered fixed. In this figure, when sensitivity is less than 100% then specificity is assumed to be 100%, and conversely. The grey zones are the 95% credible intervals. As sensitivity and specificity approach 100%, to the right side of the plot, the credible intervals approach the 95% confidence interval from standard binomial regression (vertical dashed line).

https://doi.org/10.1371/journal.pone.0247255.g003

Two consecutive serosurveys

In Medellin, 705 people had test results available for both surveys [25]. Of these, 260 originally tested negative, of whom 31 (11.9%) were positive on the second survey, approximately 26 months later. The remaining 445 originally tested positive, and all but four of these tested positive on the second survey. A standard frequentist binomial regression analysis, which was necessarily restricted to the 260 presumed at risk, estimates the force of infection as 5.9% per year, with a 95% confidence interval from 4.0 to 8.2%.

Fig 4 shows the elicited distributions for the sensitivity and specificity of the dengue IgG ELISA used in the Medellin study, summarizing the beliefs of the three experts by linear pooling. The distribution for specificity is closer to 100% and with lower variance than that for sensitivity. The figure also shows the fitted beta and logistic-normal distributions, the former of which was used to generate the force of infection results in Fig 5.

Fig 4. Uncertainty in a) specificity and b) sensitivity for Medellin study.

https://doi.org/10.1371/journal.pone.0247255.g004

Fig 5. Posterior distributions of force of infection under a) varying specificity and b) varying sensitivity.

https://doi.org/10.1371/journal.pone.0247255.g005

Results from the force of infection model are shown in Fig 5. As expected from Fig 4, the distribution of the force of infection taking into account uncertainty in specificity (Fig 5a) shows less variation than that for sensitivity (Fig 5b). For the former, the point estimate of the force of infection is 4.8% per year, with a 95% credible interval from 3.7 to 6.2%. For varying sensitivity, the point estimate is much higher, at 13.4% per year, and the credible interval much wider: 5.5% to 43.1%. With sensitivity and specificity both varying, the results are qualitatively similar to Fig 5b (S1 Fig): the point estimate is 13.3% per year and the credible interval runs from 4.6% to 43.0%.

Discussion

The current method is applicable to infections which induce a lasting immune response, which includes many viral pathogens, such as rubella and measles [18], hepatitis B and C, and HIV [39], as well as dengue. Not all assays are suitable for serological surveys. For example, the World Health Organization discourages the use of rapid tests in such studies of dengue [5], and the utility of serological assays for SARS-CoV-2 is currently being debated [12, 40]. Statistical methods can help quantify the degree of uncertainty that would arise from the use of any given test. Previous studies have simultaneously estimated test sensitivity and force of infection for single or repeated surveys [16–18], and estimated the force of infection subject to fixed values for sensitivity and specificity in a single survey [20]. Here we present more general methods for estimating the force of infection, taking into account imperfect sensitivity and/or specificity, and uncertainty in these parameters, for either single or repeated surveys. Should well-established and generalizable values of sensitivity and specificity be available, they can be used in the methods described here. However, this is not always the case. For example, for dengue, there may be cross-reaction with other flaviviruses [30], whose occurrence varies geographically.

The single serosurvey model, applied to the dengue study in Colombo [14], showed how the estimated force of infection depends on the assumed sensitivity and specificity. When perfect sensitivity and specificity are assumed, the results are effectively identical to those from the standard binomial regression. For the example of repeated serosurveys in Medellin [25], the elicited expert belief for the specificity was relatively precise, resulting in a fairly precise estimate of the force of infection (95% credible interval 3.7 to 6.2% per year). The belief for sensitivity was less precise and resulted in an interval estimate that was so wide (5.5 to 43.1%) as to potentially lack utility. The results from these two studies illustrate the method, but the force of infection values should not be taken as authoritative for the study settings.

We have opted for estimation in a Bayesian framework by MCMC [22]. The model for a single serosurvey is similar to that of Lewis & Torgerson for prevalence [21], and may be soluble by direct application of maximum likelihood, hence avoiding the need for iterative sampling. The identifiability of some Bayesian models for the estimation of prevalence is affected by the choice of priors for sensitivity, specificity and other parameters: unsuitable priors can then give rise to erroneous conclusions [34]. Although it may be possible to ‘learn’ about both a) sensitivity and specificity and b) the force of infection, here we have avoided identifiability concerns via Monte Carlo simulation of uncertainty in sensitivity and specificity. In effect, the elicited distribution is both the prior and posterior distribution. This approach was shown for the model for repeated surveys but could equally be applied to the one for a single survey. It was illustrated by eliciting beliefs about sensitivity and specificity from three experts: to reach substantive conclusions on dengue, a wider and more systematic exercise would be required [29]. Estimates from systematic reviews could be used instead of expert opinion if they were generalizable to a given study area.

Future work could seek models with Bayesian priors for sensitivity and specificity, while still correctly estimating the force of infection. In the meantime, the use of Monte Carlo in an outer loop, with MCMC estimation each time, makes the analysis relatively time-consuming. Also, a reformulation would be required to allow the inclusion of covariates. The method is shown for two surveys; studies with more than two could be included, with each survey constrained to have a seroprevalence no lower than the previous one. Another limitation is the assumption that each individual has long-lasting immunity, so that apparent seroreversions are due to test errors rather than waning immunity. The validity of this assumption will depend on the infection in question, and possibly on factors such as the time between surveys and the age and immunocompetence of the participants.

In conclusion, the methods presented here can provide more realistic estimates of the force of infection, and can help inform the choice of serological tests for future serosurveys.

Supporting information

S1 File. Excel file for use in eliciting beliefs.

https://doi.org/10.1371/journal.pone.0247255.s001

(XLSX)

S2 File. R code for analysis of a single survey.

This uses data previously published from Colombo [14].

https://doi.org/10.1371/journal.pone.0247255.s002

(R)

S3 File. R code for analysis of repeated surveys.

This uses data from the study in Medellin [25] which are included in S5 File.

https://doi.org/10.1371/journal.pone.0247255.s003

(R)

S4 File. R utility functions used by the code in S2 and S3 Files.

https://doi.org/10.1371/journal.pone.0247255.s004

(R)

S5 File. CSV file with paired serological status data from the first and last surveys of the Medellin study.

The file has one row per person. The first column, “test0”, is the ELISA result at the first survey (coded 0 for negative, 1 for positive), and the second, “test1”, is the result at the last survey.

https://doi.org/10.1371/journal.pone.0247255.s005

(CSV)

S1 Fig. Estimation of the force of infection in the Medellin study [25], with uncertainty in both sensitivity and specificity.

https://doi.org/10.1371/journal.pone.0247255.s006

(TIF)

Acknowledgments

We are grateful to Oliver Brady, Philippe Mayaud and Paul Mee for comments on the manuscript.

References

  1. Sundaresan TK, Assaad FA. The use of simple epidemiological models in the evaluation of disease control programmes: a case study of trachoma. Bull World Health Organ. 1973;48:709–714. pmid:4544781
  2. Remme J, Ba O, Dadzie KY, Karam M. A force-of-infection model for onchocerciasis and its applications in the epidemiological evaluation of the Onchocerciasis Control Programme in the Volta River basin area. Bull World Health Organ. 1986;64(5):667–681. pmid:3492300
  3. Grenfell BT, Anderson RM. Pertussis in England and Wales: an investigation of transmission dynamics and control by mass vaccination. Proc R Soc Lond B Biol Sci. 1989;236(1284):213–52. pmid:2567004
  4. Hens N, Aerts M, Faes C, Shkedy Z, Lejeune O, Van Damme P, et al. Seventy-five years of estimating the force of infection from current status data. Epidemiol Infect. 2010;138(6):802–12. pmid:19765352
  5. World Health Organization. Informing Vaccination Programs: A Guide to the Design and Conduct of Dengue Serosurveys. Geneva: World Health Organization; 2017.
  6. Evans AS, Kaslow RA. Viral Infections of Humans: Epidemiology and Control. 4th ed. New York and London: Plenum Medical Book Company; 1997.
  7. Osborne K, Gay N, Hesketh L, Morgan-Capner P, Miller E. Ten years of serological surveillance in England and Wales: methods, results, implications and action. Int J Epidemiol. 2000;29(2):362–8. pmid:10817137
  8. Hobson-Peters J. Approaches for the development of rapid serological assays for surveillance and diagnosis of infections caused by zoonotic flaviviruses of the Japanese encephalitis virus serocomplex. J Biomed Biotechnol. 2012:379738. pmid:22570528
  9. Laurie KL, Huston P, Riley S, Katz JM, Willison DJ, Tam JS, et al. Influenza serological studies to inform public health action: best practices to optimise timing, quality and reporting. Influenza Other Respir Viruses. 2013;7(2):211–24. pmid:22548725
  10. Elliott SR, Fowkes FJ, Richards JS, Reiling L, Drew DR, Beeson JG. Research priorities for the development and implementation of serological tools for malaria surveillance. F1000Prime Rep. 2014;6:100. pmid:25580254
  11. Raafat N, Blacksell SD, Maude RJ. A review of dengue diagnostics and implications for surveillance and control. Trans R Soc Trop Med Hyg. 2019;113(11):653–660. pmid:31365115
  12. Winter AK, Hegde ST. The important role of serology for COVID-19 control. Lancet Infect Dis. 2020;20:758–759. pmid:32330441
  13. Collett D. Modelling Binary Data. London: Chapman and Hall; 1991.
  14. Tam CC, Tissera H, de Silva AM, De Silva AD, Margolis HS, Amarasinge A. Estimates of dengue force of infection in children in Colombo, Sri Lanka. PLoS Negl Trop Dis. 2013;7(6):e2259. pmid:23755315
  15. Kafatos G, Andrews N, McConway KJ, Anastassopoulou C, Barbara C, De Ory F, et al. Estimating seroprevalence of vaccine-preventable infections: is it worth standardizing the serological outcomes to adjust for different assays and laboratories? Epidemiol Infect. 2015;143(11):2269–78. pmid:25420586
  16. Trotter CL, Gay NJ. Analysis of longitudinal bacterial carriage studies accounting for sensitivity of swabbing: an application to Neisseria meningitidis. Epidemiol Infect. 2003;130(2):201–5. pmid:12729188
  17. Alleman MM, Wannemuehler KA, Hao L, Perelygina L, Icenogle JP, Vynnycky E, et al. Estimating the burden of rubella virus infection and congenital rubella syndrome through a rubella immunity assessment among pregnant women in the Democratic Republic of the Congo: Potential impact on vaccination policy. Vaccine. 2016;34(51):6502–6511. pmid:27866768
  18. Hachiya M, Miyano S, Mori Y, Vynnycky E, Keungsaneth P, Vongphrachanh P, et al. Evaluation of nationwide supplementary immunization in Lao People’s Democratic Republic: Population-based seroprevalence survey of anti-measles and anti-rubella IgG in children and adults, mathematical modelling and a stability testing of the vaccine. PLoS One. 2018;13(3):e0194931. pmid:29596472
  19. Tan LK, Low SL, Sun H, Shi Y, Liu L, Lam S, et al. Force-of-infection and true infection rate of dengue in Singapore—its implication on dengue control and management. Am J Epidemiol. 2019;188(8):1529–1538. pmid:31062837
  20. Olive MM, Grosbois V, Tran A, Nomenjanahary LA, Rakotoarinoro M, Andriamandimby SF, et al. Reconstruction of Rift Valley fever transmission dynamics in Madagascar: estimation of force of infection from seroprevalence surveys using Bayesian modelling. Sci Rep. 2017;7:39870. pmid:28051125
  21. Lewis FI, Torgerson PR. A tutorial in estimating the prevalence of disease in humans and animals in the absence of a gold standard diagnostic. Emerg Themes Epidemiol. 2012;9(1):9. pmid:23270542
  22. Gilks WR, Richardson S, Spiegelhalter DJ. Markov Chain Monte Carlo in Practice. London: Chapman and Hall; 1996.
  23. Altman DG, Bland JM. Diagnostic tests 1: Sensitivity and specificity. BMJ. 1994;308(6943):1552. pmid:8019315
  24. Vandenbroucke JP, Pearce N. Incidence rates in dynamic populations. Int J Epidemiol. 2012;41(5):1472–9. pmid:23045207
  25. Carabali M, Lim JK, Velez DC, Trujillo A, Egurrola J, Lee KS, et al. Dengue virus serological prevalence and seroconversion rates in children and adults in Medellin, Colombia: implications for vaccine introduction. Int J Infect Dis. 2017;58:27–36. pmid:28284914
  26. Altman DG, Bland JM. Diagnostic tests 2: Predictive values. BMJ. 1994;309(6947):102. pmid:8038641
  27. Lesaffre E, Lawson AB. Bayesian Biostatistics. Chichester: John Wiley & Sons; 2012.
  28. McMillan SS, King M, Tully MP. How to use the nominal group and Delphi techniques. Int J Clin Pharm. 2016;38(3):655–62. pmid:26846316
  29. Johnson SR, Tomlinson GA, Hawker GA, Granton JT, Grosbein HA, Feldman BM. A valid and reliable belief elicitation method for Bayesian priors. J Clin Epidemiol. 2010;63(4):370–83. pmid:19926253
  30. Allwinn R, Doerr HW, Emmerich P, Schmitz H, Preiser W. Cross-reactivity in flavivirus serology: new implications of an old finding? Med Microbiol Immunol. 2002;190(4):199–202. pmid:12005333
  31. Schwartz E, Mileguir F, Grossman Z, Mendelson E. Evaluation of ELISA-based sero-diagnosis of dengue fever in travelers. J Clin Virol. 2000;19(3):169–73. pmid:11090753
  32. McConway KJ. Marginalization and linear opinion pools. J Am Stat Assoc. 1981;76(374):410–414.
  33. Brown D, Rothery P. Models in Biology: Mathematics, Statistics and Computing. Chichester: John Wiley & Sons; 1993.
  34. McDonald JL, Hodgson DJ. Prior precision, prior accuracy, and the estimation of disease prevalence using imperfect diagnostic tests. Front Vet Sci. 2018;5:83. pmid:29868615
  35. Gelman A, Carpenter B. Bayesian analysis of tests with unknown specificity and sensitivity. Applied Statistics. 2020;69(5):1269–1283.
  36. Plummer M. JAGS: A program for analysis of Bayesian graphical models using Gibbs sampling. In: Proceedings of the 3rd International Workshop on Distributed Statistical Computing. vol. 124. Vienna, Austria; 2003. p. 1–10.
  37. Pal S, Dauner AL, Valks A, Forshey BM, Long KC, Thaisomboonsuk B, et al. Multicountry prospective clinical evaluation of two enzyme-linked immunosorbent assays and two rapid diagnostic tests for diagnosing dengue fever. J Clin Microbiol. 2015;53(4):1092–102. pmid:25588659
  38. Blacksell SD, Jarman RG, Gibbons RV, Tanganuchitcharnchai A, Mammen MP, Nisalak A, et al. Comparison of seven commercial antigen and antibody enzyme-linked immunosorbent assays for detection of acute dengue infection. Clin Vaccine Immunol. 2012;19(5):804–10. pmid:22441389
  39. Sutton AJ, Hope VD, Mathei C, Mravcik V, Sebakova H, Vallejo F, et al. A comparison between the force of infection estimates for blood-borne viruses in injecting drug user populations across the European Union: a modelling study. J Viral Hepat. 2008;15(11):809–816. pmid:18761605
  40. Theel ES, Slev P, Wheeler S, Couturier MR, Wong SJ, Kadkhoda K. The role of antibody testing for SARS-CoV-2: is there one? J Clin Microbiol. 2020.