
A natural history model for planning prostate cancer testing: Calibration and validation using Swedish registry data

  • Andreas Karlsson ,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Software, Validation, Visualization, Writing – original draft, Writing – review & editing

    andreas.a.karlsson@ki.se

    Affiliation Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden

  • Alexandra Jauhiainen,

    Roles Conceptualization, Methodology, Project administration, Software, Supervision, Writing – original draft, Writing – review & editing

    Affiliation AstraZeneca, Gothenburg, Sweden

  • Roman Gulati,

    Roles Conceptualization, Methodology, Software, Validation, Writing – review & editing

    Affiliation Division of Public Health Sciences, Fred Hutchinson Cancer Research Center, Seattle, United States of America

  • Martin Eklund,

    Roles Methodology, Writing – review & editing

    Affiliation Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden

  • Henrik Grönberg,

    Roles Resources, Writing – review & editing

    Affiliations Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden, Department of Urology, Karolinska University Hospital Solna, Stockholm, Sweden

  • Ruth Etzioni,

    Roles Conceptualization, Methodology, Software, Validation, Writing – review & editing

    Affiliations Division of Public Health Sciences, Fred Hutchinson Cancer Research Center, Seattle, United States of America, Department of Health Services, University of Washington, Seattle, United States of America, Department of Biostatistics, University of Washington, Seattle, United States of America

  • Mark Clements

    Roles Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Software, Supervision, Validation, Writing – original draft, Writing – review & editing

    Affiliation Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden

Abstract

Recent prostate cancer screening trials have given conflicting results and it is unclear how to reduce prostate cancer mortality while minimising overdiagnosis and overtreatment. Prostate cancer testing is a partially observable process, and planning for testing requires either extrapolation from randomised controlled trials or, more flexibly, modelling of the cancer natural history. An existing US prostate cancer natural history model (Gulati et al, Biostatistics 2010;11:707-719) did not model for differences in survival between Gleason 6 and 7 cancers and predicted too few Gleason 7 cancers for contemporary Sweden. We re-implemented and re-calibrated the US model to Sweden. We extended the model to more finely describe the disease states, their time to biopsy-detectable cancer and prostate cancer survival. We first calibrated the model to the incidence rate ratio observed in the European Randomised Study of Screening for Prostate Cancer (ERSPC) together with age-specific cancer staging observed in the Stockholm PSA (prostate-specific antigen) and Biopsy Register; we then calibrated age-specific survival by disease states under contemporary testing and treatment using the Swedish National Prostate Cancer Register. After calibration, we were able to closely match observed prostate cancer incidence trends in Sweden. Assuming that patients detected at an earlier stage by screening receive a commensurate survival improvement, we find that the calibrated model replicates the observed mortality reduction in a simulation of ERSPC. Using the resulting model, we predicted incidence and mortality following the introduction of regular testing. Compared with a model of the current testing pattern, organised 8 yearly testing for men aged 55–69 years was predicted to reduce prostate cancer incidence by 14% and increase prostate cancer mortality by 2%. The model is open source and suitable for planning for effective prostate cancer screening into the future.

Introduction

Cancer screening policies must balance the benefits and potential harms based on uncertain and incomplete evidence. It is difficult to infer causation from observational data, and even large randomised screening studies provide limited evidence. Simulations using natural history models can provide further insights. Such natural history models describe the course of the disease from onset to progression through to death. Calibration of such models against observed disease incidence patterns with and without screening can be used to improve our understanding of the mechanisms for disease progression and cancer screening interventions. Simulations of the natural history of disease can be used to bring together evidence from specific randomised controlled trials with data from other sources and to generalise the results from specific population structures and disease prevalence [1, 2]. Finally, these simulations can also be used as a basis for cost-effectiveness analysis in order to make informed decisions on cancer screening interventions [3–5].

Our application relates to prostate cancer. The prostate is a male reproductive organ that, together with other glands, is responsible for the production of semen. As men age, they are more likely to suffer from prostate enlargement or prostate cancer. In Sweden, prostate cancer accounts for a third of male cancer diagnoses and a fifth of male cancer deaths [6].

Evidence from ERSPC suggests that PSA testing can reduce prostate cancer mortality by approximately 20% over 13 years [7]. There are two other large randomised studies of prostate cancer screening: the Prostate, Lung, Colorectal and Ovarian Cancer Screening Trial did not find any significant reduction in prostate cancer mortality when the control arm included high levels of opportunistic PSA testing [8], and the recent Cluster Randomized Trial of PSA Testing for Prostate Cancer, in which 40% of men aged 50–69 years in the screening arm attended the clinical visit for a single PSA screen, reported an estimated mortality reduction of 4% at ten years [9]. Although PSA testing is common in many western countries, testing is not systematically organised, and the balance of harms versus benefits of PSA testing is uncertain [4].

PSA testing in Sweden continues to be common and new prostate cancer tests are becoming available. To assess whether organised prostate cancer testing would be beneficial, we sought to develop a well-validated prostate cancer simulation model for Sweden.

There are few existing models for the natural history of prostate cancer. However, in order to make full use of the detailed longitudinal Swedish registers and allow the natural history model to represent the mortality rate ratio observed in the ERSPC [7], we adapted and extended an existing model [10–12] with a more detailed natural history. This is important for modelling risk-stratified prostate cancer testing in combination with new screening tests.

Our objectives are to describe a contemporary, validated prostate cancer screening model and to apply that model to predict key screening outcomes under different PSA screening scenarios for Sweden. The longer-term goal is to use this simulation model to plan for better prostate cancer testing and screening.

Results

Model overview

We adapted a prostate cancer screening model from the Fred Hutchinson Cancer Research Center (FHCRC) [10–12]. The FHCRC model simulated individual life histories, coupling PSA trajectories with disease onset and progression from localised to advanced disease by Gleason score (low-moderate versus high grade). The FHCRC model used inputs from the Prostate Cancer Prevention Trial (PCPT) [13] and US PSA test patterns [14], and was calibrated to US data before and after the introduction of PSA testing, and validated against (a) the prostate cancer mortality RR from the ERSPC screening trial and (b) US prostate cancer incidence. Starting with the model from [12], we re-implemented and extended the model to include additional states by T-stage/M-stage and Gleason scoring as shown in Fig 1 (more recent developments of the FHCRC model are described in the Discussion). We also used more detailed inputs for calibration and validation. Data sources for the model inputs, calibration and validation included: the National Patient Register (including data on cancer treatment), the National Prostate Cancer Register (cancer incidence by Gleason score, T-stage and M-stage), the Total Population Register (defining the at-risk population), the Cause of Death Register, the Stockholm PSA and Biopsy Register (SPBR), and the PCBaSe research database for prostate cancer survival. Details are provided in the Materials and Methods.

Fig 1. Schematic of the prostate cancer natural history model reflecting disease onset, progression and survival in the absence of screening.

Individuals are assumed to be healthy at age 35 years; they may progress to preclinical cancer states with fixed Gleason score, but with progression by T-stage and to metastatic cancer; preclinical cancers may be clinically diagnosed from nine different states, with survival from prostate cancer death modelled from the time of clinical diagnosis; death due to other causes is represented as a competing event.

https://doi.org/10.1371/journal.pone.0211918.g001

Current PSA testing.

The Stockholm and Swedish calibration targets were from a PSA tested population. We therefore modelled PSA testing in this population using data from the SPBR (details are provided in PSA testing sub-model). The initial PSA test uptake was modelled with a log-logistic distribution by age and calendar period. The PSA re-testing was modelled by age groups and PSA values. The resulting PSA testing rates by age and calendar period are shown in Fig 2. The probability of having a biopsy following a positive PSA test (i.e. biopsy compliance) was modelled by age and PSA value using the SPBR (see Table B in S1 Appendix).

Fig 2. Modelled current PSA testing rates per person-year for ages 40-80 years and the calendar period 1995-2014 for men without an existing prostate cancer diagnosis.

The white contour lines indicate the rates 0.1, 0.2 and 0.3. The modelled values are based on data from the Stockholm PSA and Biopsy Register [15].

https://doi.org/10.1371/journal.pone.0211918.g002

Model calibration

Model calibration was undertaken in two steps. First, we calibrated for the relative distributions of incident cancers from contemporary Sweden and the screening effect on incidence from the ERSPC. Second, we calibrated to survival from contemporary Sweden.

Calibrating to the relative distributions of cancer staging.

The observed relative distributions of incident cancer stages at diagnosis were used as calibration targets for modelling several prostate cancer natural history parameters (Tables 1 and 2). Importantly, we modelled for transitions between T-stages and fitted the relative distributions for the Gleason scores and cancer T and M stages by age groups; see Fig 3. We included different T-stages to support more detailed modelling of treatment and survival. The use of relative proportions allows for the absolute incidence rates to be used for validation. The calibration used a reconstruction of a contemporary Swedish population with data on PSA test uptake, health state proportions at diagnosis, and survival from a screened population. A total of 4392 diagnoses at ages 50–74 years from a three-year interval (2011–2013) were used as calibration targets.

Table 1. Model inputs and targets used to adapt the model to the Swedish context.

https://doi.org/10.1371/journal.pone.0211918.t001

Table 2. Estimated parameters from calibration procedure.

https://doi.org/10.1371/journal.pone.0211918.t002

Fig 3. Fitted Gleason, T-stage and metastatic proportions of cases in 2011-2013 by age group, compared with those observed in the SPBR register.

https://doi.org/10.1371/journal.pone.0211918.g003

The calibration to cancer staging only included two parameters. As a consequence, the model predictions do not accurately reflect all of the variation by age, T-stage and Gleason score (Fig 3). In particular, the predicted proportions are less accurate at younger ages for T-stages 1–2.

Calibration of screening effects on incidence.

In addition to the calibration of the screened Swedish population, we also simulated for the screened and unscreened arms from the ERSPC to replicate results from the 13 years of follow-up [7]; for details on the reconstruction of the ERSPC trial, see Materials and methods. The ERSPC rate ratio of prostate cancer incidence (1.57, 95% confidence interval (CI) 1.51–1.62) was used in the calibration, while the ERSPC mortality RR of prostate cancer was used for validation. To calibrate to the ERSPC incidence rate ratio and to model indirectly for tumour size, we introduced a parameter for the proportion of the time from onset that a T1–T2 cancer would not be biopsy detectable [16]; we estimated that the T1–T2 cancers would on average be undetectable at biopsy for 53% of the time before they progressed to T3–T4 cancers.

Calibrating to survival from diagnosis.

To calibrate prostate cancer survival, we compared simulated survival from a contemporary Swedish population, including longitudinal screening and treatment patterns, with observed 10- and 15-year survival from the PCBaSe database [17, 18]. The PCBaSe database contained 93014 men from the Swedish National Prostate Cancer Register diagnosed in the period 1998–2014 linked with the health and population registers. Prostate cancer survival was stratified by M stage, Gleason score (≤ 6, 7, ≥ 8), PSA (< 10, ≥ 10) and ten-year age groups. Predictions from the calibrated model are displayed as Kaplan-Meier survival curves and the observed 10- and 15-year survival are displayed as point estimates with 95% CIs in Fig 4. The calibrated model has a clear separation in survival for men diagnosed with either Gleason 6 or Gleason 7 cancers; see Fig B in the S1 Appendix for a comparison with the FHCRC model, where survival tends to be similar for Gleason 6 and 7 cancers. For the FHCRC model, systematic differences in survival for Gleason 6 and 7 cancers, which are indirectly driven by differences in PSA growth rates and treatment assignment by Gleason score, are modest and appear to be swamped by Monte Carlo variation in our simulations.

Fig 4. Simulated survival from the calibrated model displayed as Kaplan-Meier survival curves together with the observed 10- and 15-year survival from PCBaSe displayed as point estimates with 95% confidence intervals.

Survival is stratified by age at diagnosis, PSA at diagnosis, Gleason score and cancer extent.

https://doi.org/10.1371/journal.pone.0211918.g004

Model validation

Population prostate cancer incidence.

In Fig 5, we compared the age-standardised prostate cancer incidence rates from the simulation with that of Sweden during 1985–2016 which included the introduction of PSA testing. There is evidence for a good fit although the rapid increase in incidence following the introduction of PSA testing was not fully captured. This over-smoothing is possibly due to the PSA uptake sub-model having few degrees of freedom. Importantly, this is a validation and we did not calibrate for prostate cancer incidence rates.

Fig 5. Age-standardised prostate cancer incidence rates from the simulation model compared with those observed in Sweden.

https://doi.org/10.1371/journal.pone.0211918.g005

Screening effect on mortality.

In contrast to the incidence increase from ERSPC, which was used for calibration, the mortality decrease was used for validation. Using simulations of the screened and unscreened arms in ERSPC (see Materials and methods), we estimated the 13-year mortality rate ratio using Poisson regression. The ERSPC reported a mortality RR of 0.79 (95% CI 0.61–0.88), whereas our calibrated model predicted a RR of 0.784 (95% Monte Carlo interval (MCI) 0.781–0.786).

Additional validations.

We provide further validations in section C in S1 Appendix. Fig D in S1 Appendix shows that the model predicts prostate cancer incidence in Sweden and Stockholm by age for the years preceding the introduction of PSA testing. The age-specific model predictions are less accurate for the years preceding 2016, which suggests that the PSA sub-model does not capture the dynamic changes in earlier PSA testing (Fig E, S1 Appendix).

The model was not calibrated for prostate cancer mortality. For the validation by calendar period, see Fig F in S1 Appendix. The predicted shape of the temporal trends is similar to the observed rates, although the levels of the predicted mortality rates are lower than those observed. Detailed prostate cancer mortality rates by age and calendar period are shown in Fig G in S1 Appendix. The predicted rates are similar to the observed rates. Finally, we show all-cause mortality rates by age and calendar period (Fig H, S1 Appendix). The model predictions closely follow the observed patterns in Stockholm and Sweden.

Model predictions

When planning for prostate cancer testing policies, the following measures were considered to represent the burden of disease: prostate cancer incidence rate; prostate cancer overdiagnosis rate, where overdiagnosis is defined as the lifetime risk of having a prostate cancer diagnosis that would never have been clinically detected prior to death due to another cause; prostate cancer mortality rate; and life expectancy. We predicted these measures for a policy that replaces the current testing pattern (see Fig 2) with regular prostate cancer testing during ages 55–69 years: regular testing was introduced from 2015 at age 55 years for those born in 1960 and in later birth cohorts. After 15 years, regular testing had replaced current testing for ages 55–69 years. For each policy 100 million life histories were simulated. We assumed that preferences for PSA testing did not change with the introduction of regular testing, such that only men who would undertake testing under current testing would participate in regular testing [19, 20]. Our modelling of organised screening specifically addresses the effect of screening intensity for the targeted age groups.

In Fig 6 we predicted prostate cancer incidence and overdiagnosis rate ratios for 20 years of 2-yearly testing, 8-yearly testing and the complete cessation of asymptomatic testing in comparison with the current testing pattern. The 2-yearly testing scenario resulted in a small reduction, RR 0.95 (95% MCI 0.95–0.95), in prostate cancer incidence and a larger decrease in prostate cancer overdiagnosis, RR 0.74 (95% MCI 0.73–0.74), over 20 years compared with the current testing pattern. The less intensive 8-yearly testing scenario substantially reduced the prostate cancer incidence, RR 0.86 (95% MCI 0.86–0.86), and the reduction of prostate cancer overdiagnosis was even larger, RR 0.53 (95% MCI 0.53–0.53), compared with the current testing pattern. The hypothetical cessation of all PSA testing for asymptomatic men in 2015 would result in a substantial decrease, RR 0.72 (95% MCI 0.72–0.73), of prostate cancer incidence compared with the current testing rates over 20 years as well as no overdiagnosis of asymptomatic men.

Fig 6. Predicting incidence and overdiagnosis rate ratios for 2-yearly and 8-yearly screening between 55 and 69 years of age and the cessation of asymptomatic testing compared with current testing uptake.

The changes in testing policy were introduced in 2015 for a population reflecting the Swedish age-structure.

https://doi.org/10.1371/journal.pone.0211918.g006

The purpose of early detection for prostate cancer is to lower prostate cancer mortality and increase life expectancy. To assess these effects, we predicted mortality rates and life-years gained for the different PSA testing policies (Fig 7). Both the mortality rates and the life-years gained were expressed relative to the current PSA testing pattern. We predict that the broad introduction of regular testing across the 1946–1965 birth cohorts contributed to an initial mortality reduction, which wore off towards the end of the 20-year period, after which mortality increased, particularly for the 8-yearly testing. The relative effect on mortality is considerably smaller than the effect on incidence: the 2-yearly testing pattern had a similar mortality, RR 0.99 (95% MCI 0.99–1.00), while the 8-yearly testing pattern slightly increased mortality, RR 1.02 (95% MCI 1.01–1.02), compared with the current uptake pattern over the predicted 20 years. Similarly, the 2-yearly testing pattern slightly increased life expectancy, 0.01 (95% MCI 0.01–0.02) life-years gained per 1,000 persons, and the 8-yearly testing pattern did not noticeably affect life expectancy, -0.01 (95% MCI -0.01 to -0.00) life-years gained per 1,000 persons, compared with the current uptake pattern over the predicted 20 years. The hypothetical scenario of cessation of PSA testing for asymptomatic men in 2015 was predicted to significantly increase prostate cancer mortality over 20 years, RR 1.07 (95% MCI 1.07–1.08), and to reduce life expectancy by 0.06 (95% MCI 0.04–0.08) life-years per 1,000 persons. The effect is smaller than the 20% prostate cancer mortality reduction observed in the ERSPC study because the current PSA uptake pattern is less intensive than that in the ERSPC and because of the lower biopsy compliance observed in Sweden (see Table B in S1 Appendix).

Fig 7. Predicting mortality RRs and life-years gained for 2-yearly and 8-yearly screening between 55 and 69 years of age and the cessation of asymptomatic PSA testing compared with current testing patterns.

The shifts in testing policy were introduced in 2015 for a population reflecting the Swedish age structure.

https://doi.org/10.1371/journal.pone.0211918.g007

The modest mortality reductions are potentially explained by relatively high levels of testing under the current PSA testing, and the use of the currently observed biopsy compliance for all predicted scenarios. These reductions are also comparable to the non-significant mortality reduction found in the Prostate, Lung, Colorectal and Ovarian (PLCO) cancer screening trial [8], where there were high levels of PSA testing in the control arm [21].

Discussion

Our aim was to develop, calibrate and validate a prostate cancer natural history model that could be used to evaluate prostate cancer testing. Using extensive Swedish data resources, we extended an older US-calibrated prostate cancer natural history model for the Swedish population and validated the new model. We then used the revised model to predict longer-term patterns of prostate cancer incidence and mortality in Sweden.

One of the challenges with natural history models is finding a model that is biologically meaningful and representative, whilst being mathematically simple and potentially estimable. Another challenge is that many of the parameters of a cancer’s natural history are either not observable, such as the initial onset of disease, or are only partially observed at specific time points, such as the size of a tumour at the time of diagnosis.

Investigators are divided in how to resolve these challenges. One school uses very simple models with expert judgement for the effectiveness of interventions. The validity of the predictions depends on the accuracy of the experts. A second school uses Markov models fitted to evidence from randomised controlled trials (RCTs) to assess the effectiveness of specific interventions within the follow-up from the RCTs. The validity is limited by the available RCT evidence, with strong limitations for predicting outside of the observed data. A third school uses more detailed natural history models and simulates individual life histories. The validity of the predictions primarily reflects the validity of the natural history model. We are firmly in the last of these three schools. We have previously modelled cancer screening using both simple and more complex Markov models, and found issues with validity for the simple models and issues with model complexity for scaling more detailed Markov models to combinations of natural history and test states by time in state [22].

One potential criticism of many microsimulation models for cancer screening is that their complexity is coupled with a lack of published model detail and that the source code is usually closed. The US-funded CISNET collaboration has provided detailed model documentation [23] and some models (e.g. FHCRC) are available on request. We addressed this criticism by making all of our code open source and easily available (https://github.com/mclements/microsimulation and https://github.com/mclements/prostata). We encourage other microsimulation modellers to make their code openly available, which will lower the entry requirements for other investigators. If the cost of entry remains high, then closed-source consulting models will continue to predominate.

There are several potential limitations. First, the revised natural history model was less accurate for modelling age-dependent incidence (e.g. the incidence was underestimated at younger ages and overestimated at older ages; see Fig E in S1 Appendix). We suggest caution when interpreting incidence for Nordic populations for several reasons: the higher Nordic rates may lead to greater absolute declines in rates, leading to more effective screening; and the point estimate for the mortality reduction due to screening was higher in the Göteborg site, although there was no statistically significant heterogeneity between the ERSPC sites [7] (p = 0.4). More accurate modelling at older ages would require a more detailed natural history model. Second, it is difficult to assess whether the natural history model is causal and accurate: the disease process is only partially observed and the biology is represented using a simple mathematical model. Third, the predicted age-standardised mortality rates were slightly lower than those observed in the Swedish population. This underestimation could be due to, for example, changes in Gleason grading: the Gleason and T-stage distribution at diagnosis was based on data from patients diagnosed in 2011–2013, whereas the survival by stage was based on patients diagnosed in 1998–2014.

Finally, as individuals were followed for up to 15 years, the survival calibration was influenced by earlier Gleason grading practices, possibly leading to an overestimation of the risks for Gleason ≤ 6 cancers. Nonetheless, we expect that our predictions will have strong internal validity, as the simulations allow for carefully controlled experimental conditions.

Strengths of our approach include the wealth of detailed longitudinal data available from Sweden, and that we have made the model open source. Our natural history model can support an evidence-based approach to assessing whether the introduction of organised re-testing or screening would be effective and cost-effective. The Stockholm Prostata model was branched from the FHCRC prostate cancer model in 2013. Since that time, both the Stockholm Prostata model and the FHCRC model have incorporated a number of similar extensions, including T-stage development and more detailed modelling of Gleason grading [5]. A key difference is that the updated FHCRC model includes a two-parameter model for cancer onset [24]. We are currently investigating whether to incorporate these extensions into the Stockholm Prostata model. A key advantage of the Stockholm Prostata model is the availability of detailed longitudinal data on PSA values and prostate biopsies linked with clinical outcomes. The US CISNET prostate cancer models have historically relied heavily on unlinked, cross-sectional SEER data. In contrast, the Swedish registry data have high coverage and are well suited for modelling disease progression and treatment pathways within men. Our choice of modelling approach included model calibration for some key parameters in both unscreened and screened populations (Table 1). To assess whether the adapted model was valid for Sweden, we compared the model predictions with observed population incidence. This approach demonstrates both the strengths and potential weaknesses of our model.

Our model is now well suited to the health economic evaluation of new prostate cancer screening tests. In particular, we have distinguished Gleason ≤ 6 cancers, which typically have a very good prognosis, from Gleason 7 and Gleason ≥ 8 cancers, where the last category has a particularly poor prognosis. The new prostate cancer tests have focused on maintaining sensitivity for more aggressive prostate cancers, such as Gleason 7 or higher, with reductions in the incidence of small Gleason ≤ 6 cancers and negative biopsies. Following a reviewer's suggestion, we will further investigate how to model for preferences for prostate cancer testing (see also [19]). Those preferences are expected to affect the effectiveness of new prostate cancer screening interventions in populations with established PSA testing patterns.

From the section on Model predictions, we found evidence to suggest that organised screening would reduce overdiagnosis without increasing mortality compared to current screening practices. Future work is needed to investigate refined screening strategies and the evaluation of cost-effectiveness.

Materials and methods

In this section, we will describe the various data sources used to develop the model, explain the model formulation, outline the methods for the calibration and validation of the model, and finish with a description of the model implementation.

Data sources

We have integrated multiple sources of data in order to extract relevant Swedish prostate cancer statistics for our model. The linkage between the different sources is illustrated in Fig 8. The regional ethics committee in Stockholm approved the study (dnr 2012/438-31/3, dnr 2016/620-32) and the data were analyzed anonymously. Detailed individual data on men who had a PSA test or a prostate core biopsy were extracted from the Stockholm PSA and Biopsy Register. Using the unique Swedish personal identification number [25], we linked the study cohort to a number of population registers, including the National Cancer Register (NCR) and the National Prostate Cancer Register (NPCR). The NCR included data on tumour extent (loco-regional vs distant) and the date of diagnosis; the NPCR contained additional clinical information, including Gleason score and TNM cancer stage classification. The NCR and NPCR were known to cover 96.3% and 94% of cancer patients, respectively [18, 26]. A dynamic cohort (with men moving in and out of the Stockholm County) was defined via the Population Register which contains information on migrations within Sweden as well as external migration.

From PCBaSe, a research database linking the NPCR with other health and population registers, we extracted survival following a prostate cancer diagnosis by calendar period, age, stage, PSA value and Gleason score; and treatment modality by calendar period, age and Gleason score.

Data on prostate cancer incidence by calendar year, age, stage and Gleason scores were extracted from the NCR and the NPCR. Prostate cancer mortality by calendar year and age were obtained from the Cause of Death Register. The Total Population Register was used to calculate the male population at risk by calendar period and age.

Model description

As noted in the Results, we re-implemented and extended an earlier model by Gulati et al [12]. The prostate cancer natural history model links PSA growth with prostate cancer progression. The main components of the model are: (i) longitudinal PSA growth; (ii) Gleason score distribution at onset; (iii) transitions between natural history disease states, as shown in Fig 1; (iv) PSA testing; (v) biopsy sensitivity and specificity; (vi) treatment; and (vii) survival. The PSA growth is expressed functionally in Eq (1) and the transitions between states are defined in Eqs (5)–(11). Cancer onset is assumed to be independent of PSA, and PSA is assumed to rise faster after cancer onset. The distribution of Gleason score is assumed to be multinomial with the proportions modelled as a function of age at cancer onset, with no de-differentiation (or change in Gleason score) after onset. We expand on these different components in the following sub-sections.

Changes from the earlier model.

In the absence of Swedish data, we did not update a number of model components from the 2013 model, including: most parameters and the general structure for the longitudinal PSA sub-model; prostate cancer onset; the mean time from onset to metastatic cancer; the rates of clinical detection; the shape of the hazards from prostate cancer diagnosis to death; and the hazard ratios due to prostate cancer treatment. We updated: the population structure and background mortality rates; longitudinal PSA sub-models for Gleason 6 and Gleason 7 cancers; a detailed PSA testing sub-model; biopsy compliance; management of negative biopsies; new states for undiagnosed cancer by T-stage, modelling for progression; the distribution of Gleason score at cancer onset; treatment, including subsequent treatment for men initially assigned to active surveillance; and detailed survival by stage, Gleason score, PSA values and age.

Longitudinal PSA sub-model.

The change in PSA values after cancer onset is assumed to differ by Gleason score, with a specific change for Gleason score 6 and below (G6-), Gleason 7 (G7) and Gleason 8 and above (G8+):

log(yi(t)) = β0i + β1i t + β2i(G) (t − toi) I(t ≥ toi) + ϵi(t)  (1)

where yi(t) is the measured PSA at age t for subject i, I(A) is 1 if A is true and 0 otherwise, toi is the age of cancer onset, G ∈ {G6-, G7, G8+} is the Gleason category, and where

  • β0i is a random intercept
  • β1i ∼ TN(μ1, σ1²) is a random slope
  • β2i(G) ∼ TN(μG, σG²) is a Gleason-specific random change of slope after cancer onset, with (μG, σG) being (μ2, σ2) for G6-, (μ3, σ3) for G7 and (μ4, σ4) for G8+
  • ϵi(t) ∼ N(0, ϕ²) represents measurement error.

The random slopes β ∼ TN(μ, σ²) follow truncated normal distributions with support on the positive real values. The probability density function is f(β) = ϕ((β − μ)/σ)/(σ[1 − Φ(−μ/σ)]), where ϕ and Φ are the probability density function and cumulative distribution function for the standard normal distribution, respectively. The truncated normal distributions ensure that PSA growth is monotonically increasing.

The values for μ1, σ1, μ4 and σ4 were from [11], while the estimates for μ2, σ2, μ3 and σ3 were weighted sums to separate the estimates of Gleason ≤ 7 from [11] into Gleason ≤ 6 and Gleason 7.
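For illustration, the following R sketch (not part of the published model code) shows how random PSA slopes restricted to positive values can be drawn with the inverse-CDF method and combined into a post-onset log-PSA trajectory; the numerical values are placeholders rather than the calibrated estimates.

```r
## Sketch: sampling PSA growth parameters with truncated-normal slopes
## (inverse-CDF method restricted to the positive half-line).
## The (mu, sigma) values below are placeholders, not the calibrated estimates.
rtnorm_pos <- function(n, mu, sigma) {
  p0 <- pnorm(0, mean = mu, sd = sigma)   # P(beta <= 0) under the untruncated normal
  u <- runif(n, min = p0, max = 1)        # uniforms restricted to the upper tail
  qnorm(u, mean = mu, sd = sigma)         # invert the normal CDF
}

set.seed(1)
n <- 5
onset_age <- 60                                     # assumed age at cancer onset (t_o)
beta0 <- rnorm(n, mean = -1.6, sd = 0.5)            # random intercepts (placeholders)
beta1 <- rtnorm_pos(n, mu = 0.04, sigma = 0.02)     # pre-onset slopes, truncated at zero
beta2_g7 <- rtnorm_pos(n, mu = 0.16, sigma = 0.05)  # post-onset change of slope (e.g. G7)

## log PSA trajectory for one simulated man at ages 50-70
ages <- 50:70
log_psa <- beta0[1] + beta1[1] * ages +
  beta2_g7[1] * pmax(ages - onset_age, 0)   # extra growth only after onset
plot(ages, exp(log_psa), type = "l", xlab = "Age", ylab = "PSA (ng/mL)")
```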

Gleason score distribution.

The Gleason score assigned to an individual at cancer onset is dependent on the age at cancer onset according to the probabilities modelled via the multinomial logistic regression in Eqs (2)–(4), as illustrated in Fig 9 (right panel). There is evidence that the FHCRC and Swedish models assume different Gleason score distributions, which may reflect differences in Gleason grading between the US and Swedish populations or changes in Gleason grading over time, where the Swedish data are more recent.

P(G6- | onset at age t) = 1 / [1 + exp(α7 + β7(t − 35)) + exp(α8 + β8(t − 35))]  (2)
P(G7 | onset at age t) = exp(α7 + β7(t − 35)) / [1 + exp(α7 + β7(t − 35)) + exp(α8 + β8(t − 35))]  (3)
P(G8+ | onset at age t) = exp(α8 + β8(t − 35)) / [1 + exp(α7 + β7(t − 35)) + exp(α8 + β8(t − 35))]  (4)

where t ≥ 35.

Fig 9. Comparing the modelled proportion of Gleason scores at cancer onset from FHCRC model in 2013 and 2018 with the Stockholm Prostata model.

https://doi.org/10.1371/journal.pone.0211918.g009
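As a worked illustration of Eqs (2)–(4), a minimal R sketch assigns a Gleason category at onset from the multinomial logistic model; the intercepts use the stated assumptions α7 = log(0.2) and α8 = log(0.002), while the slopes are placeholders rather than the calibrated values in Table 2.

```r
## Sketch: assigning a Gleason category at cancer onset from a multinomial
## logistic model in the age at onset (Eqs (2)-(4)).
## alpha7/alpha8 follow the stated assumption at age 35; beta7/beta8 are
## placeholder slopes, not the calibrated estimates.
gleason_probs <- function(onset_age, alpha7 = log(0.2), alpha8 = log(0.002),
                          beta7 = 0.02, beta8 = 0.07) {
  eta7 <- alpha7 + beta7 * (onset_age - 35)
  eta8 <- alpha8 + beta8 * (onset_age - 35)
  denom <- 1 + exp(eta7) + exp(eta8)
  c(G6 = 1 / denom, G7 = exp(eta7) / denom, G8plus = exp(eta8) / denom)
}

sample_gleason <- function(onset_age)
  sample(c("G6-", "G7", "G8+"), size = 1, prob = gleason_probs(onset_age))

set.seed(2)
gleason_probs(55)   # category probabilities for onset at age 55
sample_gleason(70)  # one random Gleason category for onset at age 70
```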

Transition rates between natural history states.

Transitions between different states in the model (i.e. healthy, localized states, metastatic states and death) are simulated via events, which occur with different rates.

The disease onset (a transition from the healthy to a localized state) is modelled via a time-dependent hazard (from age 35) as

λo(t) = γo (t − 35), t ≥ 35  (5)

where γo is the onset rate parameter, which means that the time-to-event follows a Weibull distribution (shape parameter 2 and scale parameter √(2/γo)). The cumulative distribution (the complement of the survival function) for the time to cancer onset is hence 1 − exp(−γo (t − 35)²/2) [11]. The event density, which is simply the probability density for the Weibull distribution with parameters as above, represents the rate of cancer onset per unit time (see Fig A in S1 Appendix).
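A minimal R sketch, assuming a placeholder value for the onset rate parameter, illustrates the equivalence between the linear hazard in Eq (5) and a Weibull time-to-onset with shape 2:

```r
## Sketch: drawing ages at cancer onset when the onset hazard is
## lambda_o(t) = gamma_o * (t - 35), i.e. a Weibull time from age 35
## with shape 2 and scale sqrt(2 / gamma_o).
## gamma_o below is a placeholder, not the model's input value.
gamma_o <- 2e-4
shape_o <- 2
scale_o <- sqrt(2 / gamma_o)

set.seed(3)
onset_age <- 35 + rweibull(1e5, shape = shape_o, scale = scale_o)

mean(onset_age <= 80)                  # proportion with onset by age 80 under these values
1 - exp(-gamma_o * (80 - 35)^2 / 2)    # check against the closed-form CDF
```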

Transitions between disease states are dependent on age (t) and the individual log-PSA values ψi(t) (the underlying log PSA, excluding measurement error). The model includes T-stage transitions within localised cancer states for preclinical cancer. The transition from T1–T2 to T3–T4 is the same for all Gleason categories and is described in Eq (6); γt is the hazard parameter for transitioning to T3–T4 and the time-dependence comes from the log-PSA levels:

λT1–T2→T3–T4(t) = γt ψi(t)  (6)

The rate from T3–T4 to metastatic disease is proportional to ψi(t) and γm, which is the metastasis hazard parameter (see Eq (7)). Note that the FHCRC model used γt to represent the parameter for the transition rate from onset to metastatic cancer [11].

λT3–T4→M1(t) = γm ψi(t)  (7)

The clinical diagnosis rates for localised cancer for Gleason score 7 and lower (Eq (8)) and Gleason score 8 and higher (Eq (9)) are proportional to ψi(t) and γc,≤7 or γc,8+, the clinical diagnosis hazard parameters for localised cancer for the two Gleason score categories. As per the older US model, we combined the Gleason ≤ 6 and 7 scores for these transitions due to a lack of informative data.

λDx,loc,≤7(t) = γc,≤7 ψi(t)  (8)
λDx,loc,8+(t) = γc,8+ ψi(t)  (9)

The rates to clinical diagnosis after metastatic onset for Gleason score 7 and below (Eq (10)) and Gleason score 8 and above (Eq (11)) are proportional to ψi(t) and γmc,≤7 or γmc,8+, the post-metastasis clinical diagnosis hazard parameters for the two Gleason score categories.

λDx,M1,≤7(t) = γmc,≤7 ψi(t)  (10)
λDx,M1,8+(t) = γmc,8+ ψi(t)  (11)
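For illustration, the following R sketch shows one generic way to simulate a transition time whose hazard is proportional to a time-varying (log-PSA) term, as in Eqs (6)–(11), by numerically inverting the cumulative hazard; the PSA path and rate parameter are placeholders, and the model's own C++ implementation may use a different algorithm.

```r
## Sketch: simulating one transition time when the hazard is proportional to a
## time-varying covariate, as in Eqs (6)-(11): lambda(t) = gamma * psi(t).
## psi() and gamma_t below are placeholders, not the calibrated model components.
psi <- function(t) pmax(0.1, -1.6 + 0.04 * t + 0.2 * pmax(t - 60, 0))  # toy log-PSA path
gamma_t <- 0.03

sim_transition_time <- function(from = 60, horizon = 110, step = 0.01) {
  grid <- seq(from, horizon, by = step)
  haz <- gamma_t * psi(grid)
  cumhaz <- cumsum(haz) * step               # numerical cumulative hazard
  target <- rexp(1)                          # -log(U) threshold
  idx <- which(cumhaz >= target)[1]
  if (is.na(idx)) Inf else grid[idx]         # Inf: no transition before the horizon
}

set.seed(4)
replicate(5, sim_transition_time(from = 60))  # five simulated transition ages
```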

PSA testing.

Diffusion of a new health technology into a population is a dynamic process. This process may reach a stationary state after a longer period of time. For PSA testing, test uptake was distributed across a range of ages over a comparatively short period, such that the PSA test patterns varied substantially by birth cohorts. PSA test uptake is required for calibrating the model to screened populations.

The natural history model is calibrated to data that are observed both before and since the introduction of PSA testing. In particular, we have survival data for men diagnosed with prostate cancer from 1998, which is after the introduction of PSA testing. This requires that we accurately model for PSA uptake and re-testing and for treatment to represent the men at risk for prostate cancer incidence, survival and mortality.

The PSA sub-model represents uptake of the PSA test together with the pattern of PSA re-testing. Uptake was modelled as: (i) a function of age for cohorts born from 1960; (ii) a function of calendar period multiplied by a factor for birth cohort for birth cohorts born before 1932; and (iii) a mixture of (i) and (ii) for the birth cohorts between 1932 and 1960. Mathematically, age-specific uptake (i) is modelled by the cumulative distribution function for a log-logistic cure model, such that

F1(t) = π1 / (1 + (t/b1)^(−a1))  (12)

where t is age at uptake, π1 is the proportion of men ever having a PSA test, c is the calendar year of birth (or birth cohort), and where a1 and b1 are the shape and scale for a log-logistic distribution for those men who ever have a PSA test. The calendar-specific uptake for the older cohorts (ii) is modelled analogously on the calendar-period time scale u, multiplied by a birth-cohort factor δc, by

F2(u|c) = δc π2 / (1 + (u/b2)^(−a2))  (13)

where π2 is the proportion of men who ever have a PSA test, and where a2 and b2 are the shape and scale for a log-logistic distribution for those men who ever have a PSA test. Finally, for the intermediate birth cohorts, t1 is sampled from F1, t2 is sampled from F2, and t1 is selected over t2 with probability (1960 − c)/(1960 − 1932).

PSA re-testing is modelled using a Weibull cure model, such that

F3(t | t0, y0) = π3 [1 − exp(−((t − t0)/b3)^a3)]  (14)

where t0 is the age at the previous PSA test, y0 is the value of the previous PSA test, π3 is the proportion of men who will ever have a re-test, and where a3 and b3 are the shape and scale parameters of the Weibull distribution for those who ever have a re-test.

For re-testing, the parameters π3, a3 and b3 were estimated using a Weibull cure model stratified by the five-year age groups (30–34, 35–39, …, 85–89, 90+) and by PSA values ([0, 1), [1, 3), [3, 10), [10, ∞)) at the previous PSA test. The parameters π1, π2, a1, a2, b1 and b2 were calibrated to observed PSA test rates for Stockholm using a Poisson likelihood.
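As an illustration of Eqs (12) and (14), a minimal R sketch samples an age at first PSA test from a log-logistic cure model and a re-test age from a Weibull cure model; all parameter values are placeholders rather than the calibrated estimates.

```r
## Sketch: sampling the age at first PSA test from a log-logistic cure model
## (Eq (12)) and a re-test interval from a Weibull cure model (Eq (14)).
## All parameter values are placeholders, not the calibrated estimates.
sample_first_test_age <- function(pi1 = 0.8, a1 = 8, b1 = 62) {
  if (runif(1) > pi1) return(Inf)            # "cured": never tested
  u <- runif(1)
  b1 * (u / (1 - u))^(1 / a1)                # inverse log-logistic CDF
}

sample_retest_age <- function(t0, pi3 = 0.9, a3 = 1.2, b3 = 2.5) {
  if (runif(1) > pi3) return(Inf)            # never re-tested
  t0 + rweibull(1, shape = a3, scale = b3)   # gap to the next test
}

set.seed(5)
age1 <- sample_first_test_age()
age2 <- sample_retest_age(t0 = age1)
c(first_test = age1, re_test = age2)
```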

Biopsy sensitivity and compliance.

For men who had a PSA value above 3 ng/mL, the proportion complying with a subsequent biopsy varied by PSA values and age, and was estimated from the SPBR (see Table B in S1 Appendix). We also modelled for whether a prostate cancer was biopsy-detectable, assuming that a cancer was not initially detectable for a proportion ϕlag (Eqs (16) and (17)) of the time from cancer onset to the development of a T3–T4 cancer. Our approach varied from Wever et al. 2010 [16], who modelled for the sensitivity of a PSA test to detect a cancer by stage, irrespective of the time from cancer onset. The probability of a biopsy (Bx) rendering a diagnosis (Dx) depends on the biopsy sensitivity, the biopsy compliance and the probability of cancer:

P(Dx at age t) = P(t ≥ t0) P(Bxcomp(t, PSA | PSA+)) P(Bxsens)  (15)

where P(t ≥ t0) is the probability of having had a cancer onset, P(Bxcomp(t, PSA | PSA+)) is the probability of performing a biopsy after a positive PSA test depending on age and PSA value, and P(Bxsens) is the biopsy sensitivity as expressed below:

P(Bxsens) = 0 for a T1–T2 cancer with t < t0 + ϕlag Δ  (16)
P(Bxsens) = 1 otherwise  (17)

where Δ = tT3–T4 − t0 is the time with a T1–T2 cancer.
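A small R sketch, assuming a placeholder biopsy compliance and using the estimated 53% value of ϕlag from the calibration as an example, illustrates how Eqs (15)–(17) combine onset, compliance and the detectability lag:

```r
## Sketch: probability that a biopsy at age t yields a diagnosis, combining
## cancer onset, biopsy compliance and the phi_lag detectability rule
## (Eqs (15)-(17)). The compliance value and times are placeholders.
p_diagnosis <- function(t, t_onset, t_T34, phi_lag = 0.53, p_comply = 0.7) {
  if (t < t_onset) return(0)                     # no cancer yet
  delta <- t_T34 - t_onset                       # time spent in T1-T2
  detectable <- (t >= t_onset + phi_lag * delta) # Eqs (16)-(17): detectable after the lag
  p_comply * as.numeric(detectable)
}

## A man with onset at 60 and progression to T3-T4 at 70: the cancer is assumed
## biopsy-detectable only from age 60 + 0.53 * 10 = 65.3.
sapply(c(58, 63, 66, 72), p_diagnosis, t_onset = 60, t_T34 = 70)
```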

Treatment sub-model.

Probabilities for treatment assignment to either active surveillance, radical prostatectomy, radiotherapy or androgen deprivation therapy were assessed from the SPBR. These values were stratified by five-year age groups and Gleason score (see Fig C in S1 Appendix).

Survival sub-model.

Survival from cancer diagnosis to prostate cancer death was calibrated to the NPCR. In summary, we first simulated survival for Sweden for 1998–2014 including screening uptake. The shape for the uncalibrated hazard function used data from the SEER database for localised and metastatic diagnoses. NPCR survival estimates were available at ten and fifteen years after diagnosis by age group for the period 1998–2014; for non-metastatic cancer, survival was available by Gleason score and for PSA less than 10 ng/mL and for 10 ng/mL and over. We compared the Kaplan-Meier estimates of survival from diagnosis from the simulated data with observed Kaplan-Meier estimates for men diagnosed with prostate cancer in Sweden. For each group, we calculated a calibration hazard ratio from the log of observed survival from NPCR divided by the log of simulated survival. We did not calibrate to observed survival from the pre-PSA era, as such estimates were not available.

One significant modelling challenge is selecting and fitting a mathematical representation for the effect of cancer screening. For cancers with a short period between a screen-detected diagnosis and a counter-factual clinical diagnosis, a common model is to represent differential survival based on changes in stage at diagnosis and changes in treatment. For prostate cancer, there is potentially a long period between screen-detectable prostate cancer and clinical diagnosis. The lead-time between a screen-detected diagnosis and clinical diagnosis is of the order of 10 years [27, 28]. We represent the effect of screening on survival S as a function of time t = a − ac, where ac is the possibly counter-factual age of clinical diagnosis, rather than the age of screen-detected diagnosis as. Then

S(t | ClinicalDx) = S(t | Treatment(ac), Stage(ac), PSA(ac))  (18)
S(t | ScreenDx) = S(t | Treatment(as), Stage(as), PSA(as))  (19)

where ClinicalDx and ScreenDx represent either a clinical diagnosis or a screen-detected diagnosis, respectively, and Treatment(a), Stage(a), and PSA(a) are the treatment modality, stage and PSA value at age a, respectively. The treatment sub-model assumes that the hazard ratio from SPCG-4 (0.56) [29] applies comparing both radical prostatectomy and radiation therapy with either watchful waiting or active surveillance. This point estimate is consistent with the point estimate from the PIVOT trial [30], albeit without the latter being significantly different from one. We then model survival as

S(t | Treatment, Stage, PSA) = S0(t | Stage, PSA)^HR(Treatment)  (20)

where S0 is the baseline survival under conservative management and HR(Treatment) is the treatment hazard ratio.
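As a rough illustration of Eq (20), the following R sketch applies a proportional-hazards adjustment to a placeholder baseline survival curve, using the SPCG-4 hazard ratio of 0.56 for curative treatment; the baseline curve is not the SEER-based shape used in the model.

```r
## Sketch: applying a treatment hazard ratio to a baseline survival curve,
## S_treated(t) = S0(t)^HR, with HR = 0.56 for curative treatment (SPCG-4) and
## 1 otherwise. S0 below is a placeholder exponential curve, not the
## SEER-shaped baseline used in the model.
S0 <- function(t, rate = 0.03) exp(-rate * t)    # placeholder baseline survival

surv_after_dx <- function(t, treatment = c("RP", "RT", "AS", "WW")) {
  treatment <- match.arg(treatment)
  hr <- if (treatment %in% c("RP", "RT")) 0.56 else 1
  S0(t)^hr                                       # proportional-hazards adjustment
}

## 10-year survival with and without curative treatment under the placeholder S0
c(curative = surv_after_dx(10, "RP"), conservative = surv_after_dx(10, "WW"))
```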

Implementation of the Stockholm Prostata model

The FHCRC prostate cancer model was implemented in C under an open source GPL licence, although the code has not been widely distributed. We have implemented our extended model using existing C++ simulation libraries, with R used to manage input parameters and output predictions. The model is implemented together with an extensible microsimulation framework. It is available from https://github.com/mclements/microsimulation and https://github.com/mclements/prostata under a GPL3 license, allowing for use and reuse, in contrast to most existing microsimulation models, which are not open source [31].

Model fitting and calibration

To adapt the extended prostate cancer model to the Swedish context, a number of input parameters were estimated from external data and a smaller set of parameters were estimated using simulation predictions fitted to calibration targets. First, a set of parameters were estimated from available data sources, including parameters describing the longitudinal development of PSA with age, the rate controlling time to cancer onset, and transition between cancer states (Table 1, external inputs). Observed characteristics of the modelled population were collected and used to estimate another set of parameters via simulations of the model. The characteristics used as calibration targets were the distribution of incident prostate cancers by Gleason scores and T-stage, observed survival by age groups and disease stage, together with the PSA uptake and re-testing (Table 1, directly observed inputs). The PSA test uptake prior to 2003 was reconstructed by fitting a model to prostate cancer incidence using later PSA test rates as covariates, and we used survival analysis to estimate re-testing rates prior to cancer diagnosis by age and PSA values. The calibration targets are described in further detail under the sub-section on Calibration methods below. The model was validated against the population of Sweden and the population of Stockholm [32] (Table 1, validation targets). For the validation, we simulated the observed PSA testing pattern and validated the model against the population data for incidence, all-cause mortality and prostate cancer mortality (see Results and section C in S1 Appendix).

Emulating the ERSPC trial.

We performed a simulation experiment to emulate the ERSPC trial, where we predicted both the “control” arm and the “screening” arm with 100 million simulated men. Both arms were constructed as flat populations with inclusion at ages 55–69 years, after which they were followed for 13 years. For study eligibility, we assumed that the men had not had a prostate cancer diagnosis prior to age 55 years. For the control arm, we assumed no screening. For the screening arm, we assumed four-yearly screening between ages 55 and 69 years. The PSA threshold was assumed to be 3.0 ng/mL, although in fact this varied by study site. We also used the reported biopsy compliance of 85.6%. Treatment and other-cause deaths were assumed to be similar to those observed in Stockholm.

Calibration methods.

We used four sets of targets for our calibration procedure (with associated parameters):

  1. The relative distribution of Gleason and T-stage for incident prostate cancers in contemporary Sweden (to estimate β7, β8, γt, γm, ϕlag)
  2. An equality constraint on the mean time from onset to metastatic cancer (to estimate γt, γm)
  3. The incidence rate ratio due to screening from the ERSPC (to estimate γt, γm, ϕlag)
  4. Detailed prostate cancer survival for contemporary Sweden.

For the first step, we calibrated for the incidence-related targets 1, 2 and 3 in one likelihood; and then, as a second step, we calibrated for target 4. For targets 1, 2 and 4, we simulated for current PSA testing in Sweden; for target 3, we simulated for both arms of the ERSPC.

For target 1, we used a multinomial likelihood with unknown parameters θ = (β7, β8, γt, γm, ϕlag)′. The multinomial log-likelihood was defined as

l1(θ) = Σi Σj yij log(pij(θ))  (21)

where i is an index over age, j is an index over cancer staging, ni is the observed total count in a particular age group, yij is the observed count for a combination of age and cancer staging, mi(θ) is the simulated total count, and pij(θ) is the simulated proportion of individuals in a particular disease state (see Eqs (2)–(4) for the multinomial data generating mechanism). The cancer staging for the observed frequencies and simulated proportions were by age and (i) loco-regional cancers by combinations of Gleason score and T-stage, and (ii) metastatic prostate cancers. The intercept terms α7 and α8 for the distribution of Gleason score at age 35 years were not identifiable and we assumed that α7 = log(0.2) and α8 = log(0.002). Half-cell corrections were performed to handle empty cells in the simulated proportions.
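For illustration, a short R sketch evaluates a log-likelihood of the form in Eq (21) with a half-cell correction for empty simulated cells; the counts are toy values rather than the SPBR calibration targets.

```r
## Sketch: multinomial log-likelihood of Eq (21) with a half-cell correction
## for empty simulated cells. Counts below are toy data, not the SPBR targets.
loglik_staging <- function(obs, sim) {
  ## obs, sim: age-group x staging matrices of counts (observed and simulated)
  sim <- sim + 0.5 * (sim == 0)                # half-cell correction
  p <- sim / rowSums(sim)                      # simulated proportions p_ij(theta)
  sum(obs * log(p))
}

obs <- rbind(age55_59 = c(T12_G6 = 40, T12_G7 = 25, T34 = 10, M1 = 5),
             age60_64 = c(30, 35, 20, 15))
sim <- rbind(c(400, 260, 90, 50),
             c(280, 360, 210, 0))              # one empty simulated cell
loglik_staging(obs, sim)
```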

For target 2, we used a non-linear equality constraint on the expected time from onset to metastatic cancer to ensure identifiability of progression across T-stages. Formally,

τ1(θ) + τ2(θ) = τ*  (22)

where τ1(θ) is the mean simulated transition time from onset to T3–T4, τ2(θ) is the mean simulated transition time from T3–T4 to metastatic cancer, and τ* is the expected mean time from onset to metastatic cancer from a model without separate T stages (25.9 years from the FHCRC model; [12]).

For target 3, we used a non-linear equality constraint on the simulated incidence rate ratio from the ERSPC:

IRR(θ) = IRR  (23)

where IRR is the observed PSA screening incidence rate ratio from the ERSPC study and IRR(θ) is the simulated incidence rate ratio for the emulation of the ERSPC study.

Formally, the log-likelihood l123(θ) for targets 1–3 was

l123(θ) = l1(θ) − w2 (τ1(θ) + τ2(θ) − τ*)² − w3 (IRR(θ) − IRR)²  (24)

where w2 and w3 are weights for the non-linear constraints. Note that the equality constraints in targets 2 and 3 were formulated in terms of weighted quadratic penalties. The weights were selected so that the constraints were approximately satisfied (w2 = 1; w3 = 10⁴).

To optimise the simulation log-likelihood l123(θ), we used the Nelder-Mead optimisation algorithm. For each iteration of the optimisation, we evaluated the log-likelihood by simulating three different scenarios that depended on the parameters θ. From these scenarios, we predicted values that were used in the log-likelihood, including the relative distribution of cancer staging, the mean time from onset to metastatic cancer, and the PSA screening incidence rate ratio for the reconstructed ERSPC trial. The Nelder-Mead algorithm is commonly used to optimise functions for which derivatives are difficult to calculate and for objectives that are not smooth. The standard errors were calculated from the inverse of the Hessian matrix for the negative log-likelihood (see Table 2). Given the simulation likelihood, the calculation of the Hessian matrix required a larger step size (0.01) for the finite differences.
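A schematic R sketch of the optimisation set-up follows: a stand-in simulator returns the summaries entering Eq (24), and the penalised objective is minimised with Nelder-Mead via optim; the simulator and its outputs are placeholders for the actual microsimulation.

```r
## Sketch: optimising a penalised simulation likelihood of the form in Eq (24)
## with Nelder-Mead. simulate_summaries() is a stand-in for the microsimulation;
## it must return the staging log-likelihood and the two constraint quantities.
simulate_summaries <- function(theta) {
  ## placeholder: a real implementation would run the natural history model
  list(loglik = -sum((theta - c(0.05, 0.08))^2) * 1e3,  # toy concave surface
       mean_onset_to_met = 20 + 100 * sum(theta),       # toy constraint quantity
       irr = 1.3 + 2 * theta[1])                        # toy incidence rate ratio
}

neg_penalised_loglik <- function(theta, target_time = 25.9, target_irr = 1.57,
                                 w2 = 1, w3 = 1e4) {
  s <- simulate_summaries(theta)
  ll <- s$loglik -
    w2 * (s$mean_onset_to_met - target_time)^2 -
    w3 * (s$irr - target_irr)^2
  -ll                                                   # optim() minimises
}

fit <- optim(par = c(0.05, 0.08), fn = neg_penalised_loglik, method = "Nelder-Mead")
fit$par
```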

For the second step and target 4, the distributions of Gleason score, T-stage and metastatic cancer (θ) were kept fixed. Using the mean between the observed 10- and 15-year survival as the calibration target and Kaplan-Meier estimates based on the model simulations, we calculated the hazard ratios by age group, cancer stage, Gleason score and PSA values. The adjustment was calculated by averaging on the log hazard ratio scale:

log(HR) = (1/2) Σ over t ∈ {10, 15} of log[ log S(t|Age, Gleason, PSA, metastatic) / log Ŝ(t|Age, Gleason, PSA, metastatic) ]  (25)

where Ŝ(t|Age, Gleason, PSA, metastatic) is the simulated survival to time t based on the parameters from step 1 and S(t|Age, Gleason, PSA, metastatic) is observed survival to time t from the NPCR.
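As a worked example of Eq (25), a small R sketch computes the calibration hazard ratio for one stratum from observed and simulated 10- and 15-year survival; the survival values are toy numbers rather than NPCR estimates.

```r
## Sketch: the calibration hazard ratio of Eq (25), averaging the log hazard
## ratios implied by the 10- and 15-year survival. Values are toy, not NPCR data.
calibration_hr <- function(S_obs, S_sim) {
  ## S_obs, S_sim: survival at 10 and 15 years for one stratum
  log_hr <- log(log(S_obs) / log(S_sim))     # ratio of cumulative hazards
  exp(mean(log_hr))
}

S_obs <- c(`10yr` = 0.92, `15yr` = 0.85)     # observed (toy)
S_sim <- c(`10yr` = 0.95, `15yr` = 0.90)     # simulated (toy)
calibration_hr(S_obs, S_sim)                 # >1: simulated survival is too optimistic
```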

Validation of the ERSPC mortality rate ratio.

To validate against the ERSPC screening mortality RR of 0.79 (95% CI 0.61–0.88), we performed a simulation experiment to emulate the ERSPC trial (see Emulating the ERSPC trial). The mortality hazard ratio comparing the screening arm with the control arm was estimated using Poisson regression taking account of the number of prostate cancer deaths and the person-time by one-year age groups. Our validation predictions resulted in a mortality RR of 0.784 (95% Monte Carlo interval (MCI) 0.781–0.786).
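For illustration, a minimal R sketch estimates a mortality rate ratio from deaths and person-years by arm and age group using Poisson regression with a log person-time offset, as in the validation; the counts are toy values rather than simulation output, and the age grouping is coarser than the one-year groups used in the analysis.

```r
## Sketch: estimating a mortality rate ratio with Poisson regression and a
## person-time offset, as used to compare the simulated ERSPC arms.
## The counts and person-years below are toy values, not simulation output.
d <- data.frame(
  arm    = rep(c("control", "screening"), each = 3),
  age    = rep(c("55-59", "60-64", "65-69"), times = 2),
  deaths = c(40, 70, 110, 32, 55, 88),
  pyears = c(9e4, 8e4, 7e4, 9e4, 8e4, 7e4)
)
fit <- glm(deaths ~ arm + age + offset(log(pyears)), family = poisson, data = d)
exp(coef(fit)["armscreening"])               # mortality rate ratio, screening vs control
exp(confint.default(fit)["armscreening", ])  # Wald 95% interval
```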

Supporting information

S1 Appendix. The appendix contains further detail on the model and the model inputs.

It also holds a comparison of survival from diagnosis by Gleason with the FHCRC model. Finally, it also includes further validation of the model.

https://doi.org/10.1371/journal.pone.0211918.s001

(PDF)

Acknowledgments

We are grateful to Pär Stattin and Fredrik Sandin for providing data from PCBaSe, and for data management of the Stockholm PSA and Prostate Biopsy Register by Pouran Almstedt and the late Peter Olausson. We thank the following for discussions: Erwin Laure, Jan Adolfsson, Markus Aly, Tobias Nordström and Þorgerður Pálsdóttir.

References

  1. Tsodikov A, Gulati R, Heijnsdijk EAM, Pinsky PF, Moss SM, Qiu S, et al. Reconciling the Effects of Screening on Prostate Cancer Mortality in the ERSPC and PLCO Trials. Ann Intern Med. 2017;167(7):449–455. pmid:28869989
  2. de Koning HJ, Gulati R, Moss SM, Hugosson J, Pinsky PF, Berg CD, et al. The efficacy of prostate-specific antigen screening: Impact of key components in the ERSPC and PLCO trials. Cancer. 2018;124(6):1197–1206. pmid:29211316
  3. Heijnsdijk EAM, de Carvalho TM, Auvinen A, Zappa M, Nelen V, Kwiatkowski M, et al. Cost-effectiveness of prostate cancer screening: a simulation study based on ERSPC data. Journal of the National Cancer Institute. 2015;107(1):366. pmid:25505238
  4. Pataky R, Gulati R, Etzioni R, Black P, Chi KN, Coldman AJ, et al. Is prostate cancer screening cost-effective? A microsimulation model of prostate-specific antigen-based screening for British Columbia, Canada. Int J Cancer. 2014;135(4):939–947. pmid:24443367
  5. Roth JA, Gulati R, Gore JL, Cooperberg MR, Etzioni R. Economic analysis of prostate-specific antigen screening and selective treatment strategies. JAMA Oncology. 2016;2(7):890–898. pmid:27010943
  6. Engholm G, Ferlay J, Christensen N, Bray F, Gjerstorff ML, Klint A, et al. NORDCAN–a Nordic tool for cancer information, planning, quality control and research. Acta Oncol. 2010;49(5):725–736. pmid:20491528
  7. Schröder FH, Hugosson J, Roobol MJ, Tammela TLJ, Zappa M, Nelen V, et al. Screening and prostate cancer mortality: results of the European Randomised Study of Screening for Prostate Cancer (ERSPC) at 13 years of follow-up. Lancet. 2014;384(9959):2027–2035. pmid:25108889
  8. Andriole GL, Crawford ED, Grubb RL 3rd, Buys SS, Chia D, Church TR, et al. Prostate cancer screening in the randomized Prostate, Lung, Colorectal, and Ovarian Cancer Screening Trial: mortality results after 13 years of follow-up. J Natl Cancer Inst. 2012;104(2):125–132. pmid:22228146
  9. Martin RM, Donovan JL, Turner EL, Metcalfe C, Young GJ, Walsh EI, et al. Effect of a low-intensity PSA-based screening intervention on prostate cancer mortality: the CAP randomized clinical trial. JAMA. 2018;319(9):883–895. pmid:29509864
  10. Etzioni R, Gulati R, Falcon S, Penson DF. Impact of PSA screening on the incidence of advanced stage prostate cancer in the United States: a surveillance modeling approach. Med Decis Making. 2008;28:323–31. pmid:18319508
  11. Gulati R, Inoue L, Katcher J, Hazelton W, Etzioni R. Calibrating disease progression models using population data: a critical precursor to policy development in cancer control. Biostatistics. 2010;11(4):707–719. pmid:20530126
  12. Gulati R, Gore JL, Etzioni R. Comparative effectiveness of alternative prostate-specific antigen–based prostate cancer screening strategies: model estimates of potential benefits and harms. Annals of Internal Medicine. 2013;158(3):145–153. pmid:23381039
  13. Thompson IM, Goodman PJ, Tangen CM, Lucia MS, Miller GJ, Ford LG, et al. The influence of finasteride on the development of prostate cancer. N Engl J Med. 2003;349:215–24. pmid:12824459
  14. Mariotto AB, Etzioni R, Krapcho M, Feuer EJ. Reconstructing PSA testing patterns between black and white men in the US from Medicare claims and the National Health Interview Survey. Cancer. 2007;109(9):1877–86. pmid:17372918
  15. Nordström T, Aly M, Clements MS, Weibull CE, Adolfsson J, Grönberg H. Prostate-specific antigen (PSA) testing is prevalent and increasing in Stockholm County, Sweden, Despite no recommendations for PSA screening: results from a population-based study, 2003-2011. Eur Urol. 2013;63(3):419–425. pmid:23083803
  16. Wever EM, Draisma G, Heijnsdijk EAM, Roobol MJ, Boer R, Otto SJ, et al. Prostate-specific antigen screening in the United States vs in the European Randomized Study of Screening for Prostate Cancer–Rotterdam. JNCI: Journal of the National Cancer Institute. 2010;102(5):352–355. pmid:20142584
  17. Hagel E, Garmo H, Bill-Axelson A, Bratt O, Johansson JE, Adolfsson J, et al. PCBaSe Sweden: A register-based resource for prostate cancer research. Scand J Urol Nephrol. 2009; p. 1–8.
  18. Van Hemelrijck M, Wigertz A, Sandin F, Garmo H, Hellstrom K, Fransson P, et al. Cohort Profile: the National Prostate Cancer Register of Sweden and Prostate Cancer data Base Sweden 2.0. Int J Epidemiol. 2013;42(4):956–967. pmid:22561842
  19. Krahn M, Naglie G. The next step in guideline development: incorporating patient preferences. JAMA. 2008;300(4):436–438. pmid:18647988
  20. Koitsalu M, Eklund M, Adolfsson J, Sprangers MAG, Grönberg H, Brandberg Y. Predictors of participation in risk-based prostate cancer screening. PloS One. 2018;13(7):e0200409. pmid:29990335
  21. Gulati R, Tsodikov A, Wever EM, Mariotto AB, Heijnsdijk EAM, Katcher J, et al. The impact of PLCO control arm contamination on perceived PSA screening efficacy. Cancer Causes & Control: CCC. 2012;23(6):827–835.
  22. Creighton P, Lew JB, Clements M, Smith M, Howard K, Dyer S, et al. Cervical cancer screening in Australia: modelled evaluation of the impact of changing the recommended interval from two to three years. BMC Public Health. 2010;10:734. pmid:21110881
  23. CISNET. Prostate Cancer Model Profiles; 2018. Available from: http://cisnet.cancer.gov/prostate/profiles.html.
  24. Gulati R, Tsodikov A, Etzioni R, Hunter-Merrill RA, Gore JL, Mariotto AB, et al. Expected population impacts of discontinued prostate-specific antigen screening. Cancer. 2014;120(22):3519–3526. pmid:25065910
  25. Ludvigsson JF, Almqvist C, Bonamy AK, Ljung R, Michaelsson K, Neovius M, et al. Registers of the Swedish total population and their use in medical research. Eur J Epidemiol. 2016;31(2):125–136. pmid:26769609
  26. Barlow L, Westergren K, Holmberg L, Talbäck M. The completeness of the Swedish Cancer Register—a sample survey for year 1998. Acta Oncologica. 2009;48(1):27–33. pmid:18767000
  27. Draisma G, Boer R, Otto SJ, van der Cruijsen IW, Damhuis RA, Schroder FH, et al. Lead times and overdetection due to prostate-specific antigen screening: estimates from the European Randomized Study of Screening for Prostate Cancer. J Natl Cancer Inst. 2003;95:868–78. pmid:12813170
  28. Draisma G, Etzioni R, Tsodikov A, Mariotto A, Wever E, Gulati R, et al. Lead time and overdiagnosis in prostate-specific antigen screening: importance of methods and context. J Natl Cancer Inst. 2009;101:374–83. pmid:19276453
  29. Bill-Axelson A, Holmberg L, Garmo H, Rider JR, Taari K, Busch C, et al. Radical prostatectomy or watchful waiting in early prostate cancer. The New England Journal of Medicine. 2014;370(10):932–942. pmid:24597866
  30. Wilt TJ, Jones KM, Barry MJ, Andriole GL, Culkin D, Wheeler T, et al. Follow-up of prostatectomy versus observation for early prostate cancer. N Engl J Med. 2017;377(2):132–142. pmid:28700844
  31. Karlsson A, Olofsson N, Laure E, Clements M. A parallel microsimulation package for modelling cancer screening policies. In: 2016 IEEE 12th International Conference on e-Science (e-Science); 2016. p. 323–330.
  32. Socialstyrelsen. Statistikdatabas för cancer; 2014. Available from: http://www.socialstyrelsen.se/statistik/statistikdatabas/cancer.