Factors Associated with Participation, Active Refusals and Reasons for Not Taking Part in a Mortality Followback Survey Evaluating End-of-Life Care

  • Natalia Calanzani ,

    natalia.calanzani@ed.ac.uk

    Affiliations King’s College London, Cicely Saunders Institute, Department of Palliative Care, Policy & Rehabilitation, London, United Kingdom, University of Edinburgh, The Usher Institute of Population Health Sciences and Informatics, Centre for Population Health Sciences, Medical School, Edinburgh, United Kingdom

  • Irene J Higginson,

    Affiliation King’s College London, Cicely Saunders Institute, Department of Palliative Care, Policy & Rehabilitation, London, United Kingdom

  • Jonathan Koffman,

    Affiliation King’s College London, Cicely Saunders Institute, Department of Palliative Care, Policy & Rehabilitation, London, United Kingdom

  • Barbara Gomes

    Affiliation King’s College London, Cicely Saunders Institute, Department of Palliative Care, Policy & Rehabilitation, London, United Kingdom

Abstract

Background

Examination of factors independently associated with participation in mortality followback surveys is rare, even though these surveys are frequently used to evaluate end-of-life care. We aimed to identify factors associated with 1) participation versus non-participation and 2) provision of an active refusal versus a silent refusal; and systematically examine reasons for refusal in a population-based mortality followback survey.

Methods

Postal survey about the end-of-life care received by 1516 people who died from cancer (aged ≥18), identified through death registrations in London, England (response rate 39.3%). The informant of death (a relative in 95.3% of cases) was contacted 4–10 months after the patient died. We used multivariate logistic regression to identify factors associated with participation/active refusals and content analysis to examine refusal reasons provided by 205 nonparticipants.

Findings

The odds of taking part were higher for patients aged 90+ (AOR 3.48, 95% CI: 1.52–8.00, ref: 20–49 yrs) and for female informants (AOR 1.70, 95% CI: 1.33–2.16). Odds were lower for hospital deaths (AOR 0.62, 95% CI: 0.46–0.84, ref: home) and for proxies other than spouses/partners (AORs 0.28 to 0.57). Proxies of patients born overseas were less likely to provide an active refusal (AOR 0.49; 95% CI: 0.32–0.77). Refusal reasons were often multidimensional, most commonly study-related (36.0%), followed by proxy-related and grief-related reasons (25.1% each). One limitation of this analysis is the large number of nonparticipants who did not provide reasons for refusal (715/920).

Conclusions

Our survey better reached proxies of older patients while those dying in hospitals were underrepresented. Proxy characteristics played a role, with higher participation from women and spouses/partners. More information is needed about the care received by underrepresented groups. Study design improvements may guide future questionnaire development and help develop strategies to increase response rates.

Introduction

Mortality followback surveys with bereaved relatives are used in several countries, such as the US [1], the UK [2], Japan [3] and Italy [4], to evaluate end-of-life care, but the method faces ethical and methodological challenges [5]. From an ethical perspective, sensitive planning is required to avoid distress to a population that can be vulnerable [6–8]. Researchers should make sure they maximise research benefits while minimising and not causing harm to participants [8–10]. From a methodological perspective, a core concern is how to increase participation to avoid low response rates (RRs).

Low RRs can result in nonresponse bias when there are systematic differences between participants and nonparticipants and these are correlated with the variables of interest in a study [11–13]. Methods to increase the odds of response in surveys have been widely investigated [12,14–19]. However, nonparticipation remains an unavoidable reality. Furthermore, comparing participants to nonparticipants is the exception rather than the rule; information on nonparticipants is rare in mortality followback surveys [13,20]. When this analysis is conducted, the statistical methods adopted are usually not consistent [21–24] or not clear [1,25–27]. It is therefore not surprising that factors such as ethnicity [21,28], age [21,27,29], relationship to the deceased [21], gender [29], place of death [22], interval from death to contact [22] and social deprivation [29] do not show a consistent association with participation across studies with bereaved relatives, even when the studied population and survey methods are similar. Crucially, multivariate analysis to adjust for potential confounders has rarely been applied to examine nonresponse in this type of survey.

In addition to methodological issues, a key ethical concern in mortality followback surveys is whether participants and nonparticipants are harmed by research. Current evidence suggests that most participants find it beneficial to take part and that many feel good about having the opportunity to help others in similar situations [30–34]. Those who do not participate, however, might have different views. A few studies have investigated the reasons why some bereaved relatives do not take part in mortality followback surveys. Their results suggest that grief and strong emotions [22,35–39] are common refusal reasons. Others include being “too busy” [36,37,39–42], not knowing the deceased [35,43], being too ill [36,39], or not being interested [39–42]. A better understanding of why people refuse to take part is still needed, as this knowledge can help identify areas in need of improvement in order to increase RRs.

This study aims to determine factors associated with participation in a cancer mortality followback postal survey and to systematically investigate reasons for refusing to take part. It also determines factors associated with providing an active refusal as opposed to a silent refusal (i.e. nonparticipants who neither contacted the research team to refuse participation nor returned a completed questionnaire). With this knowledge, we can identify underrepresented groups and ways to improve overall response in postal bereavement surveys.

Materials and Methods

1 Study design and setting

The QUALYCARE study was a mortality followback postal survey aiming to examine variations in the quality and costs of end-of-life care, preferences and palliative outcomes associated with death at home in cancer. The study took place in four health regions in London, United Kingdom (UK) with contrasting cancer home death rates and deprivation levels. These were chosen based on ecological analysis. The study protocol can be found elsewhere [44]. This analysis of the QUALYCARE dataset focuses on non-response, a key component of the study which was pre-specified in the QUALYCARE protocol.

2 Participants and sampling

The study was conducted with proxies of people aged ≥18 who lived in one of the four health regions and died from cancer between 5th March 2009 and 4th March 2010 (a one-year period). The deceased were identified from death registrations and the proxy was the person who registered the death (i.e. the informant of death). The Office for National Statistics (ONS) conducted the sampling in two waves (Nov 2009 and May 2010). They selected all deaths registered 4–10 months prior to when the survey would be sent by post and screened them against further eligibility criteria (deceased aged ≥18, resident in one of the four included health regions, cancer as the underlying cause of death). The sample was stratified by health region and place of death. For each health region we included all deaths that took place at home, in hospices and in nursing homes. We then drew a random sample of 150 hospital deaths per health region, or took all hospital deaths if the number was below 150; the latter happened in the two smallest regions. A random sample of hospital deaths was drawn in the larger health regions to ensure that hospital deaths did not overwhelm deaths taking place in other settings (the latter were all included to ensure they were well represented within the required sample size). In 2009 and 2010 over 40% of cancer deaths (44.0% and 42.0% respectively) in England and Wales occurred in a public hospital [45,46]. Hospital cancer deaths are also more likely in urban areas such as London [47]. Deaths that took place elsewhere, deaths with an unknown place of death, and deaths registered by coroners were excluded. Questionnaire packs were sent in two waves (January and July 2010) to cover the one-year period.
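
The sampling rule above (keep every home, hospice and nursing home death; cap hospital deaths at a random 150 per health region) can be sketched compactly. The code below is illustrative only, assuming a hypothetical pandas DataFrame with columns named health_region and place_of_death; it is not the sampling code actually used by the ONS.

```python
# Minimal sketch of the stratified sampling rule described above (assumed column names).
import pandas as pd

def draw_sample(deaths: pd.DataFrame, hospital_cap: int = 150, seed: int = 0) -> pd.DataFrame:
    """Keep all non-hospital deaths; cap hospital deaths at `hospital_cap` per health region."""
    non_hospital = deaths[deaths["place_of_death"] != "hospital"]
    hospital = (
        deaths[deaths["place_of_death"] == "hospital"]
        .groupby("health_region", group_keys=False)
        .apply(lambda g: g if len(g) <= hospital_cap
               else g.sample(n=hospital_cap, random_state=seed))
    )
    return pd.concat([non_hospital, hospital], ignore_index=True)
```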

The sample size needed for the original objectives of the QUALYCARE study was calculated based on Altman’s methods [48]; a total sample of ~500 participants was needed to enable powered comparisons of preferences for place of death, help from community nursing and satisfaction with GP care by place of death at the most detailed level of analysis (a case-control comparison of home vs. hospital deaths, which required ~350 participants). The RR was estimated at 38%, with an additional 10% allowance for missing data. Further information is available elsewhere [44].
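
As a back-of-the-envelope illustration, the figures above can be turned into the number of packs to post. The exact adjustment used in the protocol is not reproduced here, so the sketch below is only one plausible reading, with the target of ~500 usable questionnaires, the 38% expected RR and the 10% missing-data allowance taken from the text as assumptions.

```python
# Hedged sample-size arithmetic (illustrative only; not the authors' calculation).
import math

target_usable = 500      # participants needed for the powered comparisons
expected_rr = 0.38       # estimated response rate
missing_fraction = 0.10  # returned questionnaires expected to be lost to missing data

packs_needed = math.ceil(target_usable / (expected_rr * (1 - missing_fraction)))
print(packs_needed)  # -> 1462, of the same order as the 1516 packs actually posted
```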

3 Recruitment

The ONS sent each of the eligible participants a questionnaire pack by post, following an “opt-out” recruitment approach. This was done on behalf of the research team, which did not have access to the names and addresses of patients or informants of death. The pack included a personalised invitation letter (with the potential participant’s name and address); an information leaflet tailored to the specific health region explaining the purpose of the study and providing the research team’s address and telephone number; a reply slip and a small freepost envelope (addressed to the ONS) for people to refuse to take part if that was their wish; a leaflet produced by the Royal College of Psychiatrists with information about bereavement and sources of support [49]; the study questionnaire (numbered, gender-specific and with a coloured cover); and a large freepost envelope for returning the questionnaire to the research team.

Receiving a completed questionnaire was considered as consent to participate. Up to two reminders were sent to people who did not respond, at two and four weeks after the initial posting; the second reminder included another copy of the questionnaire. Those who had sent their refusals were coded as nonparticipants and did not receive any further reminders.

4 Data collection

Data were collected by using a purpose-built 44-page questionnaire (available from the research team), a reply slip in the questionnaire pack and a standardised call recording form which was completed for each telephone call answered by the research team (S1 and S2 Figs). Further pseudo-anonymised socio-demographic data were provided by the ONS.

4.1 The questionnaire.

QUALYCARE followed methods by Cartwright, McCarthy and Addington-Hall for national surveys in England on experiences of care in the last year of life [44]. The questionnaire included four other tools: the Client Service Receipt Inventory (CSRI), the Palliative care Outcome Scale (POS), a health status measure (EuroQoL EQ-5D-3L) and the Texas Revised Inventory of Grief (TRIG). Relevant socio-demographic information about the patient and the proxy was collected in the questionnaire. The questionnaire was piloted using cognitive interviewing with 20 bereaved relatives recruited via the palliative care department of a hospital in London [34,50,51]. This helped to refine the methods and improve the questionnaire.

4.2 Reply slip and calls.

The reply slip had a box for potential participants to tick in case they did not wish to take part. It also stated that, although not required, it would be helpful for the researchers to know why they decided not to participate. The slip had space for open comments and no reasons were suggested. The research team was also available over the telephone during working hours to answer any queries.

4.3 Death registration information.

The ONS provided the research team with a pseudo-anonymised and encrypted dataset for the entire sample. This dataset had information on patient’s region of residence, patient’s age, days and months from both actual death and registration of death until the date when the questionnaire was sent, place of death, patient’s gender, patient’s country of birth, proxy’s relationship to the deceased, and underlying cause of death (based on the 10th revision of the International Classification of Diseases—ICD-10). Underlying cause of death referred to cancer deaths only (codes C00 to D48) [52].

The ONS also provided overall Index of Multiple Deprivation (IMD) scores (2010) characterising the patient’s area of residence (according to Lower layer Super Output Areas). The overall IMD score is a single measure of multiple deprivation; it is a weighted area level aggregation of specific dimensions of deprivation (income, employment, education, health, among others) [53]. These were provided as overall scores and deciles, which were then recoded into quintiles. The first quintile represents the most deprived areas and the fifth quintile represents the least deprived areas.
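
The decile-to-quintile recoding is a simple mapping. The sketch below assumes deciles coded 1–10 with 1 as the most deprived; the function name and coding direction are assumptions for illustration, not taken from the ONS dataset.

```python
# Minimal sketch of recoding IMD deciles (1 = most deprived) into quintiles (1 = most deprived).
def decile_to_quintile(decile: int) -> int:
    if not 1 <= decile <= 10:
        raise ValueError("IMD decile must be between 1 and 10")
    return (decile + 1) // 2  # deciles 1-2 -> quintile 1, ..., deciles 9-10 -> quintile 5

assert [decile_to_quintile(d) for d in range(1, 11)] == [1, 1, 2, 2, 3, 3, 4, 4, 5, 5]
```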

5 Ethics

The study was approved by an NHS Research Ethics Committee (REC), namely the London—Dulwich REC, formerly King’s College London REC (ref no.: 09/H0808/85). The R&D offices of the health regions included in the study were notified about the study. No approval was needed from them as participants were not recruited through National Health Services. Access to and handling of the anonymised death registration data provided by the ONS was governed by a Data Access Agreement signed by the ONS and the Cicely Saunders Institute (where the researchers were based).

Returning a completed questionnaire was considered as consent to participate. The London-Dulwich REC approved this consent procedure. The research team had no access to participant identifiable data (this was only accessed by the ONS) unless this was willingly provided by participants at the end of the questionnaire (where they were asked if they wished to receive a short summary of the study results and if they were happy to be contacted by the research team).

The ‘opt-out’ approach to recruitment was decided following discussions with experts in ethics, end-of-life care and bereavement researchers, and clinicians, as well as consultation of national guidance and analysis of previous studies and of findings from the pilot study. Participants were informed in the invitation letter that the study had an ‘opt-out’ approach, and there were detailed procedures to ensure confidentiality and security of personal data. Participants were assured in the invitation letter and also in the information leaflet that they were under no obligation to take part or to answer any questions that they considered distressing. Procedures were followed to identify and support distressed participants [44].

6 Statistical analysis

Descriptive statistics, univariate and multivariate analyses were performed with the software PASW Statistics for Windows 18.0 (IBM).

We report descriptive data on the number of completed questionnaires, cumulative RRs, the distribution of RRs according to time of contact, reasons for refusing to take part in the study, and the socio-demographic characteristics of participants and nonparticipants. Response rates correspond to the number of returned questionnaires divided by the number of eligible people to whom questionnaires were sent by post.
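
For illustration, applying this definition to the counts reported in the Results (1516 eligible packs sent, 596 questionnaires returned) reproduces the 39.3% figure; a trivial sketch, not the authors’ code.

```python
# Response rate = returned questionnaires / eligible people sent a questionnaire.
def response_rate(returned: int, eligible_sent: int) -> float:
    return returned / eligible_sent

print(round(100 * response_rate(596, 1516), 1))  # -> 39.3
```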

Participants and nonparticipants were compared on socio-demographic variables using descriptive statistics. We carried out univariate and multivariate logistic regression to identify factors associated with participation and factors associated with active refusal. Variables were chosen a priori, based on factors previously associated with non-participation in the literature [21,22,27–29].

Our analyses included the variables health region and place of death to control for any design effect arising from the stratification of the sample by these two variables. We calculated unadjusted and adjusted odds ratios (OR and AOR, respectively) with 95% confidence intervals (CIs). We also evaluated how well the multivariate models fitted the observed data (ROC curve, Nagelkerke R2, Hosmer-Lemeshow goodness-of-fit test). All potential explanatory variables that we measured were included in the models and entered simultaneously. Tests were two-tailed and p<0.05 was deemed significant in the final regression models. Cases with missing data were excluded.
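
The original analyses were run in PASW/SPSS; the sketch below shows the same kind of multivariate logistic model in Python with statsmodels, purely for illustration. Column names such as participated, patient_age_group and imd_quintile are hypothetical stand-ins for the study variables, and the Nagelkerke R2 is computed from the log-likelihoods using one common formulation (the Hosmer-Lemeshow test is not shown).

```python
# Illustrative multivariate logistic regression with adjusted odds ratios and 95% CIs.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_participation_model(df: pd.DataFrame):
    # All candidate explanatory variables entered simultaneously; health region and
    # place of death included to account for the stratified design.
    formula = ("participated ~ C(patient_age_group) + C(place_of_death) "
               "+ C(proxy_gender) + C(proxy_relationship) + C(health_region) "
               "+ C(imd_quintile) + C(country_of_birth)")
    result = smf.logit(formula, data=df.dropna()).fit(disp=False)  # cases with missing data excluded

    # Adjusted odds ratios with 95% confidence intervals and p-values
    aor = pd.DataFrame({
        "AOR": np.exp(result.params),
        "CI_lower": np.exp(result.conf_int()[0]),
        "CI_upper": np.exp(result.conf_int()[1]),
        "p_value": result.pvalues,
    })

    # Nagelkerke R2 from the null and full log-likelihoods
    n = int(result.nobs)
    cox_snell = 1 - np.exp((2 / n) * (result.llnull - result.llf))
    nagelkerke = cox_snell / (1 - np.exp((2 / n) * result.llnull))
    return result, aor, nagelkerke
```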

All reasons for refusal were independently read by two researchers (both psychologists) and analysed line by line using content analysis. An inductive coding frame with categories and subcategories was derived from the data. After agreement was reached on all categories and subcategories, the same researchers independently coded each refusal. Kappa statistics were calculated for each subcategory to verify the raters’ level of agreement. Strength of agreement was assessed following the Landis and Koch guidelines (almost perfect agreement 0.81–1.00; substantial 0.61–0.80; moderate 0.41–0.60; poor <0.00) [54]. All discrepancies were discussed and resolved by consensus.
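
A minimal sketch of the inter-rater agreement step, using scikit-learn’s cohen_kappa_score and the Landis and Koch bands quoted above; the two raters’ codings shown are hypothetical binary labels (1 = subcategory assigned), not study data, and the bands below “moderate” are not quoted in the text.

```python
# Cohen's kappa per subcategory, classified with the Landis & Koch bands quoted above.
from sklearn.metrics import cohen_kappa_score

def landis_koch_band(kappa: float) -> str:
    if kappa < 0.00:
        return "poor"
    if kappa <= 0.40:
        return "slight/fair"  # bands not quoted in the text
    if kappa <= 0.60:
        return "moderate"
    if kappa <= 0.80:
        return "substantial"
    return "almost perfect"

rater1 = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical codings for one subcategory
rater2 = [1, 0, 1, 0, 0, 0, 1, 0]
kappa = cohen_kappa_score(rater1, rater2)
print(f"kappa = {kappa:.2f} ({landis_koch_band(kappa)})")
```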

Results

1 Response rates

We sent 1516 questionnaire packs and received 596 completed questionnaires, resulting in a 39.3% RR (Fig 1). We received 71.6% (n = 427) of the completed questionnaires during the first four weeks after they were sent by post (Fig 2). All but six questionnaires were returned 4 to 8 months (n = 324; 54.4%), 9 to 10 months (n = 169; 28.4%) or 11 to 12 months (n = 97; 16.3%) after the patient died. The median number of days from death to the return of a completed questionnaire was 232.50 (interquartile range: 191.25–281.00). Response rates varied from 32.4% in health region 2 to 46.6% in health region 1, and were higher in Wave 1 (41.2%) than in Wave 2 (37.2%).

Fig 1. QUALYCARE flow diagram.

NHS, National Health Services.

https://doi.org/10.1371/journal.pone.0146134.g001

Fig 2. Cumulative response rates by wave (1, 2 and both) and type of participant (early, middle and late).

https://doi.org/10.1371/journal.pone.0146134.g002

2 Sample characteristics

Most participants were women (60.5%), and the majority were either the patient’s son or daughter (46.2%) or spouse/partner (37.6%). There was a similar proportion of male (51.5%) and female (48.5%) patients represented in the survey. Patients’ median age at death was 77.0 years (interquartile range 66.0–84.0). The most common types of cancer were digestive (27.9%) and respiratory or intra-thoracic (21.5%) (Table 1).

Table 1. Characteristics of participants and nonparticipants.

https://doi.org/10.1371/journal.pone.0146134.t001

3 Factors associated with participation

When comparing participants with nonparticipants using univariate statistics, people living in more deprived areas (except the 4th quintile) were underrepresented (ORs ranged from 0.44 for the 1st quintile to 0.63 for the 2nd quintile), as were deaths outside home (ORs ranged from 0.58 for hospital to 0.70 for hospice) and cases in which the person who registered the death (proxy) was the patient’s son or daughter, brother or sister, niece/nephew or other (ORs ranged from 0.35 for other to 0.58 for a son/daughter). On the other hand, we observed an over-representation of patients aged 90+ (OR 2.37; 95% CI: 1.25–4.51) and proxies who were women (OR 1.72; 95% CI: 1.38–2.14). There were no significant differences between participants and nonparticipants in patient’s country of birth, patient’s gender, type of cancer or days from death to contact by researchers (Table 2).

Multivariate analysis using logistic regression showed that patient’s age, place of death, and proxy’s gender and relationship to the patient were independently associated with participation (Table 2). The odds of taking part were highest if the patient was aged 90+ (AOR 3.48; 95% CI: 1.52–8.00) and if the proxy was a woman (AOR 1.70; 95% CI: 1.33–2.16). The odds of being a participant were lowest if the patient died in hospital (AOR 0.62; 95% CI: 0.46–0.84) and if the proxy was not a spouse/partner (with the exceptions of brothers/sisters and parents; AORs ranged from 0.28 for other to 0.57 for a son/daughter).

4 Active refusal and associated factors

Of the 920 nonparticipants, 348 (37.8%) actively refused to take part (either by calling the team, sending back the entire pack, or posting a letter or the reply slip). The remaining nonparticipants (n = 572) did not contact the research team to refuse participation and did not return a completed questionnaire (i.e. the silent refusals) (Table 3). These comprised 539 (94.2%) nonparticipants who did not contact the research team in any way, 25 (4.4%) who called the research team stating they would try to complete the questionnaire but never managed to do so, three (0.5%) who informed the research team that they had passed the questionnaire on to someone else (but no completed questionnaire was ever received) and five (0.9%) questionnaire packs which were returned unopened because the addressee had moved away (Fig 1). Eight potential participants who had sent a reply slip refusing participation changed their minds and later sent a completed questionnaire by post (these were included only in the participants’ group).

As in the comparison between participants and nonparticipants, univariate analysis comparing active refusals with silent refusals showed that active refusals were provided less often by proxies of patients who lived in the two most deprived areas (OR 0.55 for the 1st quintile and 0.57 for the 2nd quintile), by proxies of patients who lived in health region 2 (OR 0.62; 95% CI: 0.42–0.91) or health region 4 (OR 0.62; 95% CI: 0.41–0.93), and by proxies who were the patient’s son/daughter (OR 0.31; 95% CI: 0.22–0.44) or other (OR 0.40; 95% CI: 0.20–0.78). Informants of patients aged 90+ (OR 2.77; 95% CI: 1.16–6.59) and 80–89 (OR 2.10; 95% CI: 1.07–4.13) more often actively refused to take part compared with informants of patients aged 20–49; female proxies were also more likely to provide active refusals (OR 1.84; 95% CI: 1.38–2.45). Proxies of patients who were born overseas provided active refusals less often than proxies of patients who were born in the UK/Ireland (OR 0.51; 95% CI: 0.35–0.72) (Table 4).

Table 4. Factors associated with giving an active refusal.

https://doi.org/10.1371/journal.pone.0146134.t004

Multivariate analysis using logistic regression showed that the patient’s age, the proxy’s gender and relationship to the patient, and the patient’s country of birth were independently associated with actively refusing to take part in the survey (Table 4). Informants of patients aged 70+ (AORs ranged from 3.42 for the 70–79 age group to 6.59 for the 90+ age group) and female proxies (AOR 1.58; 95% CI: 1.14–2.20) were significantly more likely to give an active refusal. The odds were lower if the informant was a son/daughter (AOR 0.26; 95% CI: 0.18–0.40), a grandchild (AOR 0.19; 95% CI: 0.05–0.70) or other (AOR 0.25; 95% CI: 0.11–0.56), or if the patient was born overseas (AOR 0.49; 95% CI: 0.32–0.77). Area social deprivation and health region lost statistical significance after adjusting for other factors. Time since death, patient’s gender, place of death, and type of cancer were not associated with the provision of active refusals.

5 Reasons for refusal

From the 348 nonparticipants who actively refused to take part, 205 (58.9%) justified their decision. Of these, 80.0% wrote their reasons in the reply slip (n = 164), 17.6% (n = 36) gave reasons over the phone and 2.4% (n = 5) wrote a letter to the research team. In total, 350 refusal reasons were given, with 38.1% (n = 78) giving more than one reason (range 1 to 7 reasons).

Sixty-four percent of the nonparticipants who provided reasons for refusal were women; 75.6% were proxies of patients aged 70–90+ years old, 40.5% were informants for patients who had died in a hospital and 33.7% for patients who had died in a nursing home.

Through content analysis, we identified seven categories of reasons for refusal (study-related, proxy-related, grief-related, life-related, care-related, non-specific and other) and 34 subcategories (Table 5). Coding agreement between raters was almost perfect (≥0.81) for 23 subcategories, substantial (0.61–0.80) for eight subcategories, moderate (0.41–0.60) for two and poor (<0.00) for one subcategory (‘feels nothing more to be said about the care received’).

The most common reasons were study-related (36.0%, or 126 reasons). These referred to informants feeling that the questions were not applicable to the patient’s case, that the questionnaire was too long, that questions were upsetting, or beliefs that the questionnaire would not make any difference. Proxy-related and grief-related reasons were the second most frequently mentioned (each accounting for 25.1% of all reasons, or 88 reasons each). Proxy-related reasons were provided by professionals who registered the death of a patient when there was no family around, relatives who felt they had no knowledge about the care received, those replying on behalf of relatives who were the main carers, or the main carers themselves stating they were disabled or frail and could not therefore complete the questionnaire. The most common grief-related reasons were from nonparticipants stating that they were still grieving and that it would be too upsetting or painful to take part. Other reasons for refusal were life-related (4.6% of all reasons), such as being too busy to take part; care-related (2.3%), such as stating that the patient received good care; and non-specific (4.9%). The latter referred to nonparticipants simply stating that they did not wish to take part. Less than 3.0% of reasons were coded as “other” and these are fully described in Table 5. Examples of nonparticipant quotes by main category are shown in Table 6.

Table 6. Examples of reasons for refusal by main category*.

https://doi.org/10.1371/journal.pone.0146134.t006

Discussion

Using multivariate analysis of socio-demographic population-based data, the QUALYCARE study identified important factors associated with participation in a postal mortality followback survey. We found that deaths in hospital and younger adult patients were underrepresented. We also found that similar factors were associated with both participation and giving an active refusal, except for country of birth, which was only significant for the latter. Women and proxies of older patients were more likely to participate in the survey and also more likely to actively refuse when they decided not to take part. Likewise, relatives other than the spouse/partner were less likely to participate and also less likely to actively refuse when they decided not to participate. Through content analysis, we identified reasons for refusal and found that these were often multidimensional. Although grieving was an important reason, other issues related to the study itself and its design were also prominent. This suggests scope for improvement and a potential increase in RRs.

The study’s RR is low compared to bereavement studies carried out in Italy (65% RR) [22], the United States (65% cooperation rate) [1] and the two recent national bereavement surveys carried out in England (46% RR for each) [2,55], but it is in line with many others [23–26,35,37,56,57] and 1% higher than we had estimated. Although we followed evidence-based strategies to increase RRs (e.g. follow-up contact, provision of a second copy of the questionnaire and use of a personalised questionnaire) [17], the topic was sensitive and we used a long questionnaire to meet our research aims. In addition, QUALYCARE was conducted in London, the UK city with the highest proportion of ethnic minorities [58]. All these factors were previously found to affect RRs [1,12,20]. Nonetheless, for this particular analysis, non-response posed no challenges; on the contrary, it made the regression analyses easier because there was a reasonable number of "events" (i.e. non-participation) for the modelling.

QUALYCARE was specifically funded and designed to investigate the care provided to patients with cancer. This is a common cause of death (29% in 2013) in England and Wales [59] and was responsible for 25% of all deaths in Europe in 2010 [60], but as a consequence patients who died from conditions other than cancer were not included in the study. People dying from non-malignant conditions have a less predictable illness trajectory and are less likely to access palliative care services, for example [61–63]. It is therefore possible that their experiences of care and participation in research differ from those of people affected by cancer. Any transferability of our results to conditions other than cancer needs to be carefully considered. Subsequent to the present study, Evans et al. have adapted the QUALYCARE methodology to survey the end-of-life care provided to people dying from non-malignant conditions [64]; their analysis of non-response will help shed light on the factors associated with participation, active refusals and reasons for not taking part in a similar mortality followback survey within this patient group.

1 Factors associated with participation and providing active refusal

Since London is an urban area with an ethnically diverse population, caution is needed when generalising results to other areas. Another limitation is that QUALYCARE is a cross-sectional survey, and any associations found do not imply causality. Finally, our regression models were not able to explain much of the variance in either participation or the provision of active refusals. Other factors that we did not include in our models (such as complicated grief or depression) could possibly have played a role.

1.1 Patient age.

We found that proxies of older patients were more likely to take part in the study and to give an active refusal. This supports previous findings that older patient age is associated with increased odds of participation [21,24,27–29]. The reasons for this are not fully understood. Perhaps death is less of a shock when patients are older and, as a consequence, people feel more able to take part (or to contact the research team to let them know that they do not wish to participate). Our results suggest that QUALYCARE data might be more relevant to the care provided to older people. This could limit the sample’s representativeness, but it can provide crucial evidence for end-of-life care as populations in need get older [65].

1.2 Relationship to the deceased and proxy gender.

Relatives other than the spouse/partner were less likely to take part in our study, in contrast with a similar survey in Italy (the ISDOC study), which found that sons and daughters were more likely to take part [22]. Family structures in Italy might help to explain these differences (e.g. strong family support and extended families living together) [66]. In the UK, carers looking after someone in the same household are more likely to be spouses or partners [67]. However, sons and daughters were the largest group among the informants of death in our study, and they were less likely to respond than spouses and partners. This may be because sons and daughters have family responsibilities that preclude them from taking part in research to the same extent as spouses and partners. They might also live in a different household (or be less involved in the care provided to the patient) compared with the patient’s spouse or partner. It is possible that the response rate would have been higher if more spouses or partners had been informants. However, cancer is a condition strongly associated with older age [68], so it is reasonable to expect that a large number of informants will be sons and daughters.

We also found that female proxies were more likely to participate, in contrast to findings from other studies [22,27,29]. When women did not take part, they were more likely to provide active refusals. This association was present even after accounting for the relationship to the deceased. Research evidence on grief suggests that men and women grieve differently, and that men are less likely to talk about their feelings and experiences [69]; this might help explain our results.

1.3 Place of death.

Proxies of patients who died in hospital were less likely to return a completed questionnaire compared to proxies of patients who died at home. Our results are similar to those from the ISDOC study in Italy [22]; other studies in the UK also found higher response for proxies of patients who died at home [24,43,57]. None of these studies, however, discussed explanations and it is possible that the reasons are multifactorial. Importantly, in the main analysis of the QUALYCARE study we found that a home death was more peaceful for patients than death in hospital and resulted in less intense grief for proxies, after adjusting for confounders [70]. This is consistent with evidence stating that people experiencing fewer problems and having more positive views about the care received may be more likely to return a completed questionnaire [71]. It is also possible that proxies of patients who died at home knew more about the care provided and felt more able to contribute to the study.

1.4 Social deprivation and country of birth.

Social deprivation was associated with participation in univariate analysis, but lost statistical significance after adjusting for confounding factors. This might have occurred because people living in deprived areas are more likely to die at a younger age and in hospitals [72]; both patient’s age and hospital death were associated with participation in our study. In our multivariate analysis investigating factors associated with active refusal, deprivation also lost statistical significance, and this may be because patients born overseas (whose proxies were less likely to give active refusals) usually live in more deprived areas, especially in inner London [58]. Our findings suggest that other factors might play a bigger role than deprivation itself, but any conclusions need to be treated with caution and the topic warrants more research. Furthermore, deprivation is an area-level variable rather than an individual-level variable and there is a risk of ecological fallacy: those living in a more deprived area are not necessarily deprived, and, likewise, those who live in a less deprived area can actually be deprived. This is especially true in London, where health regions can have an unusual spatial distribution of both very affluent and very poor residents [73].

Proxies of patients who were born overseas (as opposed to born in the UK/Ireland) were less likely to provide an active refusal. Almost half of the proxies of patients who were born overseas (139/287) were nonparticipants who did not actively refuse to take part. This means that not only do we have less knowledge about the care this patient group received, but we also do not know why their proxies did not participate. This could be due to several reasons (e.g. language barriers, different ways of grieving). Adding a note in different languages to the back of the questionnaire, so potential participants know that they can have a translated version if required, may be a helpful approach. Reaching out to these groups is a necessity in future research to help provide the best possible care for all, regardless of country of origin or background.

1.5 Timing of contact.

Interestingly, timing of contact was not a significant factor in our study, contrary to findings from the ISDOC study in Italy [22]. This was the only study with bereaved relatives we could identify which also investigated the association of timing of contact with participation using multivariate analysis. Its authors found that an increased interval from death to receiving the questionnaire made participation less likely. In QUALYCARE, although the majority of participants returned questionnaires 4–10 months after the patient’s death, 97 questionnaires were returned 11 to 12 months after the patient had died (and six questionnaires were returned even later than that). Furthermore, eight potential participants who had refused to take part changed their minds and later returned a completed questionnaire. Perhaps this is a benefit of postal surveys: they give people more time to consider taking part than they would have in a face-to-face contact, and also more time to think about what the questionnaire requires and whether to respond to it.

2 Reasons for refusal

Our content analysis of reasons for refusal was limited by the comments we received. Since most of them were brief, we were not able to carry out an in-depth analysis. Furthermore, although our study shed light on reasons why people did not take part, reasons for nonparticipation were unknown for the majority of nonparticipants (77.7%, or 715 nonparticipants). This group includes 143 proxies who actively refused to take part without giving any reasons and 572 silent refusals. Their reasons might differ from the ones we received. It is also worth noting that the reasons for refusal provided over the telephone were not transcribed verbatim, as conversations were not recorded; researcher bias may have occurred as a consequence.

Our findings show that more than a third (37.8%) of nonparticipants provided an active refusal, and more than half (58.9%) of those justified their reasons. Including reply slips in questionnaire packs seems to be an effective strategy to understand better the reasons for not taking part in cancer mortality followback surveys.

As expected based on the literature, grief is an important reason for refusal [22,36–39], although in our study it was not the most common reason (accounting for a quarter of all reasons provided). This suggests that no matter how sensitively a questionnaire is developed, some potential participants will simply not feel ready to respond. It is also possible that grief was underestimated as a refusal reason, as people experiencing intense grief might be among those who did not contact the research team in any way.

Refusal reasons related to a perceived lack of knowledge about the care provided point to a limitation of using death registration data, since not all who register someone’s death have information about the care received by the patient. On the other hand, this suggests that the completed questionnaires can provide a more accurate picture of people’s experiences (since participants would have felt that they had sufficient knowledge to complete them). However, it also indicates that the study might be underrepresenting patients without informal carers or family around.

Proxy-related reasons also raise the issue of contacting informal carers who are sometimes frail or disabled. In the context of an ageing population [65], this is likely to become a more common pattern in the future. The need to develop simpler, shorter questionnaires that are not burdensome to informal carers is therefore even more urgent. Questionnaire length was also highlighted as a study-related reason for refusal. Nonetheless, it is possible that a short questionnaire could be perceived by some as not doing justice to a life event as meaningful and salient as the end-of-life care received by a loved one. Emphasising the importance of the study (and how it might benefit other patients and families in the future), working carefully on sensitive questions and leaving scope in the questionnaire for different care situations are also ways forward. This can be challenging depending on the study aims (and when trying to develop shorter surveys).

3 Implications for research

Although the QUALYCARE study shed light on factors associated with participation while accounting for a number of potential confounders, further multivariate analyses of similar studies are still needed.

The need for more research is especially critical regarding the views and experiences of patients dying in hospitals and those without informal carers. Better representativeness might be achieved by using different proxies (such as care staff instead of close friends or relatives), by having questionnaires translated into different languages, and by being culturally sensitive when developing tools, among other strategies. Targeted prospective studies, instead of mortality followback surveys, might also be more appropriate to reach these groups. Results from mortality followback surveys focused on non-cancer conditions will be crucial to build evidence on non-response for proxies of patients with non-malignant conditions. Finally, investigating effective ways to reduce questionnaire length in future studies may help to improve response rates.

Conclusion

In summary, our results show that proxies of older patients, female informants and patients’ spouses were more likely to take part in a postal mortality followback survey designed to assess the quality of end-of-life care provided to patients dying from cancer, whilst patients dying in hospitals were underrepresented. We also had little information about reasons for non-participation from proxies of patients who were born overseas. Changes to the study design and methods might help to increase RRs (as study-related reasons were the most commonly mentioned reasons for refusal), but it is important to be aware that reasons for refusal are often multidimensional and that grief is a common reason that must be accounted for and respected.

We used a robust, validated methodology to survey bereaved relatives and had a powered sample of a population dying from cancer in a metropolitan area in the UK. Our findings add much needed evidence to the field of end-of-life care studies with bereaved relatives. However, we still need further powered studies that use robust research methods. We also need to investigate different approaches which might be more appropriate to reach underrepresented groups.

Supporting Information

S2 Fig. QUALYCARE study call recording form.

https://doi.org/10.1371/journal.pone.0146134.s002

(TIF)

S1 Table. Completed Strobe checklist for cross-sectional studies.

https://doi.org/10.1371/journal.pone.0146134.s003

(DOC)

Acknowledgments

We are most grateful to the research participants and nonparticipants for taking part, considering taking part and providing reasons for not doing so. We thank Cicely Saunders International for their support. The study involved many collaborators, to whom we are indebted: Professor Paul McCrone and Dr Sue Hall at King’s College London, the Department of Palliative Care at the Royal Marsden Hospital (Dr Julia Riley, Meena Valambhia); Dr Elizabeth Davies and Peter Madden at the Thames Cancer Registry; Prof Mike Richards and Tessa Ing on behalf of the Department of Health; the ONS Health Analysis team (Myer Glickman, Peter Davies, Stephen Rowlands, Justine Pooley); the Islington, Westminster, Bromley, and Sutton & Merton Primary Care Trusts (Nada Lemic, Clare Henderson, Jacqui Lindo, Ursula Daee, end of life care groups, communication teams); the South East London Cancer Research Network (Kerry Hylands for data entry and governance advice). We thank the members of the Project Steering Group, patient/family representatives (Nell Dunn, Kirstie Newson), the International Scientific Expert Panel from Cicely Saunders International, Brenda Ferns, and external advisors (Dr Anita Patel, Professor Colin Murray Parkes, Dr Joana Cadima, Dr Massimo Costantini). We also thank local services who helped us understand better the findings, including staff at St. Christopher’s Hospice, the ELiPSe team, and colleagues at King’s College London (Jonathan Koffman, Emma Murphy, Fliss Murtagh, Marjolein Gysels, Claudia Bausewein, Thomas Osborne, Lucy Selman, Vera Sarmento, Gao Wei, amongst others). Thanks to the Calouste Gulbenkian Foundation for patience and encouragement. We are also grateful to Professors Gunn Grande and Christine Ingleton for their insightful comments.

Author Contributions

Conceived and designed the experiments: BG IJH JK. Performed the experiments: BG NC. Analyzed the data: BG NC. Contributed reagents/materials/analysis tools: BG NC IJH JK. Wrote the paper: BG NC IJH JK.

References

  1. Teno JM, Clarridge BR, Casey V, Welch LC, Wetle T, Shield R, et al. (2004) Family perspectives on end-of-life care at the last place of care. JAMA 291: 88–93. pmid:14709580
  2. Office for National Statistics National Bereavement Survey (VOICES), 2013. Office for National Statistics. Available: http://www.ons.gov.uk/ons/dcp171778_370472.pdf.
  3. Miyashita M, Morita T, Hirai K (2008) Evaluation of end-of-life cancer care from the perspective of bereaved family members: the Japanese experience. Journal of Clinical Oncology 26: 3845–3852. pmid:18688051
  4. Beccaro M, Costantini M, Giorgi Rossi P, Miccinesi G, Grimaldi M, Bruzzi P (2006) Actual and preferred place of death of cancer patients. Results from the Italian survey of the dying of cancer (ISDOC). Journal of Epidemiology & Community Health 60: 412–416.
  5. Teno JM (2005) Measuring End-of-Life Care Outcomes Retrospectively. Journal of Palliative Medicine 8: s42–s49. pmid:16499468
  6. Addington-Hall J (2002) Research sensitivities to palliative care patients. European Journal of Cancer Care 11: 220–224. pmid:12296842
  7. Parkes CM (1995) Guidelines for conducting ethical bereavement research. Death Studies 19: 171–181. pmid:11652995
  8. Phipps EJ (2002) What's end of life got to do with it? Research ethics with populations at life's end. Gerontologist 42: 104–108. pmid:12415140
  9. Stroebe M, Stroebe W, Schut H (2003) Bereavement research: methodological issues and ethical concerns. Palliative Medicine 17: 235–240. pmid:12725476
  10. Williams BR, Woodby LL, Bailey FA, Burgio KL (2008) Identifying and responding to ethical and methodological issues in after-death interviews with next-of-kin. Death Studies 32: 197–236. pmid:18705168
  11. Barriball KL, While AE (1999) Non-response in survey research: a methodological discussion and development of an explanatory model. Journal of Advanced Nursing 30: 677–686. pmid:10499225
  12. Groves RM, Peytcheva E (2008) The impact of nonresponse rates on nonresponse bias—A meta-analysis. Public Opinion Quarterly 72: 167–189.
  13. MacDonald SE, Newburn-Cook CV, Schopflocher D, Richter S (2009) Addressing nonresponse bias in postal surveys. Public Health Nursing 26: 95–105. pmid:19154197
  14. Asch DA, Jedrziewski MK, Christakis NA (1997) Response rates to mail surveys published in medical journals. Journal of Clinical Epidemiology 50: 1129–1136. pmid:9368521
  15. Edwards P, Roberts I, Clarke M, DiGuiseppi C, Pratap S, Wentz R, et al. (2002) Increasing response rates to postal questionnaires: systematic review. BMJ 324: 1183. pmid:12016181
  16. Edwards P, Roberts I, Sandercock P, Frost C (2004) Follow-up by mail in clinical trials: does questionnaire length matter? Controlled Clinical Trials 25: 31–52. pmid:14980747
  17. Edwards PJ, Roberts I, Clarke MJ, DiGuiseppi C, Wentz R, Kwan I, et al. (2009) Methods to increase response to postal and electronic questionnaires. Cochrane Database of Systematic Reviews.
  18. Scott P, Edwards P (2006) Personally addressed hand-signed letters increase questionnaire response: a meta-analysis of randomised controlled trials. BMC Health Services Research 6: 111. pmid:16953871
  19. Koopman L, Donselaar LC, Rademakers JJ, Hendriks M (2013) A prenotification letter increased initial response, whereas sender did not affect response rates. Journal of Clinical Epidemiology 66: 340–348. pmid:23347856
  20. Cartwright A (1986) Who responds to postal questionnaires? Journal of Epidemiology & Community Health 40: 267–273.
  21. Casarett D, Smith D, Breslin S, Richardson D (2010) Does nonresponse bias the results of retrospective surveys of end-of-life care? Journal of the American Geriatrics Society 58: 2381–2386. pmid:21087223
  22. Costantini M, Beccaro M, Merlo F (2005) The last three months of life of Italian cancer patients. Methods, sample characteristics and response rate of the Italian Survey of the Dying of Cancer (ISDOC). Palliative Medicine 19: 628–638. pmid:16450880
  23. Ingleton C, Morgan J, Hughes P, Noble B, Evans A, Clark D (2004) Carer satisfaction with end-of-life care in Powys, Wales: a cross-sectional survey. Health & Social Care in the Community 12: 43–52.
  24. Jacoby A, Lecouturier J, Bradshaw C, Lovel T, Eccles M (1999) Feasibility of using postal questionnaires to examine carer satisfaction with palliative care: a methodological assessment. South Tyneside MAAG Palliative Care Study Group. Palliative Medicine 13: 285–298. pmid:10659098
  25. Burt J, Shipman C, Richardson A, Ream E, Addington-Hall J (2010) The experiences of older adults in the community dying from cancer and non-cancer causes: a national survey of bereaved relatives. Age & Ageing 39: 86–91.
  26. Hanratty B (2000) Palliative care provided by GPs: the carer's viewpoint. British Journal of General Practice 50: 653–654. pmid:11042919
  27. Seeman I, Poe GS, McLaughlin JK (1989) Design of the 1986 National Mortality Followback Survey: considerations on collecting data on decedents. Public Health Reports 104: 183–188. pmid:2495553
  28. Kross EK, Engelberg RA, Shannon SE, Curtis JR (2009) Potential for response bias in family surveys about end-of-life care in the ICU. Chest 136: 1496–1502. pmid:19617402
  29. Addington-Hall J, McCarthy M (1995) Regional Study of Care for the Dying: methods and sample characteristics. Palliative Medicine 9: 27–35. pmid:7719516
  30. Beck AM, Konnert CA (2007) Ethical issues in the study of bereavement: the opinions of bereaved adults. Death Studies 31: 783–799. pmid:17886410
  31. Cook AS, Bosley G (1995) The experience of participating in bereavement research: stressful or therapeutic? Death Studies 19: 157–170. pmid:11652994
  32. Kendall M, Harris F, Boyd K, Sheikh A, Murray SA, Brown D, et al. (2007) Key challenges and ways forward in researching the "good death": qualitative in-depth interview and focus group study. BMJ 334: 521. pmid:17329313
  33. Emanuel EJ, Fairclough DL, Wolfe P, Emanuel LL (2004) Talking with terminally ill patients and their caregivers about death, dying, and bereavement: is it stressful? Is it helpful? Archives of Internal Medicine 164: 1999–2004. pmid:15477434
  34. Koffman J, Higginson IJ, Hall S, Riley J, McCrone P, Gomes B (2012) Bereaved relatives' views about participating in cancer research. Palliative Medicine 26: 379–383. pmid:21606127
  35. Addington-Hall J, West P, Karlsen S, West M (1999) Care in the last year of life in Lambeth, Southwark and Lewisham: final report. King's College London: Department of Palliative Care & Policy.
  36. Curtis J, Patrick D, Engelberg R, Norris K, Asp C, Byock I (2002) A measure of the quality of dying and death. Initial validation using after-death interviews with family members. Journal of Pain & Symptom Management 24: 17–31.
  37. Escobar Pinzon LC, Munster E, Fischbeck S, Unrath M, Claus M, Martini T, et al. (2010) End-of-life care in Germany: Study design, methods and first results of the EPACS study (Establishment of Hospice and Palliative Care Services in Germany). BMC Palliative Care 9: 16. pmid:20673326
  38. Skulason B, Helgason AR (2010) Identifying obstacles to participation in a questionnaire survey on widowers' grief. BMC Palliative Care 9: 7. pmid:20429883
  39. Neal M, Carder P, Morgan D (1996) Use of Public Records to Compare Respondents and Nonrespondents in a Study of Recent Widows. Research on Aging 18: 219–242.
  40. Stroebe MS, Stroebe W (1989) Who participates in bereavement research? A review and empirical study. Omega Journal of Death and Dying 20: 1–29.
  41. Caserta M, Lund D, de Vries B, Utz R, Bearnson K (2010) Dual Process Intervention for Recently Bereaved Spouses (R01 AG023090). Final Report: National Institute on Aging, National Institutes of Health, Bethesda, MD.
  42. Lund D, Caserta M, Dimond M (1993) The course of spousal bereavement in later life. In: Stroebe M, Stroebe W, Hansson R, editors. Handbook of bereavement: Theory, research and intervention. London: Cambridge University Press. pp. 240–254.
  43. Hunt K, Shlomo N, Richardson A, Addington-Hall J (2011) VOICES Redesign and Testing to Inform a National End of Life Care Survey. Final Report for the Department of Health: Department of Health. Available: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/215503/dh_128825.pdf
  44. Gomes B, McCrone P, Hall S, Koffman J, Higginson IJ (2010) Variations in the quality and costs of end-of-life care, preferences and palliative outcomes for cancer patients by place of death: the QUALYCARE study. BMC Cancer 10: 400. pmid:20678203
  45. Office for National Statistics (2010) Deaths Registered in England and Wales, 2009. Table 12—Deaths: place of occurrence and sex by underlying cause and age group. Available: http://www.ons.gov.uk/ons/publications/re-reference-tables.html?edition=tcm%3A77-199137
  46. Office for National Statistics (2011) Deaths Registered in England and Wales, 2010. Table 12—Deaths: place of occurrence and sex by underlying cause and age group. Available: http://www.ons.gov.uk/ons/publications/re-reference-tables.html?edition=tcm%3A77-230730
  47. Gomes B, Higginson IJ (2008) Where people die (1974–2030): past trends, future projections and implications for care. Palliative Medicine 22: 33–41. pmid:18216075
  48. Altman DG (1991) Practical statistics for medical research. London: Chapman and Hall.
  49. Timms P Bereavement. RCPsych Public Education Editorial Board. Available: http://www.rcpsych.ac.uk/expertadvice/problems/bereavement/bereavement.aspx.
  50. Higginson I, Hall S, Koffman J, Riley J, Gomes B (2010) Time to get it right: are preferences for place of death more stable than we think? Palliative Medicine 24: 352–353. pmid:20145092
  51. Gomes B, McCrone P, Hall S, Riley J, Koffman J, Higginson IJ (2013) Cognitive interviewing of bereaved relatives to improve the measurement of health outcomes and care utilisation at the end of life in a mortality followback survey. Supportive Care in Cancer: 1–10.
  52. World Health Organisation (1992) ICD-10 Classifications of Mental and Behavioural Disorder: Clinical Descriptions and Diagnostic Guidelines. Geneva: World Health Organisation. Available: http://www.who.int/classifications/icd/en/bluebook.pdf
  53. McLennan D, Barnes H, Noble M (2011) The English Indices of Deprivation 2010: Technical Report. Social Disadvantage Research Centre, Oxford Institute of Social Policy, University of Oxford: Department for Communities and Local Government. Available: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/6320/1870718.pdf
  54. Landis J, Koch G (1977) The measurement of observer agreement for categorical data. Biometrics 33: 159–174. pmid:843571
  55. Office for National Statistics National Bereavement Survey (VOICES), 2012. Office for National Statistics. Available: http://www.ons.gov.uk/ons/dcp171778_317495.pdf.
  56. Addington-Hall J, Walker L, Jones C, Karlsen S, McCarthy M (1998) A randomised controlled trial of postal versus interviewer administration of a questionnaire measuring satisfaction with, and use of, services received in the year before death. Journal of Epidemiology & Community Health 52: 802–807.
  57. Grande GE, Ewing G, National Forum for Hospice at Home (2009) Informal carer bereavement outcome: relation to quality of end of life support and achievement of preferred place of death. Palliative Medicine 23: 248–256. pmid:19251831
  58. Department for Communities and Local Government (2009) Transforming Places; Changing Lives: taking forward the regeneration framework Equality Impact Assessment. London: Department for Communities and Local Government. Available: http://webarchive.nationalarchives.gov.uk/20120919132719/http://www.communities.gov.uk/documents/regeneration/pdf/1227742.pdf
  59. Office for National Statistics (2014) Deaths Registered in England and Wales, 2013. Table 12—Deaths: place of occurrence and sex by underlying cause and age group, 2013. Office for National Statistics. Available: http://www.ons.gov.uk/ons/publications/re-reference-tables.html?edition=tcm%3A77-327590.
  60. Eurostat (2014) Health. Available: http://ec.europa.eu/eurostat/documents/3217494/5786213/KS-HA-14-001-02-EN.PDF/68e057e3-8ff3-4178-9615-d13196f6d50a?version=1.0
  61. Gill TM, Gahbauer EA, Han L, Allore HG (2010) Trajectories of Disability in the Last Year of Life. New England Journal of Medicine 362: 1173–1180. pmid:20357280
  62. Teno JM, Weitzen S, Fennell ML, Mor V (2001) Dying trajectory in the last year of life: does cancer trajectory fit other diseases? Journal of Palliative Medicine 4: 457–464. pmid:11798477
  63. Grande G, Ewing G (2008) Death at home unlikely if informal carers prefer otherwise: implications for policy. Palliative Medicine 22: 971–972. pmid:18952753
  64. Evans C, Bone A, Yi D, Wei G, Gomes B, Maddocks M, et al. (2015) Factors associated with end of life transition for older adults living at home: analysis of carers' post-bereavement survey. BMJ Supportive & Palliative Care 5: 116.
  65. Gomes B, Cohen J, Deliens L, Higginson IJ (2011) International trends in circumstances of death and dying. In: Gott M, Ingleton C, editors. Living with Ageing and Dying: Palliative and end of life care for older people. 1st ed. Oxford: Oxford University Press. pp. 3–18.
  66. Meñaca A, Evans N, Andrew EV, Toscani F, Finetti S, Gómez-Batiste X, et al. End-of-life care across Southern Europe: a critical review of cultural similarities and differences between Italy, Spain and Portugal. Critical Reviews in Oncology-Hematology 82: 387–401.
  67. The Health and Social Care Information Centre Survey of Carers in Households 2009/10. The NHS Information Centre. Available: http://www.hscic.gov.uk/catalogue/PUB02200/surv-care-hous-eng-2009-2010-rep1.pdf.
  68. Calanzani N, Higginson IJ, Gomes B (2013) Current and future needs for hospice care: an evidence-based report. London: Commission into the Future of Hospice Care. Available: http://www.helpthehospices.org.uk/EasysiteWeb/getresource.axd?AssetID=128710&servicetype=Attachment
  69. Parkes CM, Prigerson HG (2010) Determinants of grief I: Kinship, gender and age. In: Parkes CM, Prigerson HG, editors. Bereavement: Studies of grief in adult life. 4th ed. London: Penguin Books. pp. 137–151.
  70. Gomes B, Calanzani N, Koffman J, Higginson IJ (2015) Is dying in hospital better than home in incurable cancer and what factors influence this? A population-based study. BMC Medicine 13: 235. pmid:26449231
  71. Perneger TV, Chamot E, Bovier PA (2005) Nonresponse bias in a survey of patient perceptions of hospital care. Medical Care 43: 374–380. pmid:15778640
  72. Pring A, Verne J (2012) Deprivation and death: Variation in place and cause of death. South West Public Health Observatory. National End of Life Care Intelligence Network. Available: http://www.endoflifecare-intelligence.org.uk/view?rid=254
  73. NHS Islington Annual Public Health Report 2010. Understanding the gap: improving life expectancy in Islington. NHS Islington. Available: http://www.islington.gov.uk/publicrecords/library/Public-health/Quality-and-performance/Reporting/2012-2013/%282013-01-18%29-APHR-2010-Full-report.pdf.