
Latent classes associated with the intention to use a symptom checker for self-triage

  • Stephanie Aboueid ,

    Roles Conceptualization, Formal analysis, Investigation, Methodology, Project administration, Resources, Validation, Writing – original draft

    seaboueid@uwaterloo.ca

    Affiliation School of Public Health and Health Systems, University of Waterloo, Waterloo, Ontario, Canada

  • Samantha B. Meyer,

    Roles Investigation, Methodology, Resources, Supervision, Writing – review & editing

    Affiliation School of Public Health and Health Systems, University of Waterloo, Waterloo, Ontario, Canada

  • James Wallace,

    Roles Methodology, Resources, Supervision, Writing – review & editing

    Affiliation School of Public Health and Health Systems, University of Waterloo, Waterloo, Ontario, Canada

  • Ashok Chaurasia

    Roles Formal analysis, Investigation, Methodology, Resources, Software, Supervision, Writing – review & editing

    Affiliation School of Public Health and Health Systems, University of Waterloo, Waterloo, Ontario, Canada

Abstract

It is currently unknown which attitude-based profiles are associated with symptom checker use for self-triage. We sought to identify, among university students, attitude-based latent classes (population profiles) and the association between these latent classes and the future use of symptom checkers for self-triage. Informed by the Technology Acceptance Model and a larger mixed methods study, a cross-sectional survey was developed and administered to students aged 18 to 34 years at a university in Ontario. Latent class analysis (LCA) was used to identify the attitude-based profiles present in the sample, while generalized logit modeling was applied to estimate the association between latent classes and future symptom checker use for self-triage. Of the 1,547 students who opened the survey link, 1,365 had not used a symptom checker in the past year and were thus identified as “non-users”. After removing respondents with missing data (remaining sample: n = 1,305), LCA revealed five attitude-based profiles: tech acceptors, tech rejectors, skeptics, tech seekers, and unsure acceptors. Tech acceptors and tech rejectors were the most and least prevalent classes, respectively. Relative to tech rejectors, tech seekers and unsure acceptors were the latent classes with the highest and lowest odds of future symptom checker use, respectively. After controlling for confounders, the effect of latent classes on symptom checker use remained significant (p-value < .0001), with the odds of future use among tech acceptors being 5.6 times the odds among tech rejectors [CI: (3.458, 9.078); p-value < .0001]. Attitudes towards AI and symptom checker functionality result in distinct population profiles with different odds of using symptom checkers for self-triage. Identifying a person’s or group’s membership in a population profile could help in developing and delivering tailored interventions aimed at maximizing use of validated symptom checkers.

Introduction

Unnecessary care and delays in seeking care are two factors that contribute to higher system costs [1–3]. One way to reduce healthcare system costs is to provide patients with reliable tools that inform better decisions about when to seek care [1, 4]. Symptom checkers, especially those involving artificial intelligence (AI), have provided a means for users to self-triage (self-assess whether or not they should seek medical care) [5, 6]. Examples of these platforms include Babylon Health, the Ada health app, and the K Health app. Although there are hundreds of symptom checkers available for public use, the literature surrounding the use of this technology remains scarce [7, 8]. It is unclear, for example, whether population groups accept or use this technology, or which group profiles are more likely to accept it.

Research on individual acceptance and use of information technology is one of the most established streams of research in information systems [9]. Stemming from theories in the social-psychological and behavioural literature, mainly the Theory of Planned Behavior [10], the Technology Acceptance Model (TAM) outlines various factors to explain an individual’s decision to adopt and use a technology [11]. TAM states that behavioural intention, the most proximal antecedent to actual technology use, is influenced by an individual’s attitude, which, in turn, is influenced by two key constructs: perceived usefulness (PU) and perceived ease of use (PEOU) of the technology [11]. Over time, researchers have applied the TAM to identify factors associated with the use of various types of technologies, in different settings, while targeting diverse population groups. The growing body of knowledge in the field contributed to the development of a refined model, the Unified Theory of Acceptance and Use of Technology (UTAUT) [12].

Most studies applying the TAM and UTAUT frameworks, however, have examined the effect of individual factors on technology use, and none have focused on symptom checkers [8, 12]. For example, higher trust in technology has been shown to be associated with increased technology use, but it is unclear whether the co-occurrence of high trust with other attitude-based variables affects this association. As such, it is unclear how a group of variables co-exists and, in turn, explains acceptance and use of symptom checkers. To address this gap, Latent Class Analysis (LCA), a statistical and probabilistic method introduced in the 1950s [13], can be used to classify individuals from a heterogeneous group into smaller, more homogeneous unobserved subgroups [14]. Examples of LCA applications include identifying classes based on Internet searching behaviours among older adults [15], an attitude-based segmentation of mobile phone users [16], and identifying patterns of technology and interactive social media use among adolescents [17]. While there are various possible bases for segmentation analysis (ranging from demographic data to lifestyle-related bases), attitudes have been suggested as a useful basis because they capture a more affective dimension of consumers’ choices and have a better ability to describe behaviour [18, 19].

Little is known about the types of attitude-based population profiles that exist or how they are associated with the use of symptom checkers. Addressing this gap has key practical implications for health systems and population health interventions that seek to increase adoption and use of such platforms. The target population in this study was university students, who are typically young adults, a group known to be eager adopters of technology; as such, they are an ideal target for these digital platforms and may contribute to maximizing symptom checker use [20]. The objective of this study was to identify attitude-based latent classes (population profiles) and the association of each of these latent classes with the future use of symptom checkers for self-triage.

Materials and methods

We conducted a cross-sectional survey-based study targeting young adults (aged 18 to 34 years) enrolled at the University of Waterloo, a public research university with six faculties. Prior to participant recruitment, ethics clearance was granted by the Research Ethics Board at the University of Waterloo (#41366). Participant recruitment occurred through an email invitation sent by the University Registrar’s office and a link posted on the Graduate news webpage. In addition to being approved by the Research Ethics Board, the survey email invitation was also submitted to and approved by the Institute of Analysis and Planning. Consent was obtained from participants through the survey. The data collected cannot be shared for confidentiality reasons.

The survey used in this study (S1 Appendix) was developed and reviewed in collaboration with a Survey Research Center (SRC) at the affiliated university. The SRC comprises experts in survey design and methodology who support rigorous and specialized research. Survey development began in August 2020 and was finalized in December of the same year. Survey questions were informed by the literature and adapted for the target population and technology of interest (S2 Appendix). Moreover, to reduce respondent burden, not all factors included in the UTAUT were measured in the survey. A shortlist of factors was developed based on the UTAUT model and a ranking exercise conducted with 22 participants from the same target population (i.e., university students) as part of semi-structured interviews; this list is included in S3 Appendix, and findings from that work can be found elsewhere [21].

LCA was applied to the survey data to identify underlying latent variables based on observed categorical variables (i.e., trust, usefulness, credibility, demonstrability, output quality, perspectives about AI, ease of use, and accessibility). The selection of the best-fitting latent class model(s) for attitudes towards symptom checker functionality and AI in health was based on key fit statistics and interpretability. For models assessing the association between latent classes and future use, our generalized logit models considered the candidate latent class solutions, and the best regression model was chosen based on model fit statistics and interpretability.
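For context, the model being fit here is the standard latent class model: a minimal formulation (not an equation taken from the article itself), with eight binary items \(Y_1,\dots,Y_8\) and \(K\) latent classes, is

\[
P(Y_1 = y_1, \dots, Y_8 = y_8) \;=\; \sum_{c=1}^{K} \gamma_c \prod_{j=1}^{8} \prod_{k=1}^{2} \rho_{j,k \mid c}^{\,I(y_j = k)},
\]

where \(\gamma_c\) is the prevalence of latent class \(c\), \(\rho_{j,k \mid c}\) is the probability of giving response \(k\) to item \(j\) given membership in class \(c\), and \(I(\cdot)\) is the indicator function. The \(\rho\) parameters correspond to the item-response probabilities reported later for each class.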

Data set

A total of 35,643 undergraduate university students received an email invitation for the survey through the Registrar’s office. A total of 1,547 students accessed the survey, which was available online on January 11, 2021 and closed the following day. Respondents who clicked on the web survey link but did not complete the survey were classified as either screened out or dropouts. Respondents who were screened out were those not meeting the eligibility criterion of being between the ages of 18 and 34: 12 respondents indicated they were under 18 and 2 indicated they were over 34, and these were deemed ineligible and screened out of the survey. Dropouts were defined as respondents who clicked on the web survey link but did not complete the survey; there was a total of 558 dropouts, with just over half (57%) occurring at the introduction page and the rest occurring throughout the survey, most within the first several questions. Given that the outcome of interest is the future use of symptom checkers, 180 respondents who had used symptom checkers in the past 12 months (and were thus categorized as “users”) were excluded from the analysis. The remaining sample (n = 1,365), who had not used the platform, were identified as “non-users” and are the focus of this study.

Data analysis

All analyses were performed using SAS 9.4. Descriptive statistics and bivariate analyses were conducted to provide an overview of the sample. Items used to determine latent classes were coded as binary variables such that 1 denoted “no or neutral” and 2 denoted “yes”. PROC LCA was used to identify response patterns that define latent classes. To identify an optimal baseline model, the procedure was repeated for different numbers of latent classes [22]. Once latent class models were identified, relative model fit statistics were used to select the model that best describes the data. Selection of the best latent class model was based on goodness-of-fit measures such as the Bayesian Information Criterion (BIC) and entropy [23]; a low BIC value, a high entropy value, and interpretability of the classes informed our model selection [22]. Future use of symptom checkers was the outcome of interest; it has three categories with no natural order, so generalized logit models were used. The “neutral” category served as the referent, with the two other categories (i.e., “yes” and “no”) compared against it, as the interest was in the odds of using or not using symptom checkers in the future.
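As a concrete illustration of this step, the sketch below shows how a five-class model with the eight binary items could be specified in PROC LCA [22]. The dataset and item names are hypothetical placeholders, and the seed and number of random starts are illustrative choices rather than values taken from the authors’ code:

```sas
/* Hypothetical dataset NONUSERS containing the eight attitude items,   */
/* each coded 1 = "no or neutral", 2 = "yes", as described in the text. */
proc lca data=nonusers;
  nclass 5;                                            /* number of latent classes        */
  items trust useful cred demo outqual ai ease access; /* observed categorical indicators */
  categories 2 2 2 2 2 2 2 2;                          /* each item has two categories    */
  seed 20210111;                                       /* arbitrary seed for reproducibility */
  nstarts 20;                                          /* multiple random starts to avoid local maxima */
run;
```

The fit statistics reported by the procedure, together with entropy and the estimated class-membership and item-response probabilities, would then inform the model comparison described above.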

Results

Sample

Participants with missing data on key variables of interest were removed (n = 62). The resulting sample of non-users (n = 1,305) was fairly evenly split between men and women; most respondents were non-white, enrolled in an undergraduate program, and reported frequent access to the Internet. An overview of this sample in terms of demographics (gender, age, race), academic/professional environment (education level, faculty, employment status), self-perceived health, health literacy, healthcare access, healthcare use, healthcare use frequency, wait time, and healthcare need is shown in Table 1. The counts and percentages of the outcome variable and the items used to determine latent classes are presented in Table 2.

Table 2. Descriptive statistics on the intent to use symptom checkers.

https://doi.org/10.1371/journal.pone.0259547.t002

Latent classes

Eight items (i.e., trust, usefulness, credibility, demonstrability, output quality, perspectives about AI, ease of use, and accessibility) were used for latent class modelling; the numbers of latent classes considered were K = 2, 3, …, 7. Table 3 displays the fit statistics for the top three models, arising from K = 3, 4, and 5, selected on the basis of fit statistics and interpretability. These models had relatively lower BIC values and higher entropy, as shown in Table 3.
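One simple way the K = 2 through 7 fits could have been generated is with a macro loop over the NCLASS value, reading the BIC, adjusted BIC, and entropy from each run’s output; this is a hedged sketch using the same hypothetical item names as above, not the authors’ actual program:

```sas
%macro lca_by_k(data=, maxk=7);
  %do k = 2 %to &maxk;
    title "Latent class model with &k classes";
    proc lca data=&data;
      nclass &k;
      items trust useful cred demo outqual ai ease access;
      categories 2 2 2 2 2 2 2 2;
      seed 20210111;   /* same seed across runs for comparability */
      nstarts 20;
    run;
  %end;
%mend lca_by_k;

%lca_by_k(data=nonusers, maxk=7);
```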

Based on the fit statistics and interpretability, the five-class model was chosen. While the BIC and adjusted BIC were slightly higher for the five-class model than for the three- and four-class models, its entropy was higher than that of the four-class model. Importantly, the five-class model provides more detailed information about the classes that exist in the population, with tech seekers being an important class in line with findings from the qualitative phase of this work, which highlighted lack of perceived access to symptom checkers as a key barrier. An overview of the five classes is provided in Table 4.

Table 4. Five-latent-class model: Probability of positive perceptions for each subgroup.

https://doi.org/10.1371/journal.pone.0259547.t004

Similar to the three- and four-class models, the first profile describes a group with positive attitudes towards various aspects of symptom checkers and was thus labeled tech acceptors. The second group was the opposite, with a low probability of answering positively on any of the items assessed, and was labeled tech rejectors. The third group had a mixed response pattern showing some negative perceptions, particularly related to trust, demonstrability, and output quality; this group was labeled skeptics. The fourth subgroup (tech seekers) had positive perceptions related to all aspects of symptom checkers but did not find the platform to be accessible, whereas the fifth group (unsure acceptors) did not perceive access to be an issue but held some negative perceptions about AI and other aspects of symptom checkers.

In terms of prevalence, tech acceptors and tech rejectors made up the largest and smallest proportions across models, respectively. Skeptics were the second most prevalent group, with additional granularity provided in models with more classes.

Regression analysis

The GLM procedure in SAS was used to fit the generalized logit regression described above, with the five attitude-based latent profiles serving as the predictor variable. We additionally ran the models without confounders (i.e., gender, race, healthcare use, wait time, health literacy, and self-perceived health) to assess whether the relationship between the main predictor and the outcome changes; detailed outputs of these models are provided in S5 Appendix. As seen in Tables 5 and 6, the effect of latent classes on the future use of symptom checkers remained significant even after controlling for confounders, which highlights the strength of the association between latent classes and symptom checker use.
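For illustration, the sketch below shows one common way to fit a generalized (baseline-category) logit model of this form in SAS, using PROC LOGISTIC with LINK=GLOGIT, “neutral” as the outcome referent, and tech rejectors as the reference class. The dataset, variable names, and category labels are hypothetical placeholders and are not taken from the authors’ code, which the paper describes as using the GLM procedure:

```sas
/* FUTURE_USE has three levels: 'yes', 'no', 'neutral' (referent).          */
/* LATENT_CLASS is the modal class assignment from the five-class solution. */
proc logistic data=analysis;
  class latent_class (ref='tech rejectors')
        gender race healthcare_use wait_time health_literacy self_health
        / param=ref;
  model future_use (ref='neutral') = latent_class
        gender race healthcare_use wait_time health_literacy self_health
        / link=glogit;
run;
```

Dropping the confounder terms from the CLASS and MODEL statements would give the unadjusted model reported alongside the adjusted one.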

Table 5. Output for the five-class model without confounders.

https://doi.org/10.1371/journal.pone.0259547.t005

Table 6. Output for the five-class model with confounders.

https://doi.org/10.1371/journal.pone.0259547.t006

After controlling for confounders, the effect of latent classes on symptom checker use remained significant (p-value < .0001), with the odds of future use among tech acceptors being 5.6 times the odds among tech rejectors [CI: (3.458, 9.078); p-value < .0001]. The odds of future use among skeptics were 2.6 times those among tech rejectors [CI: (1.491, 4.586); p-value = .0008]. The odds of future use among tech seekers were 7.6 times those among tech rejectors [CI: (4.276, 13.752); p-value < .0001]. The odds of future use among unsure acceptors were 2 times those among tech rejectors [CI: (1.207, 3.584); p-value = .008]. In sum, latent class membership is a significant predictor of future symptom checker use, with tech seekers and unsure acceptors having the highest and lowest odds of future use, respectively.
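To connect these figures to the fitted model, each odds ratio is the exponentiated generalized logit coefficient, and its Wald confidence interval comes from exponentiating the coefficient plus or minus 1.96 standard errors. Back-calculating approximately from the reported tech acceptor estimate (so that \(\hat\beta \approx \ln 5.6 \approx 1.72\) with \(\widehat{SE} \approx 0.25\)):

\[
\widehat{OR} = e^{\hat\beta} \approx e^{1.72} \approx 5.6,
\qquad
95\%\ \mathrm{CI} = e^{\hat\beta \pm 1.96\,\widehat{SE}} \approx e^{1.72 \pm 1.96 \times 0.25} \approx (3.4,\ 9.1),
\]

which is consistent with the reported interval of (3.458, 9.078).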

Discussion

To our knowledge, our study is the first to merge the TAM and LCA literatures to identify profiles among university students and to regress future symptom checker use on these profiles. Interestingly, while young adults are perceived to be technology savvy, most of the participants recruited had not used a symptom checker in the past year; this may be due to a lack of awareness regarding the existence of these platforms [21]. Most had positive perspectives regarding the use of AI in health and symptom checkers’ functionality; however, some skepticism and issues related to perceived accessibility and functionality may hinder the future adoption and use of symptom checkers. Five distinct latent classes were identified: tech acceptors, tech rejectors, skeptics, unsure acceptors, and tech seekers. It is noteworthy that the effect of latent classes remained significant even after controlling for confounders; this is not always the case since, from a statistical perspective, the effect of a variable can lose its significance when other variables are controlled for [24].

Previous studies have applied the TAM to identify factors associated with the adoption and use of health apps and health technologies; for example, one study found that adolescents considered wearable activity trackers useful, but that the effort required to use these technologies may influence overall engagement and technology acceptance [25]. In our study, perceived ease of use also played a role in defining latent classes and, in turn, in their association with future use of symptom checkers. For example, tech rejectors and unsure acceptors did not perceive symptom checkers as easy to use, which was reflected in their lower odds of using symptom checkers in the future. While age was not explored in our study due to the young age of our sample, another study found that younger populations displayed more confidence with the use of mHealth apps and were less concerned about compromising the confidentiality of their health records [26]. Answers to TAM-related questions among mHealth app users were significantly more positive than among non-users [26]. Interestingly, as found in our study, the endorsement of health apps by health organizations can play an influential role in technology acceptance and utilization, as well as support efforts in shaping regulation [26, 27].

Tech seekers and unsure acceptors had the highest and lowest odds of future symptom checker use, respectively. Interestingly, tech seekers (those with positive perspectives on symptom checker functionality and AI who do not perceive themselves as having access to the technology) had the highest odds of future symptom checker use, even higher than tech acceptors (those with positive perspectives on all aspects who do perceive themselves as having access). This nuance was captured by the five-class model but lost when the same objective was approached with three or four latent classes. These classes could serve as a starting point for similar studies targeting other population groups.

This study has several strengths relating to the technology studied, the choice of target population, the theoretical framework and methodological approach used, the tools developed, and the practical implications for key stakeholders in the public health arena. First, the development and use of an interview protocol and survey will enable other researchers in the field to adapt and use these tools. This study also contributes to the literature on an understudied technology that has real potential to address key healthcare challenges. Symptom checkers, along with other digital platforms that allow for self-care, were named among the top 10 emerging technologies of 2020 [28], and their importance has been accentuated during the COVID-19 pandemic [29]. Our study identified five latent classes that may need to be targeted differently to promote the use of promising symptom checkers.

Some limitations warrant mention. First, findings stem from a bounded case characterized by a sample that is highly educated and perceives itself to be in good health, which limits the transferability of findings to populations with a wider range of ages, education levels, self-perceived health, and health literacy. As such, additional studies targeting other population groups are needed. Moreover, selection bias may be present, as those included in the study may differ from those who did not opt to participate; however, findings from this work could help reduce selection bias in future studies by providing an overview of the profiles that may exist and thus should be represented in the sample. While the study targeted adults between the ages of 18 and 34, most participants were between 18 and 24, suggesting that the latent classes identified may differ in a sample comprising individuals in the higher age range. This study focused specifically on non-users, with the intention to use a symptom checker as the outcome of interest; while data on “users” were collected, the sample size was too small, and a larger sample will be required to avoid under-extraction of classes. Survey questions were not assessed for reliability and validity; however, they were developed based on published studies and adapted for the target population and technology. Moreover, the survey was developed with assistance from the Survey Research Center; as such, best available practices were applied in survey design, administration, collection, and curation.

Conclusion

Symptom checkers may not be widely known by the population, even among those considered to be eager adopters of technology. Within the university student population, distinct profiles characterized by attitudes toward symptom checkers and AI exist. Perceived ease of use and accessibility are key factors that explain some of the nuances across the identified profiles. To maximize the use of validated symptom checkers and thereby reduce unnecessary healthcare visits, targeted interventions could be developed and delivered depending on an individual’s or group’s membership in a certain profile. Future research is warranted to assess whether similar profiles exist among other population groups, as well as which interventions (at both the health system and population health levels) would be best suited given existing attitude-based variables.

Supporting information

S2 Appendix. Construct definitions and source of survey questions.

https://doi.org/10.1371/journal.pone.0259547.s002

(DOCX)

S3 Appendix. Number of participants choosing factors that are important for using a symptom checker for self-triage.

https://doi.org/10.1371/journal.pone.0259547.s003

(DOCX)

S4 Appendix. Interpretation of the three- and four-latent class models.

https://doi.org/10.1371/journal.pone.0259547.s004

(DOCX)

Acknowledgments

The authors would like to thank the university students who agreed to participate in the study as well as the administration staff at the University of Waterloo for aiding with participant recruitment.

References

  1. Canadian Institute for Health Information. 2017. Unnecessary Care in Canada. cihi.ca. Accessed June 28, 2021.
  2. Institute of Medicine. Committee on the Learning Health Care System in America. In: Smith M, Saunders R, Stuckhardt L, McGinnis JM, eds. Best Care at Lower Cost: The Path to Continuously Learning Health Care in America. Washington, DC: National Academies Press (US), 2013.
  3. Statistics Canada. 2016. Health at a glance: Difficulty accessing health care services in Canada. https://www150.statcan.gc.ca. Accessed June 30, 2021.
  4. Choosing Wisely Canada. More is not always better backgrounder. https://choosingwiselycanada.org. Accessed July 16, 2021.
  5. Hill MG, Sim M, Mills B. The quality of diagnosis and triage advice provided by free online symptom checkers and apps in Australia. Med J Aust 2020; 212(11): 514–519. pmid:32391611
  6. Semigran HL, Linder JA, Gidengil C, et al. Evaluation of symptom checkers for self diagnosis and triage: audit study. British Medical Journal 2015; 351: h3480. pmid:26157077
  7. Aboueid S, Liu RH, Desta BN, et al. The Use of Artificially Intelligent Self-Diagnosing Digital Platforms by the General Public: Scoping Review. JMIR Med Inf 2019; 7(2): e13445. pmid:31042151
  8. Tsai C-H, You Y, Gui X, et al. Exploring and promoting diagnostic transparency and explainability in online symptom checkers. CHI Conference on Human Factors in Computing Systems 2021; 152: 1–17.
  9. Venkatesh V, Davis FD, Morris MG. Dead or Alive? The Development, Trajectory and Future of Technology Adoption Research. JAIS 2007; 8(4): 268–286.
  10. Ajzen I. Attitudes, Personality, and Behavior. Chicago, IL: Dorsey, 1988.
  11. Davis FD. Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Quarterly 1989; 13(3): 319–340.
  12. Venkatesh V, Thong JY, Xu X. Unified Theory of Acceptance and Use of Technology: A Synthesis and the Road Ahead. JAIS 2016; 17(5).
  13. Lazarsfeld PF, Henry NW. Latent Structure Analysis. Boston: Houghton Mifflin, 1968.
  14. Vermunt JK, Magidson J. Latent class models for classification. Comput Stat Data Anal 2003; 41(3–4): 531–537.
  15. van Boekel LC, Peek ST, Luijkx KG. Diversity in Older Adults’ Use of the Internet: Identifying Subgroups Through Latent Class Analysis. JMIR 2017; 19(5): e180. pmid:28539302
  16. Sell A, Mezei J, Walden P. An attitude-based latent class segmentation analysis of mobile phone users. Telemat Inform 2014; 31: 209–219.
  17. Tang S, Patrick ME. A latent class analysis of adolescents’ technology and interactive social media use: Associations with academics and substance use. Hum Behav Emerg Technol 2019; 2(1): 50–60. https://doi.org/10.1002/hbe2.154
  18. Wedel M, Kamakura W. Profiling Segments. In: Market Segmentation. 2000; 145–158.
  19. Olsen SO, Prebensen NK, Larsen TA. Including ambivalence as a basis for benefit segmentation: a study of convenience food in Norway. Eur J Mark 2009; 43(5/6): 762–783.
  20. Canadian Medical Association. 2018. Shaping the Future of Health and Medicine. https://www.cma.ca. Accessed July 16, 2021.
  21. Aboueid S, Meyer S, Wallace JR, Mahajan S, Chaurasia A. Young Adults’ Perspectives on the Use of Symptom Checkers for Self-Triage and Self-Diagnosis: Qualitative Study. JMIR Public Health Surveill 2021; 7(1): e22637. pmid:33404515
  22. Lanza ST, Collins LM, Lemmon DR, et al. PROC LCA: a SAS procedure for latent class analysis. Struct Equ Modeling 2007; 14: 671–694. pmid:19953201
  23. Allison KR, Adlaf EM, Irving HM, et al. The search for healthy schools: a multilevel latent class analysis of schools and their students. Prev Med Rep 2016. pmid:27462531
  24. Simons-Morton B, Haynie D, Liu D, et al. The Effect of Residence, School Status, Work Status, and Social Influence on the Prevalence of Alcohol Use Among Emerging Adults. JSAD 2016; 77(1): 121–132.
  25. Drehlich M, Naraine M, Rowe K, et al. Using the technology acceptance model to explore adolescents’ perspectives on combining technologies for physical activity promotion within an intervention: usability study. JMIR 2020; 22(3): e15552. pmid:32141834
  26. Shemesh T, Barnoy S. Assessment of the Intention to Use Mobile Health Applications Using a Technology Acceptance Model in an Israeli Adult Population. Telemed J E Health 2020; 26(9): 1141–1149. pmid:31930955
  27. Ceney A, Tolond S, Glowinski A, et al. Accuracy of online symptom checkers and the potential impact on service utilisation. PLoS ONE 2021; 16(7): e0254088. pmid:34265845
  28. World Economic Forum. 2020. Top 10 Emerging Technologies of 2020. http://www3.weforum.org. Accessed July 16, 2021.
  29. Aboueid S, Meyer SB, Wallace JR, et al. Use of symptom checkers for COVID-19-related symptoms among university students: a qualitative study. BMJ Innov 2021; 7: 253–260. pmid:34192014