Protocol
Abstract
Background: Psychological distress, particularly symptoms of depression and anxiety (D&A), is highly prevalent among family caregivers of individuals living with cancer, who often assume central roles in care coordination, treatment adherence, symptom monitoring, and emotional support. Rates of distress among caregivers frequently equal or exceed those observed in patients themselves. Despite increased attention to caregivers’ mental health needs, routine distress screening remains limited in oncology care settings. Advances in mobile health technology and artificial intelligence (AI) offer opportunities to address these needs by providing accessible and user-driven tools. The Ellipsis Caregiver Assessment Enhancement (eCARE; Ellipsis Health, Inc) is a speech-based, AI-enabled mobile app designed to screen and monitor symptoms of depression and anxiety. By collecting brief voice recordings and in-app survey data, eCARE offers a scalable approach for integrating caregiver distress monitoring into cancer care.
Objective: This single-arm trial will evaluate the feasibility and acceptability of the eCARE app among family members who are the primary caregivers of patients diagnosed with cancer within the past 5 years. Specifically, the study aims to (1) determine feasibility based on platform completion rates, (2) assess acceptability using validated measures, and (3) identify barriers and facilitators influencing the uptake and sustained use of eCARE.
Methods: In Phase 1, a total of 60 United States–based family caregivers will be recruited from community health clinics, cancer and caregiving advocacy groups, and online postings. Screened and enrolled caregivers will complete 6 eCARE sessions over an 8-week period. Pre- and posttest surveys will assess depression, anxiety, caregiving burden, and relational processes. Feasibility will be evaluated based on the proportion of participants who complete at least 66% of weekly assessments, and acceptability will be assessed using the Acceptability of Intervention Measure (AIM). In Phase 2, a total of 20 caregivers will be invited to participate in semi-structured online interviews to explore user experience, including perceived benefits, barriers to use, and preferences for future implementation. Qualitative data will be analyzed thematically to inform tool refinement.
Results: The study has received Institutional Review Board approval from the University of Houston. Participant recruitment and enrollment began in June 2024, with data collection expected to conclude by August 2025. Data analysis will begin in December 2025, with preliminary results anticipated by May 2026.
Conclusions: This study will generate preliminary evidence on the feasibility, acceptability, and utility of a speech-based, AI-enabled smartphone tool for monitoring D&A symptoms among family cancer caregivers. Findings will inform the design of a larger, fully powered trial and guide future implementation of remote psychological distress monitoring strategies in oncology care. By offering a low-burden, caregiver-centered approach, eCARE has the potential to expand access to psychosocial support and facilitate timely identification of needs and coordination of services across cancer care settings.
International Registered Report Identifier (IRRID): DERR1-10.2196/83276
doi:10.2196/83276
Introduction
Background
A cancer diagnosis is often associated with psychological challenges affecting patients and the broader support network, including spouses, family members, and friends who assume the role of caregivers [-]. Caregiving entails practical and supportive tasks across the continuum of care: coordinating medical visits, building relationships with clinicians, communicating with the medical team, and contributing to decision-making, adherence to treatment plans, and symptom management [,]. These tasks carry physical, psychosocial, emotional, and financial consequences, rendering cancer caregivers highly vulnerable to stress and mental health problems [,,]. Across studies, family caregivers of patients with cancer often experience greater psychological distress than patients [,,-], and cancer caregiving is rated as more burdensome than other high-intensity caregiving scenarios []. Yet, caregivers’ mental health needs remain insufficiently addressed in routine cancer care, underscoring the need for targeted psychosocial interventions [,].
Caregiver psychological distress is common throughout the illness trajectory, with patterns that differ across diagnosis, active treatment, recurrence, and advanced disease. Between 55% and 95% of caregivers [-] report clinically significant distress or mental health conditions. Compared with the general US population, family caregivers of patients with advanced cancer frequently score below 30% on mental health measures []. Inadequate preparation for the caregiving role compounds distress, contributing to feelings of helplessness and regret as caregivers witness symptom burden [,]. Specific conditions are highly prevalent. Depression prevalence ranges from 16% to 67% and tends to increase as prognosis worsens [,-]. Anxiety is even more common; a global meta-analysis estimates 46.5% of family caregivers experience clinically significant anxiety, with rates in advanced disease (~40%-42%) exceeding those observed in patients [,,,]. In some cancer types (eg, head and neck), up to 57% of caregivers report clinically significant anxiety and/or depression [].
Multiple factors contribute to this heightened vulnerability. Caregiver risk factors include being female, younger age, financial strain, limited social support, high burden, low caregiving self-efficacy, and prolonged caregiving hours [,]. Family caregivers are more likely to experience psychological distress when the patient is younger, when the illness progresses and worsens, when the patient has limited awareness of their prognosis, or when the patient’s overall health and daily functioning decline [,,]. Relational and contextual risk factors include being a spousal caregiver, insecure attachment (eg, fear of abandonment or avoidance), and family conflicts, which amplify distress and cause psychological strain among family cancer caregivers [,,].
Despite this evidence, caregiver support remains underresourced and inadequate. Family caregivers frequently report unmet informational and psychological needs [,,,-]. Traditional in-person mental health services are often inaccessible due to cost, time, and logistical barriers, limiting opportunities to address caregivers’ needs []. As a result, family caregivers experience sleep disruption, diminished quality of life, and reduced caregiving capacity, ultimately affecting patient care [,,].
Digital mental health offers a scalable path to support cancer caregivers. Smartphone- and web-based interventions can extend reach, reduce access barriers, and deliver timely, evidence-based care. However, rigorous evaluation in oncology caregiving remains limited. To date, eHealth interventions for family caregivers of individuals living with cancer have been found to reduce depression and modestly improve quality of life versus usual care, supporting the feasibility and potential value of remote delivery for caregiver mental health. Still, effects were small and trials were few, with short follow-up and heterogeneous content, underscoring the need for larger, higher-quality randomized controlled trials [,,-]. Building on this foundation, AI-enabled depression and anxiety (D&A) screening and monitoring apps have strong potential for flexible, real-time support [,,]. Evidence shows that technology-based approaches can (1) improve health and psychological outcomes, (2) expand access to care, (3) be cost-effective, and (4) be delivered in self-paced, tailored formats [,-]. Yet, family members remain excluded from national distress screening mandates, leaving their needs inconsistently identified and addressed in oncology care settings [-]. This evidence emphasizes an urgent need to design, test, and implement family caregiver-specific digital psychosocial interventions that can be integrated into routine cancer care.
Advancements and Gaps in Speech-Based Mental Health Assessment
Technological advancements in automated and AI-enabled speech analysis have significantly improved the ability to detect psychological symptoms [-]. Tools using voice data, particularly acoustic features, now offer a promising complement to traditional validated screening instruments. These tools have shown diagnostic performance comparable to widely used psychometric scales [,] with added advantages including reduced user burden, minimal risk of human error (eg, missing data), and the potential for greater user comfort during verbal self-expression [-].
Nevertheless, several limitations persist. Most tools continue to rely on either acoustic (ie, how something is said) or semantic content (ie, what is said), without combining both for improved precision [,-]. Additionally, many systems have focused solely on depression, with a smaller fraction targeting anxiety and even fewer addressing both concurrently [-]. This gap is particularly concerning for family caregivers of patients with cancer, a population in which anxiety symptoms are both prevalent and frequently co-occur with depressive symptomatology.
This study addresses these gaps by evaluating Ellipsis Caregiver Assessment Enhancement (eCARE; Ellipsis Health, Inc), a next-generation speech-based assessment platform designed to detect both D&A symptom severity using an integrated AI model. Unlike most existing systems, eCARE combines semantic and acoustic data inputs to provide dual assessments, enhancing precision. Additionally, it offers a clinician-facing dashboard to support referrals and care coordination, promoting a person-centered model of supportive care delivery. The Ellipsis Health platform has been technically validated in multiple peer-reviewed publications demonstrating its strong machine learning performance [-].
eCARE App
eCARE is a speech-based smartphone app linked to Ellipsis Health, which is a secure, cloud-based AI infrastructure. The app processes a weekly 90-second audio recording from users who respond to prompted topics, including those related to their caregiving stress and emotional well-being. The system analyzes both the acoustic and semantic characteristics of speech using advanced deep learning techniques, including transfer learning. This approach allows for more sophisticated and accurate identification of users’ speech signals than traditional models that rely on fixed acoustic features such as pitch or volume [,,].
The platform has demonstrated strong performance metrics, with reported area under the curve values of 0.85 for Patient Health Questionnaire-9 item (PHQ-9) and 0.84 for Generalized Anxiety Disorder-7 item (GAD-7) predictions (binary classification with a cutoff score of 10), root-mean-square error values of 4.25 and 4.47, and mean absolute error values of 3.13 and 3.23, respectively []. To date, eCARE remains the only available speech-based screening platform capable of producing concurrent D&A scores for family caregivers of patients with cancer. This dual capability is a critical feature for reducing participant fatigue during long-term monitoring and managing the acute mental health needs of this group.
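For readers less familiar with these metrics, the sketch below illustrates how area under the curve, root-mean-square error, and mean absolute error of the kind reported above are typically computed when model-predicted severity scores are compared against questionnaire totals, using a binary cutoff of 10. The data are synthetic and the variable names are illustrative; this is not the Ellipsis Health model, data, or code.

```python
# Illustrative only: computing AUC, RMSE, and MAE for predicted symptom scores
# against observed questionnaire totals. Synthetic data; not Ellipsis Health code.
import numpy as np
from sklearn.metrics import roc_auc_score, mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)
true_phq9 = rng.integers(0, 28, size=200)                       # observed PHQ-9 totals (0-27)
pred_phq9 = np.clip(true_phq9 + rng.normal(0, 4, 200), 0, 27)   # hypothetical model predictions

# Binary classification at the clinical cutoff of 10 (as described in the text)
auc = roc_auc_score(true_phq9 >= 10, pred_phq9)

# Continuous error metrics on the predicted severity scores
rmse = mean_squared_error(true_phq9, pred_phq9) ** 0.5
mae = mean_absolute_error(true_phq9, pred_phq9)

print(f"AUC={auc:.2f}, RMSE={rmse:.2f}, MAE={mae:.2f}")
```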
The eCARE app is available for download on the Apple or the Google Play Store and can be used on iOS and Android mobile devices. After downloading the app on their smartphones, users are able to register in the app with the phone number they provided to the research team. Once a week, users will be prompted to log in to the app and record a 90-second response to a selected topic related to their mental health or caregiving responsibilities. Afterward, users will receive nonclinical feedback on their distress levels. The app also includes longitudinal tracking of responses and mental health crisis hotline information. On the provider side, eCARE features a HIPAA (Health Insurance Portability and Accountability Act)-compliant web portal that visualizes severity of distress trajectories over time, supporting timely mental health referrals and opportunities for direct integration into the cancer care continuum.
Specific Aims
The present study will (1) expand understanding of the psychosocial issues faced by family caregivers of individuals diagnosed with cancer and (2) generate actionable evidence to refine eCARE for responsive delivery in a way that is receptive to preferences for care, format, and optimal timing of screening. Specific Aim 1: establish the feasibility and acceptability of eCARE among family caregivers of patients with cancer in a single-arm prospective cohort study, using platform completion rate (feasibility) and the Acceptability of Intervention Measure (AIM; acceptability) as indicators. Specific Aim 2: qualitatively evaluate facilitators and barriers to eCARE uptake among a subset of participants (n=20) to inform iterative refinement. Expected outcomes include evidence that a caregiver-driven digital approach can increase the proportion of family caregivers who are screened and monitored, as well as concrete recommendations to optimize eCARE for future efficacy testing and implementation in survivorship care.
Methods
Study Design
This single-arm, prospective cohort study will use a mixed methods design to evaluate the feasibility and acceptability of eCARE, a digital AI-enabled tool designed to monitor psychological distress among family caregivers of individuals living with cancer. The study will:
- Track psychological distress, specifically D&A over an 8-week period among a cohort of 60 family cancer caregivers;
- Assess adherence to and perceived acceptability of the eCARE tool;
- Conduct semi-structured qualitative interviews with a subsample of participants (n=20) to explore experiences with eCARE use.
Participants will use eCARE independently, following structured onboarding and usage instructions provided by the research team at baseline. Psychological distress will be monitored using standardized questionnaires and voice-based data collection, with app-based metrics and quantitative survey data supplemented by qualitative feedback (see the table of measures below).
| Measures | T0/Baseline | T1-T6 | T7 | Administration |
|---|---|---|---|---|
| Depression (PHQ-9a) [] | ✓ | ✓ | ✓ | Qualtrics/platform datab |
| Anxiety (GAD-7c) [] | ✓ | ✓ | ✓ | Qualtrics/platform data |
| Quality of Life (CDC 4 itemsd) [] | ✓ |  | ✓ | Qualtrics |
| Caregiving Burden (Short Form Zarit Burden Interview) [] | ✓ |  | ✓ | Qualtrics |
| Closeness (Inclusion of the Other in the Self Scale) [,] | ✓ |  | ✓ | Qualtrics |
| Communication (Social Constraints Scale) [,] | ✓ |  | ✓ | Qualtrics |
| Responsiveness (Perceived Partner Responsiveness Scale) [,] | ✓ |  | ✓ | Qualtrics |
| Communal Motivation to Care (Partner-Specific Communal Motivation Scale) [,] | ✓ |  | ✓ | Qualtrics |
| eCAREe App Use |  | ✓ |  | Platform data |
| Acceptability of Intervention Measure (AIM) [] |  |  | ✓ | Qualtrics |
| User Engagement Scale–Short Form [] |  |  | ✓ | Qualtrics |
| Qualitative interviews |  |  | ✓ | RSf via Zoom |
aPHQ-9: Patient Health Questionnaire-9 item.
bPlatform data: eCARE data collection.
cGAD-7: Generalized Anxiety Disorder-7 item.
dCDC 4 items: Centers for Disease Control and Prevention 4-item Healthy Days Measure.
eeCARE: Ellipsis Caregiver Assessment Enhancement.
fRS: research staff.
Participant Eligibility
Participants will be eligible to enroll if they (1) self-identify as the primary caregiver or support person of an individual diagnosed with cancer within the past 5 years, (2) are 18 years of age or older, (3) have access to a smartphone capable of downloading and using the eCARE app, (4) are fluent in English, and (5) are able and willing to provide informed consent. To ensure adequate representation of caregiving experiences, a stratified sampling approach will be used to recruit equal numbers of caregivers who identify as non-Hispanic White (n=15), Black/African American (n=15), Hispanic and Latino (n=15), and Asian American, Native Hawaiian, and Pacific Islander (AANHPI; n=15). Participants will be excluded from the study if they (1) have a cognitive impairment; (2) have a speech impairment; or (3) have a severe mental illness that would impede the ability to provide informed consent or complete study activities. Eligibility will be self-reported through an initial study information survey. Participants will review a list of exclusionary criteria and be asked to self-select out of the study if any listed conditions apply. Eligibility will be verified as necessary via follow-up communication (eg, telephone screening). To minimize fraudulent responses, we will apply minimum time-on-task thresholds, checks for duplicate IPs or devices, and manual pattern review.
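As a minimal sketch of the fraud-mitigation checks described above, the following Python snippet flags screening responses that fall below a time-on-task threshold or share an IP address or device identifier; the column names, the 120-second threshold, and the toy data are assumptions for illustration, not the study's actual screening pipeline.

```python
# Sketch of the fraud checks described in the text: time-on-task threshold,
# duplicate IP/device detection, and routing to manual pattern review.
# Column names, threshold, and data are hypothetical.
import pandas as pd

screens = pd.DataFrame({
    "response_id": [1, 2, 3, 4],
    "ip_address": ["10.0.0.1", "10.0.0.2", "10.0.0.2", "10.0.0.3"],
    "device_id": ["a1", "b2", "b2", "c3"],
    "seconds_on_task": [340, 45, 310, 295],
})

MIN_SECONDS = 120  # assumed minimum plausible completion time

screens["flag_too_fast"] = screens["seconds_on_task"] < MIN_SECONDS
screens["flag_duplicate"] = (
    screens.duplicated("ip_address", keep=False)
    | screens.duplicated("device_id", keep=False)
)

# Flagged rows are routed to manual review rather than excluded automatically
to_review = screens[screens["flag_too_fast"] | screens["flag_duplicate"]]
print(to_review[["response_id", "flag_too_fast", "flag_duplicate"]])
```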
Participant Recruitment and Study Procedures
Recruitment Procedures
Recruitment efforts will be conducted in collaboration with the Kelsey Research Foundation and other community partners. A recruitment email will be distributed directly to potential participants via the Kelsey Research Foundation listserv, support group lists, and through oncology social workers. Potential participants may self-refer to the study in one of the following ways: (1) by accessing the study flyer’s URL or scanning the QR code, which will link to an online eligibility screening survey containing study information; (2) by contacting study staff, who will provide additional information and instructions to complete the screening survey; and (3) by visiting the study landing page on the Kelsey Research Foundation website and completing a brief informational survey. With appropriate approval, flyers will also be distributed electronically through community partners and legitimate online platforms. Investigators will engage in additional recruitment at educational and support-focused events for family caregivers, including caregiving walks, conferences, and community events. This may include staffing a vendor table or collaborating with event organizers to distribute study recruitment materials through event newsletters, programs, and digital channels. AANHPI cancer caregivers will also be recruited by leveraging the Collaborative Approach for AANHPI Research and Education (CARE) Registry. The CARE Registry maintains a HIPAA-compliant, institutional review board (IRB)–approved database of AANHPI individuals residing in the United States and US-Affiliated Pacific Islands who have expressed willingness to participate in health-related research. As an established research platform dedicated to reducing disparities in health care studies, CARE provides access to a diverse pool of potential participants. Partnering with this platform allows for a more efficient and targeted recruitment process, ensuring that the study reaches an underrepresented population while maintaining ethical and regulatory compliance. The contact information of the principal investigator, as well as the IRB approval statement and the contact information for the IRB of record, will be included in the informed consent materials of all the electronic surveys and qualitative interviews. Recruitment will span approximately 12-16 months to maximize sample size and ensure sufficient diversity in participant representation.
Enrollment and Participation
Interested individuals who meet initial eligibility criteria will be contacted by study staff to schedule an initial phone screening, after which informed consent will be obtained and baseline measures collected through a pretest electronic survey. Participants will be asked to use the eCARE app weekly for 6 sessions (T1-T6) over an 8-week period. Weekly phone-based reminders will prompt engagement with the platform. If an assessment is missed, the app will automatically issue 2 follow-up notifications within 24-48 hours. Platform usage data will be collected to assess adherence (eg, frequency and duration of app use). A brief posttest assessment (T7) will be conducted approximately 8 weeks after baseline; the survey will include the same instruments presented during the pretest assessment plus the acceptability (AIM []) and user engagement (User Engagement Scale–Short Form []) questionnaires.
Qualitative Component
A purposive subsample of participants (n=20; 5 per racial and ethnic group) will be invited to complete a semi-structured qualitative interview exploring facilitators and barriers to eCARE uptake, as well as recommendations for improvement and refinement. Interest in participating in this component of the project will be assessed during the phone screening and pretest survey outlined in Phase 1. Eligible family caregivers will be contacted at the end of the intervention by email and those who agree will be invited to complete a 60- to 90-minute Zoom (Zoom Video Communications, Inc) interview with a member of the research team. Serial recruitment will continue until thematic saturation is achieved, defined as no new codes or themes emerging in at least 2 consecutive interviews. This approach follows empirical and methodological recommendations, as many core themes in relatively homogenous qualitative samples are typically identified within the first 12 interviews [-] and sample sizes of 12-20 are generally sufficient for achieving saturation in focused qualitative studies [-]. By exceeding the standard threshold for saturation, the study’s target sample size allows for a more comprehensive exploration of a diverse group of users’ perspectives. Details regarding assessment points and measures are summarized in the measures table above.
eCARE Intervention
eCARE is an evidence-based AI-enabled D&A screening and monitoring tool that uses speech data, analyzing 90 seconds of participant “conversation” on selected topics. After logging in and tapping “start,” participants can “talk to” the app about their mental health by selecting among 3 different prompts. After each recording, users will see a numeric score and a visual interpretation of their results. They will then be taken to the next page, where the PHQ-9 and GAD-7 will be administered. Caregivers are also presented with mental health and suicidality resources on the last screen of the session.
Participant Safety
An individual safety plan will be implemented for family caregivers of individuals diagnosed with cancer endorsing the suicide item of the PHQ-9 and for those who score above the cutoff scores for anxiety and depression on the selected instruments. In case of elevated distress registered during the use of the eCARE app, as evidenced by scores ≥10 on the GAD-7 and the PHQ-9 instruments, participants will have access to mental health resources both in the app and via separate email within 24 hours. For those endorsing the suicide risk item, the eCARE app provides immediate information about the 988 Suicide and Crisis Lifeline, with the contact number for the helpline and chat option visible at the end of the session. The 988 Suicide & Crisis Lifeline offers 24/7 call, text, and chat access to trained crisis counselors who can help people experiencing suicidal ideation, substance use or mental health crises, or any other kind of emotional distress.
In case of high distress, participants will receive an email about mental health resources. For participants who endorse the suicidality item, the email will contain ad hoc resources such as the 988 Suicide & Crisis Lifeline, 211, the National Alliance on Mental Illness (NAMI) phone and chat options, as well as 911. These communications will be accompanied by an in-app message alerting the participant to the elevated score and noting that resources have been shared. Participants can also contact the investigators or study team members to request referral to these community-based organizations. The online eCARE platform will be monitored weekly; however, the system alerts the investigative team in real time for each missed assessment or for assessments in which the suicide risk item is endorsed. Importantly, the eCARE application does not interface with other smartphone applications or services (eg, Apple Health or Android Health) and does not collect or transmit protected health information beyond what is consented to within the study protocol.
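To make the triggering rules above concrete, the sketch below encodes them as a simple triage function: endorsement of the PHQ-9 suicide item adds crisis resources and a real-time study-team alert, and an elevated symptom score triggers the 24-hour resource email and an in-app notice. This is an illustrative sketch, not the actual eCARE implementation; it assumes, for simplicity, that an elevated score on either instrument triggers the resource email.

```python
# Illustrative triage logic for the safety rules described above.
# Not eCARE code; thresholds and action labels are placeholders, and the
# "either instrument" trigger for the resource email is an assumption.
from dataclasses import dataclass

@dataclass
class SessionResult:
    phq9_total: int
    gad7_total: int
    phq9_item9: int  # 0-3; any value > 0 counts as endorsement of the suicide item

def triage(result: SessionResult) -> list[str]:
    actions = []
    if result.phq9_item9 > 0:
        actions += ["show_988_lifeline_in_app",
                    "alert_study_team_realtime",
                    "email_crisis_resources_within_24h"]
    if result.phq9_total >= 10 or result.gad7_total >= 10:
        actions += ["email_mental_health_resources_within_24h",
                    "in_app_elevated_score_notice"]
    return actions

print(triage(SessionResult(phq9_total=12, gad7_total=7, phq9_item9=1)))
```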
Study End Points
The primary feasibility end point will be determined by the number of times each participant completes the eCARE assessment out of a total of 6 possible time points over an 8-week period. Participants who complete the assessment at least 4 times (≥66.6% completion rate) will be considered to have met the feasibility threshold. The feasibility study will be deemed successful if at least 60% of enrolled participants (n≥36) reach this threshold. The secondary acceptability end point will be assessed using the AIM, a psychometrically validated implementation science instrument designed to evaluate participants’ perceptions of intervention acceptability. An average AIM score of at least 4 out of 5 will be interpreted as indicative of acceptable intervention uptake.
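As a concrete illustration of these decision rules, the short sketch below applies them to simulated data: a participant meets the feasibility threshold with at least 4 of 6 completed assessments, the study meets its feasibility criterion if at least 60% of the 60 enrolled participants qualify, and acceptability requires a mean AIM score of at least 4.0 on the 5-point scale. The data and variable names are hypothetical.

```python
# Sketch of the study end point decision rules using simulated data.
import numpy as np

rng = np.random.default_rng(1)
completions = rng.integers(0, 7, size=60)    # completed eCARE sessions per participant (0-6)
aim_scores = rng.uniform(3.0, 5.0, size=60)  # hypothetical per-participant mean AIM scores

met_threshold = completions >= 4                  # per-participant feasibility threshold
feasible = met_threshold.mean() >= 0.60           # study-level criterion: at least 36 of 60
acceptable = aim_scores.mean() >= 4.0             # acceptability criterion on the AIM

print(f"{met_threshold.sum()}/60 participants met the completion threshold")
print(f"Feasibility criterion met: {feasible}; acceptability criterion met: {acceptable}")
```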
Measures
As part of this study, we are collecting a comprehensive set of variables to capture the multifaceted experiences of caregiving.
Sociodemographic Characteristics
Sociodemographic characteristics include participants’ age, sex, gender identity, race and ethnicity, educational attainment, relationship status, employment status, insurance coverage, and household income.
Clinical Characteristics (Care Recipient)
Clinical characteristics of the care recipient—such as their age, sex, cancer diagnosis, cancer stage, and type of treatment—are recorded to contextualize the caregiving experience.
Caregiving Characteristics
To further understand the caregiving context, we will gather data on the length of the caregiving period, the caregiver’s relationship to the care recipient, primary caregiver status, and cohabitation. We will also assess caregiving intensity through the number of hours of care provided per week, the types of tasks performed, and the care recipient’s level of independence in activities of daily living, measured by the Katz Index [].
Outcome Measures
Psychological Distress
Depression will be measured with the PHQ-9, a validated, self-administered instrument widely used to screen for, diagnose, and monitor the severity of depressive symptoms []. Respondents rate the frequency with which they have experienced each symptom over the past 2 weeks on a 4-point Likert scale, ranging from 0 (“Not at all”) to 3 (“Nearly every day”), yielding a total score between 0 and 27. Interpretation of PHQ-9 scores follows standard clinical guidelines: 0-4 (minimal depression), 5-9 (mild), 10-14 (moderate), 15-19 (moderately severe), and 20-27 (severe). A total score of 10 or higher is commonly used as a cutoff for identifying clinically significant depressive symptoms. Item 9 specifically assesses suicidal ideation and requires immediate clinical attention if endorsed. The PHQ-9 has demonstrated strong reliability and construct validity across diverse populations and care settings, including oncology. Its excellent psychometric properties have been reported across different age and racial or ethnic groups, supporting its utility among patients with cancer []. Anxiety will be measured with the GAD-7, a widely used, self-administered instrument designed to assess the severity of generalized anxiety symptoms over the past 2 weeks. It includes 7 items corresponding to core symptoms of generalized anxiety disorder. Each item is rated on a 4-point Likert scale ranging from 0 (“Not at all”) to 3 (“Nearly every day”), yielding a total score between 0 and 21 []. Interpretation of total scores follows established clinical cut points: 0-4 (minimal anxiety), 5-9 (mild), 10-14 (moderate), and 15-21 (severe). A score of 10 or above is commonly used as a threshold for identifying clinically significant anxiety symptoms, warranting further assessment. The GAD-7 has demonstrated excellent internal consistency (α=.92), strong test-retest reliability, and good criterion, construct, and convergent validity [,]. These strong psychometric properties have been confirmed across age groups and racial and ethnic backgrounds, including those diagnosed with cancer [-].
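The scoring and severity bands described above can be summarized in a minimal sketch: items rated 0-3 are summed, totals are mapped to the standard severity categories, scores of 10 or higher are flagged as clinically significant, and PHQ-9 item 9 (suicidal ideation) is flagged separately. The functions and example responses are illustrative only.

```python
# Minimal scoring sketch for the PHQ-9 and GAD-7 as described in the text.
def score_phq9(items: list[int]) -> dict:
    assert len(items) == 9 and all(0 <= i <= 3 for i in items)
    total = sum(items)
    bands = [(4, "minimal"), (9, "mild"), (14, "moderate"),
             (19, "moderately severe"), (27, "severe")]
    severity = next(label for upper, label in bands if total <= upper)
    return {"total": total, "severity": severity,
            "clinically_significant": total >= 10,
            "suicidal_ideation_flag": items[8] > 0}   # item 9 endorsement

def score_gad7(items: list[int]) -> dict:
    assert len(items) == 7 and all(0 <= i <= 3 for i in items)
    total = sum(items)
    bands = [(4, "minimal"), (9, "mild"), (14, "moderate"), (21, "severe")]
    severity = next(label for upper, label in bands if total <= upper)
    return {"total": total, "severity": severity,
            "clinically_significant": total >= 10}

print(score_phq9([2, 1, 2, 1, 1, 1, 1, 0, 1]))  # total 10 -> moderate, item 9 flagged
print(score_gad7([2, 2, 1, 1, 1, 1, 2]))        # total 10 -> moderate
```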
Closeness
The Inclusion of Other in the Self (IOS) Scale [,] is a widely used, single-item pictorial measure of interpersonal closeness. Grounded in self-expansion theory [,], the IOS assesses the degree to which individuals perceive their relationship partner as integrated into their sense of self. In the current study, the IOS will be used to evaluate perceived emotional closeness between family cancer caregivers and their care recipients. Participants are shown seven pairs of circles labeled “Self” and “Other,” ranging from no overlap (1) to almost complete overlap (7). They are asked to select the pair that best represents their relationship with the care recipient. This visual analog approach provides a rapid and intuitive assessment of relational closeness and has demonstrated validity across diverse populations [,,,]. The IOS has been linked to a variety of relationship and health outcomes and is commonly used in studies of romantic, familial, and community ties []. In this study, caregiver-reported IOS scores provide a snapshot of the emotional bond between caregiver and care recipient, an interpersonal factor that may influence psychological distress, motivation to engage in support tools, and the caregiving experience more broadly.
Caregiving Burden
Caregiving burden is assessed with the Zarit Burden Interview-Short Form (ZBI-12), a 12-item self-report measure used to assess the subjective burden experienced by informal caregivers []. It evaluates key dimensions of caregiver strain, including emotional, physical, and social impacts associated with caregiving responsibilities. Each item is rated on a 5-point Likert scale ranging from 0 (“Never”) to 4 (“Nearly always”), yielding a total score ranging from 0 to 48, with higher scores indicating greater perceived burden. Standard scoring guidelines categorize burden levels as follows: 0-10 (no to mild burden); 10-20 (mild to moderate burden); >20 (high burden). The ZBI-12 has demonstrated strong reliability and validity across diverse caregiving populations, including those caring for individuals with dementia, cancer, and other serious illnesses.
Communication
The Social Constraints Scale (SCS) is a 15-item self-report instrument developed to assess the extent to which individuals perceive their social environment, particularly close others such as spouses or family members, as inhibiting or discouraging the expression of illness-related thoughts and emotions [,]. In the context of cancer, the SCS captures perceived social responses that constrain open communication, such as minimizing concerns, changing the subject, or expressing discomfort when the patient discusses their cancer-related experiences. Each item is rated on a 4-point Likert scale ranging from 1 (“Never”) to 4 (“Often”), with higher total scores (ranging from 15 to 60), indicating greater perceived social constraint. The SCS has demonstrated strong psychometric properties across diverse populations coping with chronic illness.
Perceived Responsiveness
Caregivers’ perceptions of care recipient responsiveness will be measured with the 12-item Perceived Partner Responsiveness Scale (PPRS) []. The PPRS measures the degree to which people feel their loved ones are responsive to them, across two dimensions of perceived responsiveness in close relationships: (1) understanding (7 items, eg, “My loved one gets the facts right about me;” “My loved one knows me well”) and (2) validation (5 items, eg, “My loved one values and respects the whole package that is the real me;” “My loved one seems interested in what I am thinking and feeling”). Responses are scored on a 9-point Likert scale ranging from 1 (not at all true) to 9 (completely true). A total score is calculated by summing all items [], with higher scores indicating greater perceived responsiveness.
Communal Motivation to Care
Communal motivation will be assessed with the 10-item Partner-Specific Communal Motivation Scale []. Caregivers are asked to rate the extent to which they are communally motivated to care for the well-being of their care recipients. Responses are scored on a 9-point Likert scale ranging from 1 (not at all) to 9 (extremely). Example items are “Helping my loved one is a high priority for me,” “I would sacrifice very much to help my loved one,” and “I would be reluctant to sacrifice for my loved one.” The total score is the sum of all items after reverse coding items 2, 5, and 10, with higher scores indicating greater communal motivation.
Quality of Life
The Centers for Disease Control and Prevention (CDC) Healthy Days Measure, commonly referred to as health-related quality of life-4 (HRQOL-4), is a brief, validated instrument designed to assess health-related quality of life in population-based studies and public health surveillance []. The HRQOL-4 comprises 4 core items: self-rated general health, physically unhealthy days, mentally unhealthy days, and physical or mental health interference with daily activities. Each item captures a distinct domain of health-related quality of life, allowing for both individual and composite assessments of overall well-being. The HRQOL-4 has demonstrated strong reliability and validity across diverse populations [,].
Acceptability of Intervention
The AIM is a pragmatic, 4-item self-report instrument developed to assess stakeholders’ perceptions of the acceptability of a specific intervention or implementation strategy []. The AIM captures the extent to which an intervention is perceived as agreeable, satisfactory, or palatable, with items rated on a 5-point Likert scale ranging from “completely disagree” to “completely agree.” The items include (1) “(Intervention) meets my approval,” (2) “(Intervention) is appealing to me,” (3) “I like (Intervention),” and (4) “I welcome (Intervention).” It has been designed for use across diverse stakeholder groups, including service providers, administrators, and caregivers, and it requires no specialized training for administration or interpretation. While no established cutoff scores exist, higher total scores indicate greater acceptability. The AIM has demonstrated strong psychometric properties, including content and structural validity, reliability, and responsiveness to change [].
User Engagement
The User Engagement Scale–Short Form (UES-SF) is a 12-item self-report instrument developed to assess user engagement in human-computer interaction contexts []. The short form includes 4 subscales, each with 3 items: focused attention, perceived usability, aesthetic appeal, and reward. Participants respond using a 5-point Likert scale ranging from 1 (“Strongly Disagree”) to 5 (“Strongly Agree”). Per scoring instructions, items within the perceived usability subscale are reverse coded. Subscale scores are computed by averaging the responses within each domain, and a total engagement score is calculated by averaging across all 12 items. The UES-SF was designed for high usability, with items written at a fifth-grade reading level and no special training required for administration or interpretation. The instrument has demonstrated sound psychometric properties, including content and structural validity, internal consistency, and responsiveness to user experiences [].
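The scoring procedure just described (reverse coding the perceived usability items, averaging within subscales, and averaging across all 12 items) can be sketched as follows. The item-to-subscale mapping and item order shown here are assumptions for illustration; the published UES-SF item ordering should be used in practice.

```python
# Sketch of UES-SF scoring as described in the text. Item order is assumed.
SUBSCALES = {
    "focused_attention": [0, 1, 2],
    "perceived_usability": [3, 4, 5],   # reverse coded per scoring instructions
    "aesthetic_appeal": [6, 7, 8],
    "reward": [9, 10, 11],
}

def score_ues_sf(items: list[int]) -> dict:
    assert len(items) == 12 and all(1 <= i <= 5 for i in items)
    recoded = list(items)
    for idx in SUBSCALES["perceived_usability"]:
        recoded[idx] = 6 - recoded[idx]  # reverse code on the 1-5 scale
    scores = {name: sum(recoded[i] for i in idxs) / 3 for name, idxs in SUBSCALES.items()}
    scores["total"] = sum(recoded) / 12
    return scores

print(score_ues_sf([4, 5, 3, 2, 1, 2, 4, 4, 5, 5, 4, 4]))
```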
Data Analysis
Quantitative and qualitative data will be analyzed using a mixed methods approach to evaluate the feasibility and acceptability of the eCARE tool among family caregivers of individuals diagnosed with cancer. Descriptive statistics will be used to summarize participants’ sociodemographic, clinical, and caregiving characteristics, including means, SDs, ranges, and 95% CIs. Graphical methods (eg, histograms, boxplots, and scatterplots) will be used to examine variable distributions and identify the need for transformations. For comparisons, we will use ANOVA for continuous variables and chi-square tests for categorical variables. Differences in psychological distress, caregiving burden, relational closeness, perceived responsiveness, and motivation to provide care will also be explored. Feasibility, the primary outcome of Aim 1, will be assessed based on the number of participants completing at least 4 of 6 total weekly eCARE assessments during the 8-week study period. Participants exceeding 6 sessions due to more frequent use will be classified as having met the feasibility threshold, consistent with those completing at least 4 of 6 sessions. The study will be considered feasible if 60% or more of enrolled participants (≥36 of 60) achieve this end point. Acceptability, the secondary outcome, will be assessed using the AIM []. The mean and SD of AIM scores will be calculated, with an average score of ≥4.0 (on a 5-point scale) interpreted as evidence of acceptability. Exploratory latent growth curve modeling will be used to examine changes in depression and anxiety symptoms over time (measured by PHQ-9 and GAD-7). These analyses are considered exploratory due to the limited statistical power associated with the sample size (n=60).
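The protocol specifies exploratory latent growth curve modeling of PHQ-9 and GAD-7 trajectories. As a simpler, roughly analogous illustration of such a longitudinal trend analysis, the sketch below fits a linear mixed model with random intercepts and slopes to simulated weekly PHQ-9 scores; it is not the planned latent growth curve model, and the data and column names are assumed.

```python
# Illustrative longitudinal trend analysis on simulated weekly PHQ-9 data.
# A linear mixed model stands in for the planned latent growth curve model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n, weeks = 60, 6
data = pd.DataFrame({
    "pid": np.repeat(np.arange(n), weeks),
    "week": np.tile(np.arange(weeks), n),
})
base = rng.normal(10, 4, n)        # participant-specific baseline severity
slope = rng.normal(-0.3, 0.4, n)   # participant-specific weekly change
data["phq9"] = (base[data["pid"]] + slope[data["pid"]] * data["week"]
                + rng.normal(0, 2, len(data))).clip(0, 27)

# Random intercept and slope for week, grouped by participant
model = smf.mixedlm("phq9 ~ week", data, groups=data["pid"], re_formula="~week")
fit = model.fit()
print(fit.summary())
```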
For Aim 2, qualitative data from semi-structured interviews with a purposive subsample of 20 family caregivers (including both completers and noncompleters) will be analyzed using thematic content analysis in ATLAS.ti []. Interviews will be transcribed verbatim, and transcripts will be independently coded by trained research staff. An initial coding scheme will be developed inductively and applied iteratively. Two independent coders will review all transcripts, with discrepancies resolved through discussion until consensus is achieved []. Axial coding and thematic mapping will be used to identify higher-order categories related to eCARE’s perceived strengths and limitations, usability, relevance, and reported facilitators of or barriers to engagement. Analysis will continue until thematic saturation is reached, defined as the point at which no new codes or themes emerge [,,].
Ethical Considerations
This study was reviewed and approved by the IRB at the University of Houston (Protocol# STUDY00003186) and adheres to the ethical standards outlined in the Declaration of Helsinki. All study procedures will be conducted in accordance with the approved protocol and applicable institutional requirements for human subjects research.
Participants will provide informed consent prior to initiating any study procedures. Consent will be documented in writing at enrollment either via Qualtrics e-signature functionality or by signing a printed consent form when enrollment occurs in person. For the optional qualitative interview component, participants will complete an additional oral consent procedure prior to the start of the interview, and the study team will request a waiver of written documentation for that interview-specific consent, as applicable.
Participant privacy and confidentiality will be protected through multiple safeguards. Direct identifiers (eg, name, email, phone) will be collected solely for recruitment, scheduling, and compensation purposes and will be stored separately from research data, linked only via a study ID. App-based and survey data will be maintained on encrypted, password-protected platforms with access restricted to authorized study personnel; downloaded data will be stored on password-protected, encrypted university devices. While participants’ interest in the qualitative component will be captured as part of the informed consent process, oral informed consent will be obtained again prior to the commencement of the qualitative interviews. Qualitative interviews will be audio and/or video-recorded only with participant permission; recordings will be transcribed and then deleted following transcription. Interview transcripts will be labeled with study IDs. Only approved members of the research team will have access to identifiable information, and the master linkage file will be maintained by the principal investigator and destroyed at study completion per protocol.
Participants will receive incentives in the form of electronic Amazon gift cards. To minimize the risk of undue inducement, gift cards will be distributed at 3 predetermined time points: US $30 upon completion of the baseline consent and pretest assessment survey, US $30 at the midpoint assessment (T4) contingent on at least one completed eCARE check-in, and US $30 upon completion of the posttest assessment (T7). Participants who complete the qualitative interview will receive an additional US $50 e-gift card.
Results
Participant recruitment and enrollment began in June 2024, with data collection expected to conclude in August 2025. Data analysis will commence in December 2025, coinciding with the PI’s transition to a new institution, with preliminary findings anticipated by April-May 2026. The tables below summarize the baseline sociodemographic, clinical, and caregiving-related characteristics to be collected from enrolled participants and outline the primary feasibility and secondary acceptability outcomes of the pilot study.
| Category and variable | Measurement |
|---|---|
| Sociodemographic characteristics | |
| Age | Years |
| Sex | Male, female, or other |
| Gender | Self-reported |
| Race and ethnicity | US Census categories; self-reported |
| Education | Highest level completed |
| Relationship status | Married, partnered, single, etc |
| Employment | Full-time, part-time, unemployed, or not applicable |
| Insurance coverage | Private, public, other, or uninsured |
| Income | Income brackets |
| Clinical characteristics of care recipient | |
| Age | Years |
| Patient sex | Male, female, or other |
| Cancer diagnosis | As reported by family caregivers |
| Cancer treatment | Chemotherapy, radiation, surgery, etc |
| Cancer stage | I-IV or not applicable |
| Caregiving characteristics | |
| Length of caregiving period | Months or years |
| Relationship with care recipient | Categories based on self-report |
| Primary caregiver status | Yes or no |
| Cohabitation with care recipient | Yes or no |
| Caregiving intensity (hours/week) | Total hours per week |
| Type of tasks provided | Household, personal, practical, and emotional |
| Care recipient ADLa independence (Katz Index) | Yes or no; score range 0-6 |
aADL: activities of daily living.
| Outcome type and measure | Definition/criteria | Success threshold |
|---|---|---|
| Primary outcome: Feasibility end point (eCAREa completion rate) | Completion of at least 4 of 6 weekly eCARE assessments (≥66.6%) per participant over the 8-week period | At least 60% of enrolled participants (n≥36 of 60) meet the completion threshold |
| Secondary outcome: Acceptability of Intervention Measure (AIM) | Participant-rated acceptability on the 4-item AIM (5-point scale) at T7 | Average AIM score of ≥4.0 out of 5 |
aeCARE: Ellipsis Caregiver Assessment Enhancement.
Discussion
Principal Findings
Family caregivers play a central role in cancer care, assuming extensive practical, emotional, and coordination responsibilities that contribute to substantial psychological, physical, and financial strain [,,]. Across cancer types and illness stages, caregivers frequently experience clinically significant anxiety and depression, often at rates exceeding those observed in patients themselves, yet their mental health needs remain insufficiently addressed [,,-,]. This single-arm, prospective pilot study is among the first to evaluate the feasibility and acceptability of using artificial intelligence and speech-based analytics through a mobile app (eCARE) to monitor the severity of depression and anxiety among a sample of community-based family caregivers of individuals living with cancer. Traditional methods, such as clinic-based assessments or paper forms, are often limited by high participant burden and low engagement, which can compromise data accuracy and limit opportunities for timely intervention. It is essential to explore innovative strategies offering more flexible, scalable, and caregiver-centered approaches to distress monitoring. To this end, we hypothesize that (1) eCARE will be feasible and acceptable and (2) participants will provide in-depth information related to barriers and facilitators influencing its uptake, in addition to feedback for future refinement and implementation.
Prior research has demonstrated that eHealth interventions for cancer caregivers can reduce depressive symptoms and modestly improve quality of life; however, existing studies are few, heterogeneous, and often limited by short follow-up periods and narrow outcome focus [,,,,,]. Moreover, most digital interventions rely on self-report questionnaires, which can introduce response burden and missing data. By requiring a brief speech sample to generate predictions of depression and anxiety, eCARE represents a minimally burdensome pathway for mental health monitoring. The mobile application integrates natural language processing to analyze spoken words and acoustic modeling to assess vocal features, leveraging an accessible, multimodal approach well-suited for smartphones [,,,,]. As Schoenberg [] highlighted, digital tools that are frictionless and embedded in daily life are more likely to be adopted and sustained in real-world environments.
Findings from this pilot study will provide preliminary data on key indicators of feasibility (eg, completion rates) and acceptability (eg, satisfaction ratings). This information will be instrumental in helping to determine whether a speech-based tool can be successfully integrated into the complex caregiving context, where time constraints, emotional burden, and competing demands often limit participation in mental health initiatives. To further this goal, the study also incorporates qualitative feedback, allowing for a user-informed refinement of the application’s content, usability, and perceived value. While eCARE does not currently involve provider feedback loops, future iterations may benefit from exploring how clinicians can access and respond to data trends emerging from the app, potentially enhancing care coordination and timely referral to supportive services.
Should eCARE demonstrate feasibility and acceptability, these findings will directly inform the design of a larger, multisite randomized controlled trial. Such a trial would be designed to robustly compare eCARE with established screening methods, such as the National Comprehensive Cancer Network (NCCN) Distress Thermometer or standard electronic symptom checklists, and to evaluate its effects on family caregivers’ mental health.
Finally, since symptoms of depression and anxiety are among the most prominent predictors of quality of life and long-term health outcomes, effective and early identification is critical. If validated, speech-based assessments like eCARE could serve as a foundation for future interventions aimed at improving caregiver resilience, reducing emotional burden, and ultimately enhancing the caregiving experience.
Limitations
This study has several limitations that should be considered. First, as a single-arm feasibility and acceptability study, it is not designed to evaluate the efficacy of eCARE in reducing depression or anxiety symptoms, nor to support causal inferences. Second, generalizability may be limited, as participation requires access to a smartphone and a minimum level of comfort with mobile technology, potentially excluding family caregivers with lower digital literacy or limited access to digital resources. Third, self-selection bias may be present, as caregivers who choose to enroll may be experiencing higher distress, may be more motivated, or may be more receptive to digital mental health tools, which could inflate estimates of feasibility and acceptability. Finally, the relatively short monitoring period limits the ability to evaluate sustained engagement, longer-term acceptability, and the barriers and facilitators influencing eCARE uptake over time.
Conclusions
This study will generate foundational evidence on the feasibility, acceptability, and perceived utility of a mobile, speech-based, AI-enabled tool for monitoring depression and anxiety among family caregivers of individuals living with cancer. Findings will inform the design of a fully powered clinical trial and guide the optimization of remote mental health screening approaches tailored to the caregiving experience. As a scalable, low-cost, and low-burden intervention, eCARE has the potential to support timely identification of psychological distress, enhance access to supportive care services, and improve coordination of psychosocial care across oncology settings. Beyond cancer caregiving, the underlying model holds promise for adaptation to other chronic or life-limiting conditions and to non–English-speaking populations, pending appropriate validation. Future studies should compare eCARE with established screening tools and examine long-term clinical outcomes, user engagement, and effects on both caregiver and patient well-being, while also evaluating strategies for integrating AI-derived screening results into clinical workflows without increasing provider burden. Dissemination of findings will include sharing results with academic, clinical, and community stakeholders to support continued refinement and implementation of digital mental health solutions for caregivers.
Acknowledgments
We would like to thank the caregivers who will participate in this study for sharing their time and experiences. We are also grateful to our clinical and community partners for their support in recruitment and outreach: our deepest thanks to the Kelsey Research Foundation, Family Caregiver Alliance, the Leukemia and Lymphoma Society, the Collaborative Approach for AANHPI Research and Education Registry, and Friend for Life Cancer Support Network.
The authors declare the use of generative artificial intelligence (GAI) in the writing process. According to the GAIDeT taxonomy (2025), the following tasks were delegated to GAI tools under full human supervision: Proofreading and editing of the revised manuscript. Responsibility for the final manuscript lies entirely with the authors. GAI tools are not listed as authors and do not bear responsibility for the final outcomes.
Data Availability
The datasets generated or analyzed during this study will be available from the corresponding author on reasonable request.
Funding
This study was supported by a SEED Grant from the University of Houston, awarded to CA. The content is solely the responsibility of the authors and does not necessarily represent the official views of the University of Houston.
Authors' Contributions
Conceptualization: CA, MA, ASA
Methodology: CA, TN, MA, ASA
Investigation: CA, MA, TN, AB, SA
Resources: MA, TN
Software: MA, TN
Project administration: CA
Writing – original draft: CA, AB, IKM
Writing – review & editing: All authors
Conflicts of Interest
MA and TN disclose a conflict of interest as employees and shareholders of Ellipsis Health. All other authors declare no conflicts of interest.
CONSORT checklist.
PDF File (Adobe PDF File), 281 KB

Peer-review report from the University of Houston SEED Grant Review Committee.
PDF File (Adobe PDF File), 176 KB

References
- Chow PI, Showalter SL, Gerber MS, Kennedy E, Brenin DR, Schroen AT, et al. Use of mental health apps by breast cancer patients and their caregivers in the United States: protocol for a pilot pre-post study. JMIR Res Protoc. 2019;8(1):e11452. [FREE Full text] [CrossRef] [Medline]
- Ochoa CY, Buchanan Lunsford N, Lee Smith J. Impact of informal cancer caregiving across the cancer experience: a systematic literature review of quality of life. Palliat Support Care. 2020;18(2):220-240. [FREE Full text] [CrossRef] [Medline]
- Girgis A, Lambert S. Cost of informal caregiving in cancer care. Cancer Forum. 2017;41(2):16-22. [FREE Full text]
- Taylor J, Fradgley E, Clinton-McHarg T, Byrnes E, Paul C. What are the sources of distress in a range of cancer caregivers? A qualitative study. Support Care Cancer. 2021;29(5):2443-2453. [CrossRef] [Medline]
- Sklenarova H, Krümpelmann A, Haun MW, Friederich H, Huber J, Thomas M, et al. When do we need to care about the caregiver? Supportive care needs, anxiety, and depression among informal caregivers of patients with cancer and cancer survivors. Cancer. 2015;121(9):1513-1519. [FREE Full text] [CrossRef] [Medline]
- Marzorati C, Renzi C, Russell-Edu SW, Pravettoni G. Telemedicine use among caregivers of cancer patients: systematic review. J Med Internet Res. 2018;20(6):e223. [FREE Full text] [CrossRef] [Medline]
- Grunfeld E, Coyle D, Whelan T, Clinch J, Reyno L, Earle CC, et al. Family caregiver burden: results of a longitudinal study of breast cancer patients and their principal caregivers. CMAJ. 2004;170(12):1795-1801. [FREE Full text] [CrossRef] [Medline]
- Nipp RD, El-Jawahri A, Fishbein JN, Gallagher ER, Stagl JM, Park ER, et al. Factors associated with depression and anxiety symptoms in family caregivers of patients with incurable cancer. Ann Oncol. 2016;27(8):1607-1612. [FREE Full text] [CrossRef] [Medline]
- Pan Y, Lin Y. Systematic review and meta-analysis of prevalence of depression among caregivers of cancer patients. Front Psychiatry. 2022;13:817936. [FREE Full text] [CrossRef] [Medline]
- Cochrane A, Reid O, Woods S, Gallagher P, Dunne S. Variables associated with distress amongst informal caregivers of people with lung cancer: a systematic review of the literature. Psychooncology. 2021;30(8):1246-1261. [CrossRef] [Medline]
- Lambert SD, Harrison JD, Smith E, Bonevski B, Carey M, Lawsin C, et al. The unmet needs of partners and caregivers of adults diagnosed with cancer: a systematic review. BMJ Support Palliat Care. 2012;2(3):224-230. [CrossRef] [Medline]
- Cheng Q, Xu B, Ng MS, Duan Y, So WK. Effectiveness of psychoeducational interventions among caregivers of patients with cancer: a systematic review and meta-analysis. Int J Nurs Stud. 2022;127:104162. [CrossRef] [Medline]
- Dave R, Friedman S, Miller-Sonet E, Moore T, Peterson E, Fawzy Doran J, et al. Identifying and addressing the needs of caregivers of patients with cancer: evidence on interventions and the role of patient advocacy groups. Future Oncol. 2024;20(33):2589-2602. [FREE Full text] [CrossRef] [Medline]
- Bergerot CD, Bergerot PG, Philip EJ, Ferrari R, Peixoto RM, Crane TE, et al. Enhancing cancer supportive care: integrating psychosocial support, nutrition, and physical activity using telehealth solutions. JCO Glob Oncol. 2024;10:e2400333. [CrossRef] [Medline]
- Oechsle K, Ullrich A, Marx G, Benze G, Heine J, Dickel L, et al. Psychological burden in family caregivers of patients with advanced cancer at initiation of specialist inpatient palliative care. BMC Palliat Care. 2019;18(1):102. [FREE Full text] [CrossRef] [Medline]
- Fujinami R, Sun V, Zachariah F, Uman G, Grant M, Ferrell B. Family caregivers' distress levels related to quality of life, burden, and preparedness. Psychooncology. 2015;24(1):54-62. [FREE Full text] [CrossRef] [Medline]
- Halkett GKB, Lobb EA, Shaw T, Sinclair MM, Miller L, Hovey E, et al. Distress and psychological morbidity do not reduce over time in carers of patients with high-grade glioma. Support Care Cancer. 2017;25(3):887-893. [CrossRef] [Medline]
- Shaffer KM, Jacobs JM, Nipp RD, Carr A, Jackson VA, Park ER, et al. Mental and physical health correlates among family caregivers of patients with newly-diagnosed incurable cancer: a hierarchical linear regression analysis. Support Care Cancer. 2017;25(3):965-971. [FREE Full text] [CrossRef] [Medline]
- El-Jawahri A, Greer JA, Park ER, Jackson VA, Kamdar M, Rinaldi SP, et al. Psychological distress in bereaved caregivers of patients with advanced cancer. J Pain Symptom Manage. 2021;61(3):488-494. [FREE Full text] [CrossRef] [Medline]
- Kim Y, Carver CS, Spiegel D, Mitchell H, Cannady RS. Role of family caregivers' self-perceived preparedness for the death of the cancer patient in long-term adjustment to bereavement. Psychooncology. 2017;26(4):484-492. [CrossRef] [Medline]
- Fasse L, Flahault C, Brédart A, Dolbeault S, Sultan S. Describing and understanding depression in spouses of cancer patients in palliative phase. Psychooncology. 2015;24(9):1131-1137. [CrossRef] [Medline]
- Geng H, Chuang D, Yang F, Yang Y, Liu W, Liu L, et al. Prevalence and determinants of depression in caregivers of cancer patients: a systematic review and meta-analysis. Medicine (Baltimore). 2018;97(39):e11863. [FREE Full text] [CrossRef] [Medline]
- Trevino KM, Prigerson HG, Maciejewski PK. Advanced cancer caregiving as a risk for major depressive episodes and generalized anxiety disorder. Psychooncology. 2018;27(1):243-249. [FREE Full text] [CrossRef] [Medline]
- Alam S, Hannon B, Zimmermann C. Palliative care for family caregivers. J Clin Oncol. 2020;38(9):926-936. [CrossRef] [Medline]
- Bedaso A, Dejenu G, Duko B. Depression among caregivers of cancer patients: updated systematic review and meta-analysis. Psychooncology. 2022;31(11):1809-1820. [FREE Full text] [CrossRef] [Medline]
- Molassiotis A, Wang M. Understanding and supporting informal cancer caregivers. Curr Treat Options Oncol. 2022;23(4):494-513. [FREE Full text] [CrossRef] [Medline]
- Benyo S, Phan C, Goyal N. Health and well-being needs among head and neck cancer caregivers - a systematic review. Ann Otol Rhinol Laryngol. 2023;132(4):449-459. [FREE Full text] [CrossRef] [Medline]
- Wang T, Molassiotis A, Chung BPM, Tan J. Unmet care needs of advanced cancer patients and their informal caregivers: a systematic review. BMC Palliat Care. 2018;17(1):96. [FREE Full text] [CrossRef] [Medline]
- Christophe V, Anota A, Vanlemmens L, Cortot A, Ceban T, Piessen G, et al. Unmet supportive care needs of caregivers according to medical settings of cancer patients: a cross-sectional study. Support Care Cancer. 2022;30(11):9411-9419. [CrossRef] [Medline]
- Wang AW, Kim Y, Ting A, Lam WWT, Lambert SD. Healthcare professionals' perspectives on the unmet needs of cancer patients and family caregivers: global psycho-oncology investigation. Support Care Cancer. 2022;31(1):36. [FREE Full text] [CrossRef] [Medline]
- Hart NH, Crawford-Williams F, Crichton M, Yee J, Smith TJ, Koczwara B, et al. Unmet supportive care needs of people with advanced cancer and their caregivers: a systematic scoping review. Crit Rev Oncol Hematol. 2022;176:103728. [CrossRef] [Medline]
- Ullrich A, Marx G, Bergelt C, Benze G, Zhang Y, Wowretzko F, et al. Supportive care needs and service use during palliative care in family caregivers of patients with advanced cancer: a prospective longitudinal study. Support Care Cancer. 2021;29(3):1303-1315. [FREE Full text] [CrossRef] [Medline]
- Zhong C, Luo X, Tan M, Chi J, Guo B, Tang J, et al. Digital health interventions to improve mental health in patients with cancer: umbrella review. J Med Internet Res. 2025;27:e69621. [FREE Full text] [CrossRef] [Medline]
- Su Z, Li X, McDonnell D, Fernandez AA, Flores BE, Wang J. Technology-based interventions for cancer caregivers: concept analysis. JMIR Cancer. 2021;7(4):e22140. [FREE Full text] [CrossRef] [Medline]
- Li Y, Li J, Zhang Y, Ding Y, Hu X. The effectiveness of e-Health interventions on caregiver burden, depression, and quality of life in informal caregivers of patients with cancer: a systematic review and meta-analysis of randomized controlled trials. Int J Nurs Stud. 2022;127:104179. [CrossRef] [Medline]
- Zhang A, Kamat A, Acquati C, Aratow M, Kim JS, DuVall AS, et al. Evaluating the feasibility and acceptability of an artificial-intelligence-enabled and speech-based distress screening mobile app for adolescents and young adults diagnosed with cancer: a study protocol. Cancers (Basel). 2022;14(4):914. [FREE Full text] [CrossRef] [Medline]
- Siegler AJ, Knox J, Bauermeister JA, Golinkoff J, Hightow-Weidman L, Scott H. Mobile app development in health research: pitfalls and solutions. Mhealth. 2021;7:32. [FREE Full text] [CrossRef] [Medline]
- Klagholz SD, Ross A, Wehrlen L, Bedoya SZ, Wiener L, Bevans MF. Assessing the feasibility of an electronic patient-reported outcome (ePRO) collection system in caregivers of cancer patients. Psychooncology. 2018;27(4):1350-1352. [FREE Full text] [CrossRef] [Medline]
- Shin JY, Kang TI, Noll RB, Choi SW. Supporting caregivers of patients with cancer: a summary of technology-mediated interventions and future directions. Am Soc Clin Oncol Educ Book. 2018;38:838-849. [FREE Full text] [CrossRef] [Medline]
- Darley A, Coughlan B, Furlong E. People with cancer and their family caregivers' personal experience of using supportive eHealth technology: a narrative review. Eur J Oncol Nurs. 2021;54:102030. [FREE Full text] [CrossRef] [Medline]
- Heynsbergh N, Heckel L, Botti M, Livingston PM. Feasibility, useability and acceptability of technology-based interventions for informal cancer carers: a systematic review. BMC Cancer. 2018;18(1):244. [FREE Full text] [CrossRef] [Medline]
- Lazenby M, Ercolano E, Grant M, Holland JC, Jacobsen PB, McCorkle R. Supporting commission on cancer-mandated psychosocial distress screening with implementation strategies. J Oncol Pract. 2015;11(3):e413-e420. [FREE Full text] [CrossRef] [Medline]
- Corveleyn A, Fann JR, Gray TF, Samolovitch S, Vanderlan J. Addressing mental health in cancer care: optimizing interdisciplinary psychosocial support. J Natl Compr Canc Netw. 2025;23(Supplement):e255002. [CrossRef]
- Smith SK, Loscalzo M, Mayer C, Rosenstein DL. Best practices in oncology distress management: beyond the screen. Am Soc Clin Oncol Educ Book. 2018;38:813-821. [FREE Full text] [CrossRef] [Medline]
- 2020 standards and resources. American College of Surgeons. 2023. URL: https://www.facs.org/quality-programs/cancer-programs/commission-on-cancer/standards-and-resources/2020/ [accessed 2026-01-16]
- Mundt JC, Snyder PJ, Cannizzaro MS, Chappie K, Geralts DS. Voice acoustic measures of depression severity and treatment response collected via interactive voice response (IVR) technology. J Neurolinguistics. 2007;20(1):50-64. [FREE Full text] [CrossRef] [Medline]
- Hashim NW, Wilkes M, Salomon R, Meggs J, France DJ. Evaluation of voice acoustics as predictors of clinical depression scores. J Voice. 2017;31(2):256.e1-256.e6. [CrossRef] [Medline]
- Galili L, Amir O, Gilboa-Schechtman E. Acoustic properties of dominance and request utterances in social anxiety. J Soc Clin Psychol. 2013;32(6):651-673. [CrossRef]
- Teferra BG, Rueda A, Pang H, Valenzano R, Samavi R, Krishnan S, et al. Screening for depression using natural language processing: literature review. Interact J Med Res. 2024;13:e55067. [FREE Full text] [CrossRef] [Medline]
- Faurholt-Jepsen M, Busk J, Frost M, Vinberg M, Christensen EM, Winther O, et al. Voice analysis as an objective state marker in bipolar disorder. Transl Psychiatry. 2016;6(7):e856. [FREE Full text] [CrossRef] [Medline]
- Argolo F, Magnavita G, Mota NB, Ziebold C, Mabunda D, Pan PM, et al. Lowering costs for large-scale screening in psychosis: a systematic review and meta-analysis of performance and value of information for speech-based psychiatric evaluation. Braz J Psychiatry. 2020;42(6):673-686. [FREE Full text] [CrossRef] [Medline]
- Trevino AC, Quatieri TF, Malyska N. Phonologically-based biomarkers for major depressive disorder. EURASIP J Adv Signal Process. 2011;2011(1):42. [CrossRef]
- Cummins N, Scherer S, Krajewski J, Schnieder S, Epps J, Quatieri T. A review of depression and suicide risk assessment using speech analysis. Speech Commun. 2015;71:10-49. [FREE Full text] [CrossRef]
- Muaremi A, Gravenhorst F, Grünerbl A, Arnrich B, Tröster G. Assessing bipolar episodes using speech cues derived from phone calls. 2014. Presented at: 4th International Symposium, MindCare 2014; May 8-9:103-114; Tokyo, Japan. [CrossRef]
- Low DM, Bentley KH, Ghosh SS. Automated assessment of psychiatric disorders using speech: a systematic review. Laryngoscope Investig Otolaryngol. 2020;5(1):96-116. [FREE Full text] [CrossRef] [Medline]
- Espinola CW, Gomes JC, Pereira JMS, dos Santos WP. Detection of major depressive disorder using vocal acoustic analysis and machine learning. Res Biomed Eng. 2020;37:53-64. [CrossRef]
- Rutowski T, Shriberg E, Harati A, Lu Y, Chlebek P, Oliveira R. Depression and anxiety prediction using deep language models and transfer learning. IEEE; 2020. Presented at: 7th International Conference on Behavioural and Social Computing (BESC); November 05-07:1-6; Bournemouth, United Kingdom. [CrossRef]
- Harati A, Shriberg E, Rutowski T, Chlebek P, Lu Y, Oliveira R. Speech-based depression prediction using encoder-weight-only transfer learning and a large corpus. IEEE; 2021. Presented at: ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); June 06-11:7273-7277; Toronto, ON. [CrossRef]
- Ravi V, Wang J, Flint J, Alwan A. Enhancing accuracy and privacy in speech-based depression detection through speaker disentanglement. Comput Speech Lang. 2024;86:101605. [FREE Full text] [CrossRef] [Medline]
- Rutowski T, Harati A, Lu Y, Shriberg E. Optimizing speech-input length for speaker-independent depression classification. Proc Interspeech. 2019:3023-3027. [CrossRef]
- Chlebek P, Shriberg E, Lu Y, Rutowski T, Harati A, Oliveira R. Comparing speech recognition services for HCI applications in behavioral health. 2020. Presented at: UbiComp/ISWC '20: 2020 ACM International Joint Conference on Pervasive and Ubiquitous Computing and 2020 ACM International Symposium on Wearable Computers; 2020 September 12 - 17:483-487; Virtual Event Mexico. [CrossRef]
- Karlin B, Henry D, Anderson R, Cieri S, Aratow M, Shriberg E, et al. Digital phenotyping for detecting depression severity in a large payor-provider system: retrospective study of speech and language model performance. JMIR AI. 2025;4:e69149. [FREE Full text] [CrossRef] [Medline]
- Zhao Z, Bao Z, Zhang Z, Deng J, Cummins N, Wang H, et al. Automatic assessment of depression from speech via a hierarchical attention transfer network and attention autoencoders. IEEE J Sel Top Signal Process. 2020;14(2):423-434. [CrossRef]
- Kroenke K, Spitzer RL, Williams JB. The PHQ-9: validity of a brief depression severity measure. J Gen Intern Med. 2001;16(9):606-613. [FREE Full text] [CrossRef] [Medline]
- Spitzer RL, Kroenke K, Williams JBW, Löwe B. A brief measure for assessing generalized anxiety disorder: the GAD-7. Arch Intern Med. 2006;166(10):1092-1097. [CrossRef] [Medline]
- Moriarty DG, Zack MM, Kobau R. The centers for disease control and prevention's healthy days measures - population tracking of perceived physical and mental health over time. Health Qual Life Outcomes. 2003;1:37. [FREE Full text] [CrossRef] [Medline]
- Bédard M, Molloy D, Squire L, Dubois S, Lever J, O'Donnell M. The Zarit Burden Interview: a new short version and screening version. Gerontologist. 2001;41(5):652-657. [CrossRef] [Medline]
- Aron A, Aron EN, Smollan D. Inclusion of other in the self scale and the structure of interpersonal closeness. J Pers Soc Psychol. 1992;63(4):596-612. [CrossRef]
- Helgeson VS, Van Vleet M. Short Report: Inclusion of other in the self scale: an adaptation and exploration in a diverse community sample. J Soc Pers Relat. 2019;36(11-12):4048-4056. [FREE Full text] [CrossRef] [Medline]
- Lepore SJ, Silver RC, Wortman CB, Wayment HA. Social constraints, intrusive thoughts, and depressive symptoms among bereaved mothers. J Pers Soc Psychol. 1996;70(2):271-282. [CrossRef] [Medline]
- Lepore S, Ituarte P. Optimism about cancer enhances mood by reducing negative social relations. Cancer Research Therapy and Control. 1999;8:165-174.
- Reis HT, Crasta D, Rogge RD, Maniaci MR, Carmichael CL. Perceived partner responsiveness scale (PPRS). In: Worthington DL, Bodie GD, editors. The Sourcebook of Listening Research. New Jersey. Wiley; 2017.
- Reis HT, Lee KY, O'Keefe SD, Clark MS. Perceived partner responsiveness promotes intellectual humility. J Exp Soc Psychol. 2018;79:21-33. [CrossRef]
- Lemay EP, Neal AM. The wishful memory of interpersonal responsiveness. J Pers Soc Psychol. 2013;104(4):653-672. [CrossRef] [Medline]
- Le BM, Impett EA, Lemay EP, Muise A, Tskhay KO. Communal motivation and well-being in interpersonal relationships: an integrative review and meta-analysis. Psychol Bull. 2018;144(1):1-25. [CrossRef] [Medline]
- Weiner BJ, Lewis CC, Stanick C, Powell BJ, Dorsey CN, Clary AS, et al. Psychometric assessment of three newly developed implementation outcome measures. Implement Sci. 2017;12(1):108. [FREE Full text] [CrossRef] [Medline]
- O’Brien H, Cairns P, Hall M. A practical approach to measuring user engagement with the refined user engagement scale (UES) and new UES short form. Int J Hum Comput Stud. 2018;112:28-39. [FREE Full text] [CrossRef]
- Kerr C, Nixon A, Wild D. Assessing and demonstrating data saturation in qualitative inquiry supporting patient-reported outcomes research. Expert Rev Pharmacoecon Outcomes Res. 2010;10(3):269-281. [CrossRef] [Medline]
- Braun V, Clarke V. To saturate or not to saturate? Questioning data saturation as a useful concept for thematic analysis and sample-size rationales. Qual Res Sport Exerc Health. 2019;13(2):201-216. [CrossRef]
- Guest G, Bunce A, Johnson L. How many interviews are enough?: an experiment with data saturation and variability. Field Methods. 2006;18(1):59-82. [FREE Full text] [CrossRef]
- Hennink MM, Kaiser BN, Marconi VC. Code saturation versus meaning saturation: how many interviews are enough? Qual Health Res. 2017;27(4):591-608. [FREE Full text] [CrossRef] [Medline]
- Guest G, Namey E, Chen M. A simple method to assess and report thematic saturation in qualitative research. PLoS One. 2020;15(5):e0232076. [FREE Full text] [CrossRef] [Medline]
- Hennink M, Kaiser BN. Sample sizes for saturation in qualitative research: a systematic review of empirical tests. Soc Sci Med. 2022;292:114523. [FREE Full text] [CrossRef] [Medline]
- Katz S. Assessing self-maintenance: activities of daily living, mobility, and instrumental activities of daily living. J Am Geriatr Soc. 1983;31(12):721-727. [CrossRef] [Medline]
- Levis B, Benedetti A, Thombs BD, DEPRESsion Screening Data (DEPRESSD) Collaboration. Accuracy of patient health questionnaire-9 (PHQ-9) for screening to detect major depression: individual participant data meta-analysis. BMJ. 2019;365:l1476. [FREE Full text] [CrossRef] [Medline]
- Löwe B, Decker O, Müller S, Brähler E, Schellberg D, Herzog W, et al. Validation and standardization of the generalized anxiety disorder screener (GAD-7) in the general population. Med Care. 2008;46(3):266-274. [CrossRef] [Medline]
- Rutter LA, Brown TA. Psychometric properties of the generalized anxiety disorder scale-7 (GAD-7) in outpatients with anxiety and mood disorders. J Psychopathol Behav Assess. 2017;39(1):140-146. [FREE Full text] [CrossRef] [Medline]
- Lee B, Kim YE. The psychometric properties of the generalized anxiety disorder scale (GAD-7) among Korean university students. Psychiatry Clin Psychopharmacol. 2019;29(4):864-871. [CrossRef]
- Johnson SU, Ulvenes PG, Øktedalen T, Hoffart A. Psychometric properties of the general anxiety disorder 7-item (GAD-7) scale in a heterogeneous psychiatric sample. Front Psychol. 2019;10:1713. [FREE Full text] [CrossRef] [Medline]
- Aron A, McLaughlin-Volpe T, Mashek D, Lewandowski G, Wright S, Aron E. Including others in the self. Eur Rev Soc Psychol. 2004;15(1):101-132. [CrossRef]
- Aron A, Aron EN, Tudor M, Nelson G. Close relationships as including other in the self. J Pers Soc Psychol. 1991;60(2):241-253. [CrossRef]
- Aron A, Lewandowski JG, Mashek D, Aron E. The self-expansion model of motivation and cognition in close relationships. In: Simpson JA, Campbell L, editors. The Oxford Handbook of Close Relationships, Oxford Library of Psychology. England. Oxford University Press; 2013:90-115.
- Saita E, Acquati C, Kayser K. Coping with early stage breast cancer: examining the influence of personality traits and interpersonal closeness. Front Psychol. 2015;6:88. [FREE Full text] [CrossRef] [Medline]
- Mashek D, Cannaday LW, Tangney JP. Inclusion of community in self scale: a single-item pictorial measure of community connectedness. J Community Psychol. 2007;35(2):257-275. [CrossRef]
- Dumas SE, Dongchung TY, Sanderson ML, Bartley K, Levanon Seligson A. A comparison of the four healthy days measures (HRQOL-4) with a single measure of self-rated general health in a population-based health survey in New York City. Health Qual Life Outcomes. 2020;18(1):315. [FREE Full text] [CrossRef] [Medline]
- ATLAS.ti (Version 23). ATLAS.ti. Berlin, Germany; 2023. URL: https://atlasti.com/ [accessed 2026-01-16]
- Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3(2):77-101. [CrossRef]
- Boddy CR. Sample size for qualitative research. Qual Mark Res. 2016;19(4):426-432. [CrossRef]
- Malgaroli M, Hull TD, Zech JM, Althoff T. Natural language processing for mental health interventions: a systematic review and research framework. Transl Psychiatry. 2023;13(1):309. [FREE Full text] [CrossRef] [Medline]
- Flanagan O, Chan A, Roop P, Sundram F. Using acoustic speech patterns from smartphones to investigate mood disorders: scoping review. JMIR Mhealth Uhealth. 2021;9(9):e24352. [FREE Full text] [CrossRef] [Medline]
- Schoenberg R. Digital health companions: where caregivers can’t go. NEJM Catal Innov Care Deliv. 2025;6(6). [CrossRef]
Abbreviations
| AI: artificial intelligence |
| AIM: acceptability of intervention measure |
| AANHPI: Asian American, Native Hawaiian, and Pacific Islander |
| CARE: Collaborative Approach for AANHPI Research and Education |
| CDC: Centers for Disease Control and Prevention |
| D&A: depression and anxiety |
| eCARE: Ellipsis Caregiver Assessment Enhancement |
| GAD-7: Generalized Anxiety Disorder–7 |
| HIPAA: Health Insurance Portability and Accountability Act |
| HRQOL-4: Health-Related Quality of Life–4-Item Healthy Days Measure |
| IOS: Inclusion of Other in the Self Scale |
| IRB: Institutional Review Board |
| NAMI: National Alliance on Mental Illness |
| NCCN: National Comprehensive Cancer Network |
| PHQ-9: Patient Health Questionnaire–9 |
| PPRS: Perceived Partner Responsiveness Scale |
| SCS: Social Constraints Scale |
| UES-SF: User Engagement Scale–Short Form |
| ZBI-12: Zarit Burden Interview–Short Form (12 items) |
Edited by A Schwartz. The proposal for this study was peer reviewed by the University of Houston SEED Grant Review Committee; see the Multimedia Appendix for the peer-review report. Submitted 30.Aug.2025; accepted 31.Dec.2025; published 12.Feb.2026.
Copyright©Chiara Acquati, Michael Aratow, Tahmida Nazreen, Arunima Bhattacharjee, Isabella K Marra, Ashley S Alexander. Originally published in JMIR Research Protocols (https://www.researchprotocols.org), 12.Feb.2026.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Research Protocols, is properly cited. The complete bibliographic information, a link to the original publication on https://www.researchprotocols.org, as well as this copyright and license information must be included.

