Protocol
Abstract
Background: Anhedonia is a core symptom of depressive disorders. Although group-level cutoff points for the Snaith–Hamilton Pleasure Scale (SHAPS) are established, real-time, individual-level monitoring remains limited. Ecological momentary assessment (EMA) enables high-frequency sampling, and simple moving averages (SMA; 7/14/30 days) offer an interpretable way to smooth daily signals and surface early-warning trends.
Objective: The aim of this study was to develop and validate a cross-device platform that collects EMA-based SHAPS responses and visualizes SMA trends for individual anhedonia monitoring. This part I protocol focuses on platform design, feasibility, usability, data quality, and initial analytical validity to inform subsequent evaluation.
Methods: We will conduct a single-arm methodological pilot in approximately 24 adults, recruited from an outpatient psychiatry clinic (clinical cohort) and a university setting (nonclinical cohort). After baseline assessment, participants will complete 30 days of daily EMA-SHAPS. Primary outcomes are feasibility (adherence, missingness patterns, timing fidelity) and usability/face validity (eg, System Usability Scale, qualitative feedback). We will also describe expert content validity ratings for key visual elements and exploratory analytic checks of within-person variability and the descriptive behavior of 7/14/30-day moving averages; these analyses are not powered for confirmatory hypothesis testing.
Results: This protocol specifies the design–validate sequence, recruitment procedures, and predefined adherence and missing-data rules for computing moving averages. Recruitment for the year-1 feasibility pilot is scheduled to begin in March 2026 following institutional review board approval (approval 114006; approval date: June 10, 2025), with feasibility metrics, usability indices, and parameter estimates used to refine alert logic and power calculations for a planned part II evaluation.
Conclusions: Combining EMA-SHAPS with SMA-based trend visualizations may provide an interpretable way to summarize daily anhedonia signals and generate candidate early-warning indicators. This methodological pilot will deliver feasibility, usability, and preliminary analytic parameters needed to support a subsequent controlled evaluation of the platform’s clinical utility.
International Registered Report Identifier (IRRID): PRR1-10.2196/84024
doi:10.2196/84024
Introduction
Background
Major depressive disorder is a leading cause of disability worldwide, with enduring effects on social functioning, quality of life, and health care utilization []. Among its heterogeneous features, anhedonia—the diminished capacity to experience pleasure—is both a core diagnostic criterion and a prognostically salient dimension linked to poorer outcomes [,].
The Snaith–Hamilton Pleasure Scale (SHAPS) is one of the most widely used self-report instruments for assessing hedonic tone, with robust psychometric support across clinical and nonclinical samples [,]. Cross-cultural work has further demonstrated utility in Chinese-speaking contexts, facilitating screening and risk stratification []. Building on this literature, our prior work established group-level SHAPS cutoff points to differentiate clinical from nonclinical cases, enabling population-level identification of high-risk groups [].
However, thresholds optimized for between-person discrimination are not designed for dynamic, within-person monitoring. Traditional assessments also rely on retrospective recall, which is vulnerable to memory biases and insensitive to short-timescale fluctuations [-]. Ecological momentary assessment (EMA) addresses these limitations by repeatedly sampling experiences in naturalistic settings, enabling high-frequency, context-sensitive tracking of affect and behavior [-]. Converging evidence suggests that changes in momentary affect and variance can precede transitions in depressive states, motivating early-warning approaches that leverage intensive longitudinal data [-].
The analytic challenge is to convert dense EMA streams into signals that are both statistically principled and clinically interpretable. Simple moving averages (SMAs) provide an intuitive smoothing approach long established in time-series analysis [-] and are closely related to exponentially weighted moving averages used for sensitive change detection []. Short-window averages emphasize responsiveness to recent changes, whereas longer windows emphasize stability; jointly visualized, they can reveal trend direction, volatility, and potential regime shifts in a manner that is easily communicated to clinicians. In parallel domains, rule-based use of multiple moving averages has been shown to detect regime changes in complex noisy systems, underscoring the translational potential of this interpretable family of filters [,]. In public health surveillance, related smoothing and alignment procedures such as delay correction and multi‑indicator integration are routinely applied to stabilize noisy signals and improve interpretability [,].
At the delivery layer, smartphone-enabled platforms now make it feasible to deploy EMA at scale and integrate active reports with passive sensing, a paradigm often described as digital phenotyping [,]. For anhedonia monitoring, combining EMA-based SHAPS with SMA-based trend visualization may offer a pragmatic balance between statistical rigor and bedside interpretability: short/medium/long SMA windows (7/14/30 d) support multi-timescale inspection, while alert candidates (eg, fast–slow crossovers; deviations from individualized baselines) can be parameterized and evaluated within a transparent framework grounded in prior early-warning theory [-,].
To our knowledge, this protocol is the first to systematically integrate EMA-SHAPS with SMA-based trend/alert visualization in a cross-device platform and to prospectively evaluate feasibility, usability, and initial analytical validity. Part I (this study) emphasizes platform design, human-factors evaluation, and parameter estimation to finalize alert logic (see the table below for the feasibility of borrowing moving-average theory for emotional monitoring).
| Indicator/rule | How it works (moving average domain) | Mapping to psychiatry (EMAa‑SHAPSb) | Clinical interpretation and part I feasibility |
| Short‑window SMAc (7 d) | Emphasizes most recent observations, fast response to changes, higher sensitivity to noise | Smooths day‑to‑day SHAPS to expose short‑horizon fluctuations in hedonic tone | Early movement detector; may flag brief dips/spikes. Feasible with 30‑d diary; risk of false positives mitigated by corroboration with 14/30‑d windows |
| Medium‑window SMA (14 d) | Balances responsiveness and stability, dampens idiosyncratic daily noise | Tracks fortnight trends in anhedonia burden, complements 7‑d and 30‑d perspectives | Context anchor for “meaningful change.” Feasible and robust in 30‑d series; supports cross‑window triangulation |
| Long‑window SMA (30 d) | Maximal smoothing within part I; highlights macrotrend and regime shifts | Approximates a monthly hedonic baseline for each participant | Stability reference; fewer false alarms but slower to turn. Feasible by design (30‑d pilot). |
| Fast–slow crossover (7 over 30) | When short SMA crosses long SMA, indicates potential regime change in level or trend. | Up‑cross: deterioration momentum in anhedonia; down‑cross: improvement momentum | Operational early‑warning candidate; parameterized and logged without triggering actions in part I |
aEMA: ecological momentary assessment.
bSHAPS: Snaith–Hamilton Pleasure Scale.
cSMA: simple moving average.
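As a concrete illustration of the fast–slow crossover rule summarized in the table above, the following Python sketch computes trailing 7- and 30-day SMAs over a daily SHAPS series and logs the days on which the fast average crosses the slow one. Function and variable names are ours for illustration, not the platform's actual code; in part I such events would be logged, not acted on.

```python
# Illustrative sketch only: fast-slow moving-average crossover detection
# on a daily SHAPS series. Higher SHAPS totals indicate more anhedonia,
# so an up-cross is a candidate deterioration signal.

def sma(series, window):
    """Trailing simple moving average; None until a full window is available."""
    out = []
    for i in range(len(series)):
        if i + 1 < window:
            out.append(None)
        else:
            chunk = series[i + 1 - window : i + 1]
            out.append(sum(chunk) / window)
    return out

def crossovers(series, fast=7, slow=30):
    """Days on which the fast SMA crosses the slow SMA.
    'up' = fast rises above slow (deterioration momentum);
    'down' = fast falls below slow (improvement momentum)."""
    f, s = sma(series, fast), sma(series, slow)
    events = []
    for i in range(1, len(series)):
        if None in (f[i - 1], s[i - 1], f[i], s[i]):
            continue  # skip days where either window is not yet full
        if f[i - 1] <= s[i - 1] and f[i] > s[i]:
            events.append((i, "up"))
        elif f[i - 1] >= s[i - 1] and f[i] < s[i]:
            events.append((i, "down"))
    return events
```

For example, a series that is stable for 30 days and then jumps upward produces a single up-cross on the first day after the jump, because the 7-day average responds before the 30-day average.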
Objectives (Part I)
This part I study specifies development and preliminary validation objectives for a cross-device SHAPS platform with 7/14/30-day SMA trend and alert visualizations. The focus is on feasibility, usability, content validity, and exploratory analytic behavior of moving-average–based indicators, with the aim of informing a subsequent, adequately powered part II evaluation.
Primary Objective
The primary objective is to develop a cross-device platform that (1) collects daily EMA-based SHAPS responses for 30 days, (2) computes and visualizes SMA (7/14/30 d) trends, and (3) implements candidate alert logic, including fast–slow moving-average crossovers and deviations from individualized baselines.
Secondary Objectives
The secondary objectives are as follows:
- Feasibility and adherence: quantify completion/adherence rates, missingness patterns, and timing fidelity of entries.
- Usability and face validity: evaluate user experience (eg, System Usability Scale) and perceived interpretability/acceptability of trend and alert panels.
- Expert content validity: obtain expert ratings of content relevance/clarity for key interface elements and alert explanations (eg, content validity index).
- Analytical checks: provide descriptive summaries of internal consistency of EMA-SHAPS, within-person variability, and the behavior of 7/14/30-day SMA windows (eg, correlations between daily SHAPS and windowed averages, simple indicators of trend stability), recognizing that this pilot is not powered for robust inferential tests.
- Exploratory objectives: estimate plausible parameter ranges for alert logic (eg, crossover lengths, thresholds, and persistence rules) and describe how these candidate rules behave in this small pilot sample, with the sole purpose of refining design assumptions and power calculations for a planned part II evaluation against an individual-level SHAPS benchmark.
Methods
Study Design
This part I development and validation study follows a design–validate sequence (needs assessment, system design and implementation, and a 30-day pilot in clinical and nonclinical cohorts) to evaluate feasibility and validity. A month-by-month timeline of the design–validate sequence and key milestones is provided in the table below.
| Activities | August 2025 | September | October | November | December | January 2026 | February | March | April | May | June | July 2026 |
| Requirements analysis and system architecture design | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ||||||
| Frontend/backend development and core feature implementation | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ||||||
| System testing and performance optimization | ✓ | ✓ | ✓ | |||||||||
| Platform validation: face validity and user experience testing | ✓ | |||||||||||
| Platform validation: content validity | ✓ | |||||||||||
| Pilot: participant recruitment and screening | ✓ | ✓ | ✓ | |||||||||
| Pilot: 30-d EMAa-SHAPSb data collection | ✓ | ✓ | ✓ | |||||||||
| Pilot: user feedback synthesis | ✓ | ✓ | ✓ | |||||||||
| Results integration and report writing | ✓ | ✓ | ||||||||||
| Synthesis of development outputs/preparation for year 2 | ✓ | ✓ |
aEMA: ecological momentary assessment.
bSHAPS: Snaith–Hamilton Pleasure Scale.
Measurements
The following measurements will be performed.
Demographic Characteristics
The cross-device SHAPS platform will collect demographic data on gender, age, educational attainment, employment status, marital status, region of residence, and socioeconomic status in both groups.
SHAPS
The cross-device SHAPS platform will measure anhedonia by using SHAPS, a 14-item self-report measure that captures recent hedonic experience across 4 domains: interests/pastimes, social interaction, sensory experience, and food/drink []. Multiple language versions are available, including simplified and traditional Chinese. The simplified Chinese version uses a 4-point Likert response format from “definitely agree” to “definitely disagree,” yielding total scores from 14 to 56, with reported test-retest reliability of 0.64 and Cronbach α of 0.85 []. The traditional Chinese version has demonstrated good readability and ease of response in pilot testing (n=13), indicating favorable face validity, and has shown test–retest reliability of 0.87 with Cronbach α of 0.91. Convergent and discriminant validity are supported by positive correlations with Patient Health Questionnaire-9 items (r=0.52) and the Positive and Negative Suicide Ideation–Negative Suicidal Ideation subscale (r=0.28), and negative correlations with self-esteem (r=−0.51) and the Positive and Negative Suicide Ideation–Positive Ideation subscale (r=−0.54). Collectively, these findings indicate that the Chinese versions of the SHAPS are reliable and valid [].
Study Population and Setting
Participants will be recruited from two settings: (1) an outpatient psychiatry clinic at the collaborating medical center (clinical cohort) and (2) a university campus (nonclinical cohort). The clinical cohort will consist of adults aged 18-65 years with a treating psychiatrist–assigned ICD-10 (International Classification of Diseases, 10th Revision) mood disorder diagnosis in the F32-F39 range documented in the medical record (eg, depressive episode [F32.-], recurrent depressive disorder [F33.-], other persistent mood disorders [F34.-, F34.9], and unspecified mood disorder [F39.-]). The nonclinical cohort will consist of university students aged 18-23 years recruited from campus settings as an age-restricted comparison group. After baseline assessment, all participants will be invited to complete 30 consecutive days of daily EMA-SHAPS entries by using the cross-device platform.
Eligibility Criteria
Inclusion Criteria
Inclusion criteria (both cohorts) are as follows: adults aged ≥18 years with sufficient capacity to provide informed consent, ability to communicate in Chinese, regular access to a smartphone or web-enabled device, and willingness to spend approximately 15-20 minutes completing the baseline questionnaire battery. For the clinical cohort, participants must be outpatients aged 18-65 years at the collaborating psychiatry clinic with a treating psychiatrist–assigned ICD-10 mood disorder diagnosis in the F32-F39 range documented in the medical record. For the nonclinical cohort, participants must be university students aged 18-23 years recruited from campus settings.
Exclusion Criteria
The exclusion criteria are as follows: other severe mental disorders such as schizophrenia-spectrum disorders or bipolar disorder or neurological/medical conditions that substantially impair cognitive function; acute high suicide risk at screening, as judged by the treating clinician; and other medical or functional reasons that would prevent the participant from using the digital platform or from sustaining repeated questionnaire completion. These exclusion criteria apply to both cohorts and are intended to ensure participant safety and the feasibility of the 30-day EMA protocol.
Measures and Data Collection
The following measures will be used:
- SHAPS with EMA prompts (14 items, 4‑point responses) at baseline and scheduled intervals via the app.
- Usability/face‑validity questionnaires; expert content validity indexing for platform features and visualizations.
For the clinical cohort, primary psychiatric diagnosis, comorbid conditions, and concurrent treatments (eg, psychotropic medications, psychotherapy) will be extracted from the medical record at baseline. For the nonclinical cohort, participants will complete a brief self-report form on past and current mental health treatment. These variables will be summarized descriptively to characterize the sample and to explore whether anhedonia trajectories differ by diagnostic profile or treatment exposure; no stratified hypothesis tests are planned in this pilot.
Feasibility outcomes will focus on adherence, timing fidelity, and patterns of missing data. For each participant, adherence will be defined as the proportion of completed daily EMA-SHAPS entries out of the 30 scheduled days. A priori, we consider adherence of ≥70% (≥21 of 30 d completed) and the absence of any run of ≥5 consecutive missing days as an acceptable benchmark for individual-level feasibility. Timing fidelity will be defined as the proportion of completed entries submitted on the same calendar day as the scheduled prompt, with ≥80% same-day completion regarded as an acceptable benchmark. We will summarize the distribution of these metrics and report the proportion of participants who meet the adherence and timing thresholds in the clinical and nonclinical cohorts separately.
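A minimal sketch of how the prespecified benchmarks above (≥70% adherence, no run of ≥5 consecutive missing days, ≥80% same-day completion) could be checked for one participant, assuming per-day completion and same-day-submission flags are available. Names are illustrative, not the platform's actual implementation.

```python
# Hypothetical check of the prespecified feasibility benchmarks for one
# participant's 30-day diary. completed[d] is True if day d has a valid
# entry; same_day[d] is True if that entry was submitted on the scheduled
# calendar day.

def longest_missing_run(completed):
    """Length of the longest run of consecutive missing days."""
    run = best = 0
    for done in completed:
        run = 0 if done else run + 1
        best = max(best, run)
    return best

def meets_feasibility(completed, same_day):
    n_done = sum(completed)
    adherence = n_done / len(completed)               # benchmark: >= 0.70 (>= 21/30 d)
    no_long_gap = longest_missing_run(completed) < 5  # no run of >= 5 missing days
    on_time = [s for c, s in zip(completed, same_day) if c]
    timing = sum(on_time) / len(on_time) if on_time else 0.0  # benchmark: >= 0.80
    return adherence >= 0.70 and no_long_gap and timing >= 0.80
```

In the planned analysis these three metrics would be summarized per cohort; the combined pass/fail shown here is only one convenient way to report the individual-level benchmark.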
Digital Platform and Moving Average Indicators
The digital platform will have the following features.
- Cross‑device capture, secure backend, dashboards for participants and clinicians
- Visualization of SMA (7/14/30 d); candidate alert logic (fast–slow moving average crossover; deviation from individualized baselines)
- Data quality monitoring for missingness and compliance; encryption for data in transit and at rest
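The "deviation from individualized baseline" alert candidate listed above can be illustrated as follows. The 14-day baseline window and the 1.5-SD threshold are placeholder parameters chosen for this sketch; estimating plausible values for such parameters is exactly what part I is designed to do.

```python
# Hypothetical illustration of the deviation-from-individualized-baseline
# alert candidate. Baseline length and SD multiplier are placeholders to
# be estimated in part I, not study settings.

from statistics import mean, stdev

def baseline_deviation_days(series, baseline_days=14, sd_mult=1.5):
    """Indices of days whose score exceeds the participant's own baseline
    mean by more than sd_mult baseline standard deviations. Only upward
    deviations are flagged, since higher SHAPS totals = more anhedonia."""
    base = series[:baseline_days]
    mu, sd = mean(base), stdev(base)
    return [i for i, x in enumerate(series[baseline_days:], start=baseline_days)
            if x > mu + sd_mult * sd]
```

A participant whose scores hover around 20-21 for 2 weeks and then rise to 30 would have each post-rise day flagged; in part I such flags would be logged for descriptive analysis rather than triggering clinical actions.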
An example of the end-to-end user flow and the trend/alert panel are shown in the accompanying figures.


Sample Size and Statistical Analysis
As a development and validation pilot, part I is designed primarily to provide feasibility, usability, and descriptive analytic information rather than to support formal hypothesis testing. We will recruit approximately 24 participants (≈12 per cohort), consistent with a widely cited “12-per-group” rule of thumb for pilot and feasibility studies that aim to estimate variability rather than detect small effects []. All primary analyses will be descriptive, reporting means, standard deviations, medians, interquartile ranges, and 95% CIs, where informative.
For each participant and each time window, 7-, 14-, and 30-day SMAs of EMA-SHAPS scores will be computed only when at least 50% of days in the window contain valid entries (ie, ≥4 of 7, ≥7 of 14, and ≥15 of 30 d, respectively). Windows that do not meet this criterion will be flagged as data sparse and excluded from primary SMA summaries. We will distinguish intermittent missingness (isolated missing days or gaps of <5 consecutive days) from systematic missingness (gaps of ≥5 consecutive missing days or early dropout). Primary analyses will not impute missing values; instead, we will conduct sensitivity analyses that (1) compare SMA summaries with and without windows spanning systematic missingness and (2) examine how adherence levels relate to the stability and interpretability of SMA-based indicators. Internal consistency of EMA-SHAPS will be summarized using Cronbach α, and exploratory correlations between daily scores and SMA windows will be reported with appropriate caution, given the limited sample size.
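The windowing rule above can be sketched in a few lines: a trailing-window average is computed only when the window is at full length and at least 50% of its days contain valid entries (≥4 of 7, ≥7 of 14, ≥15 of 30); sparser windows are flagged rather than averaged. Names are illustrative, not the platform's actual code.

```python
# Sketch of the prespecified SMA windowing rule. `daily` is one
# participant's 30-day series of SHAPS totals, with None marking missing
# days; windows below the valid-day minimum are flagged as data sparse.

MIN_VALID = {7: 4, 14: 7, 30: 15}

def windowed_sma(daily, window):
    out = []
    for i in range(len(daily)):
        # Empty chunk until the trailing window reaches full length.
        chunk = daily[i + 1 - window : i + 1] if i + 1 >= window else []
        valid = [x for x in chunk if x is not None]
        if len(valid) < MIN_VALID[window]:
            out.append(None)  # window not yet full, or flagged as data sparse
        else:
            out.append(sum(valid) / len(valid))  # mean over valid days only
    return out
```

Averaging over valid days only (rather than imputing) matches the stated plan of not imputing missing values in primary analyses; sensitivity analyses would then compare summaries with and without windows spanning systematic missingness.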
Ethical Considerations
This study protocol was approved by the Institutional Review Board of Tsaotun Psychiatric Center, Ministry of Health and Welfare, Taiwan (approval 114006). All participants will provide written informed consent before any study procedures. During consent, participants are informed that participation is voluntary, that they may decline to answer any question, and that they may withdraw from the study at any time without any consequences for their clinical care or academic standing. The collected data will be pseudonymized so that no personally identifiable information is included, and only the research team will have access to the analytic data set. All data will be stored securely on password-protected servers for at least 7 years after study completion. As a token of appreciation, participants in each cohort will receive a gift voucher valued at NT $100 (approximately US $3) upon completion of the 30-day EMA-SHAPS assessments and end-of-pilot questionnaires.
Results
Recruitment for the year-1 feasibility pilot will begin following institutional review board approval (approval date: June 10, 2025) and completion of platform qualification, with data collection scheduled from March 2026 through May 2026. No data have yet been collected or analyzed. Enrollment will proceed in parallel at the clinical and nonclinical sites until the predefined target sample is reached. Because each participant contributes a 30-day EMA-SHAPS series, the last participant is expected to complete end-of-pilot assessments approximately 30 days after enrollment closure. Data analyses will then be performed and reported. A month-by-month timeline of the planned milestones is provided in the timeline table in the Study Design section.
In this year-1 feasibility pilot, we also expect to produce a working cross-device prototype that delivers daily EMA-SHAPS assessments and visualizes 7-, 14-, and 30-day SMA trends at the individual level. We will obtain expert-rated content validity indices for key interface elements and alert explanations, together with participant-reported usability and interpretability metrics (eg, System Usability Scale scores). In addition, we will generate feasibility metrics for EMA completion, patterns of missingness, and timing fidelity, as well as a descriptive characterization of moving average–based indicators (eg, crossovers, deviations from individualized baselines, and gaps in data coverage) and preliminary parameter ranges that can be carried forward to refine the design and powering of part II.
Discussion
Principal Anticipated Findings
Year 1 is expected to deliver a functioning cross-device platform that supports daily EMA-SHAPS data collection and visualizes 7/14/30-day SMA trends at the individual level. If recruitment and adherence targets are met, the pilot will provide (1) feasibility metrics for daily anhedonia monitoring across clinical and nonclinical cohorts, (2) usability and perceived interpretability indices for the user interface and trend/alert panels, (3) expert content-validity ratings for key visual elements and explanations, and (4) preliminary descriptive information on the behavior of SMA-based indicators under real-world missingness patterns. These outputs are intended to inform refinement of platform features and alert parameters and to guide the design and powering of a subsequent controlled evaluation rather than to establish clinical efficacy on their own.
Comparison With Prior Work
Existing digital mental health platforms increasingly use smartphones and wearables to capture intensive longitudinal symptom data and ecological context [-,,]. However, most current anhedonia assessments still rely on cross-sectional questionnaires or infrequent follow-ups and do not systematically integrate early-warning indicators derived from time-series analysis or moving average theory [-]. By focusing on EMA-SHAPS and SMAs, this protocol adapts tools that have been widely used in engineering and epidemiologic surveillance [-,-] to a clinically meaningful affective dimension. Rather than proposing complex forecasting models, we prioritize transparent, user-friendly visualizations (eg, short/long SMA crossovers, deviations from individualized baselines) that can be explained to patients and clinicians and empirically calibrated in later phases.
For anhedonia specifically, prior work has validated SHAPS cutoff scores for between-person discrimination [-], but there is little guidance on how within-person fluctuations should be summarized or visualized for ongoing self-management. This pilot responds to that gap by testing whether an SMA-based dashboard is acceptable and interpretable to both end users and experts while keeping analytic choices simple enough to be incorporated into routine care if later evaluations support benefit.
Strengths and Limitations
The proposed study has several strengths. It uses a cross-device platform that mirrors real-world technology use, includes both clinical and nonclinical cohorts to sample a range of anhedonia severity and treatment contexts, and prespecifies adherence thresholds and missing-data rules for computing SMA indicators. The protocol also combines quantitative feasibility metrics with expert content-validity ratings, providing converging information about whether the platform is usable and conceptually sound.
At the same time, key limitations reflect the pilot nature of part I. The planned sample size (~24 participants) is sufficient for feasibility and usability assessment but underpowered for robust inferential tests of trend stability, variance components, or alert performance. All analytic checks of SMA behavior must therefore be interpreted as preliminary and descriptive. In addition, recruitment from a single outpatient clinic and 1 university limits generalizability, and we will not examine clinical outcome changes or treatment decisions in this phase. Anticipated practical challenges include sustaining EMA engagement over 30 days, handling missing and noisy data, managing cross-device heterogeneity (eg, operating systems, screen sizes), and calibrating individualized alert thresholds without overtriggering. To partially mitigate these challenges, we prespecify adherence and timing benchmarks, implement in-app reminders and data-quality monitoring, standardize onboarding and support procedures, and treat all alert parameters as modifiable candidates for later evaluation rather than fixed clinical rules.
Future Directions
If the pilot demonstrates acceptable feasibility, usability, and content validity, subsequent work (part II) will focus on evaluating the analytic and clinical performance of moving average–based indicators in larger and more diverse samples. This may include testing whether specific SMA-derived features (eg, crossing events, sustained deviations from baseline) predict movement toward or away from validated SHAPS cutoff points or broader depressive symptom outcomes, and whether providing dashboard feedback to patients and clinicians improves shared understanding of anhedonia trajectories. Over time, the platform could be extended to incorporate additional EMA items, passive sensor data, or adaptive sampling schedules, provided that the resulting complexity remains interpretable for end users.
Dissemination Plan
In line with the emphasis on transparent and reusable digital health research, we plan to disseminate findings from this methodological pilot through peer-reviewed publications and conference presentations in psychiatry, nursing, and digital mental health. Subject to institutional and funder policies, we will also consider sharing nonidentifiable platform specifications, example pseudonymized dashboards, and analytic codes for SMA-based indicators via institutional repositories or open-source platforms. These dissemination activities aim to support replication, integration into related research programs, and eventual translation into routine mental health care if later evaluations demonstrate clinical utility.
Conclusion
Combining EMA-SHAPS with SMA-based trend visualizations may provide an interpretable way to summarize daily anhedonia signals and to generate candidate early-warning indicators for clinical and self-management use. This year-1 methodological pilot is not intended to establish clinical efficacy; rather, it will deliver feasibility, usability, content-validity evidence, and preliminary analytic parameters for 7/14/30-day SMA-based indicators. These outputs will be used to refine platform features and alert logic and to support the design and powering of a subsequent controlled evaluation of the platform’s clinical utility.
Funding
This work was supported by the National Science and Technology Council, Taiwan (grant 114-2314-B-468-006). The funder had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Conflicts of Interest
None declared.
Multimedia Appendix 1
Peer-review report from the National Science and Technology Council (NSTC), Taiwan.
PDF File (Adobe PDF File), 253 KB
References
- Depressive disorder (depression). World Health Organization. Aug 29, 2025. URL: https://www.who.int/news-room/fact-sheets/detail/depression [accessed 2025-12-15]
- Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition, Text Revision (DSM-5-TR). Washington, DC. American Psychiatric Association Publishing; Mar 18, 2022.
- Pizzagalli DA. Depression, stress, and anhedonia: toward a synthesis and integrated model. Annu Rev Clin Psychol. 2014;10:393-423. [FREE Full text] [CrossRef] [Medline]
- Snaith RP, Hamilton M, Morley S, Humayan A, Hargreaves D, Trigwell P. A scale for the assessment of hedonic tone the Snaith-Hamilton Pleasure Scale. Br J Psychiatry. Jul 1995;167(1):99-103. [CrossRef] [Medline]
- Franken IHA, Rassin E, Muris P. The assessment of anhedonia in clinical and non-clinical populations: further validation of the Snaith-Hamilton Pleasure Scale (SHAPS). J Affect Disord. Apr 2007;99(1-3):83-89. [CrossRef] [Medline]
- Liu W, Wang L, Zhu Y, Li M, Chan RCK. Clinical utility of the Snaith-Hamilton-Pleasure scale in the Chinese settings. BMC Psychiatry. Oct 31, 2012;12:184. [FREE Full text] [CrossRef] [Medline]
- Ho Y, Gau SS, Wu Y, Chen C, Wang J, Lee H, et al. Determining cut-off values and predictors for the Snaith-Hamilton Pleasure Scale: comparison between clinical and school settings. BJPsych Open. May 09, 2024;10(3):e106. [FREE Full text] [CrossRef] [Medline]
- Stone A, Shiffman S, Schwartz J, Broderick J, Hufford M. Patient non-compliance with paper diaries. BMJ. May 18, 2002;324(7347):1193-1194. [FREE Full text] [CrossRef] [Medline]
- Shiffman S, Stone AA, Hufford MR. Ecological momentary assessment. Annu Rev Clin Psychol. 2008;4:1-32. [CrossRef] [Medline]
- Trull TJ, Ebner-Priemer U. Ambulatory assessment. Annu Rev Clin Psychol. 2013;9:151-176. [FREE Full text] [CrossRef] [Medline]
- Myin-Germeys I, Kasanova Z, Vaessen T, Vachon H, Kirtley O, Viechtbauer W, et al. Experience sampling methodology in mental health research: new insights and technical developments. World Psychiatry. Jun 2018;17(2):123-132. [FREE Full text] [CrossRef] [Medline]
- van de Leemput IA, Wichers M, Cramer AOJ, Borsboom D, Tuerlinckx F, Kuppens P, et al. Critical slowing down as early warning for the onset and termination of depression. Proc Natl Acad Sci U S A. Jan 07, 2014;111(1):87-92. [FREE Full text] [CrossRef] [Medline]
- Curtiss JE, Mischoulon D, Fisher LB, Cusin C, Fedor S, Picard RW, et al. Rising early warning signals in affect associated with future changes in depression: a dynamical systems approach. Psychol Med. Dec 23, 2021;53(7):3124-3132. [CrossRef]
- Helmich MA, Smit AC, Bringmann LF, Schreuder MJ, Oldehinkel AJ, Wichers M, et al. Detecting impending symptom transitions using early-warning signals in individuals receiving treatment for depression. Clinical Psychological Science. Dec 01, 2022;11(6):994-1010. [CrossRef]
- Oppenheim A, Schafer R. Discrete-Time Signal Processing. 3rd edition. Upper Saddle River, NJ. Pearson Education Inc; 2010.
- Box GEP, Jenkins GM, Reinsel GC, Ljung GM. Time Series Analysis: Forecasting and Control. Hoboken, New Jersey, USA. Wiley; Jun 29, 2015.
- Hyndman RJ, Athanasopoulos G. Forecasting: Principles and Practice 3rd ed. Melbourne, Australia. OTexts; 2021.
- Lucas JM, Saccucci MS. Exponentially weighted moving average control schemes: properties and enhancements. Technometrics. Feb 1990;32(1):1. [CrossRef]
- Brock W, Lakonishok J, LeBaron B. Simple technical trading rules and the stochastic properties of stock returns. J Finance. 1992;47(5):1731-1764. [CrossRef]
- Lo A, Mamaysky H, Wang J. Foundations of technical analysis: computational algorithms, statistical inference, and empirical implementation. J Finance. 2000;55(4):1705-1765. [FREE Full text] [CrossRef]
- Buckingham-Jeffery E, Morbey R, House T, Elliot AJ, Harcourt S, Smith GE. Correcting for day of the week and public holiday effects: improving a national daily syndromic surveillance service for detecting public health threats. BMC Public Health. May 19, 2017;17(1):477. [FREE Full text] [CrossRef] [Medline]
- Scobie HM, Panaggio M, Binder AM, Gallagher ME, Duck WM, Graff P, et al. Correlations and timeliness of COVID-19 surveillance data sources and indicators - United States, October 1, 2020-March 22, 2023. MMWR Morb Mortal Wkly Rep. May 12, 2023;72(19):529-535. [FREE Full text] [CrossRef] [Medline]
- Onnela J, Rauch SL. Harnessing smartphone-based digital phenotyping to enhance behavioral and mental health. Neuropsychopharmacology. Jun 2016;41(7):1691-1696. [FREE Full text] [CrossRef] [Medline]
- Torous J, Kiang MV, Lorme J, Onnela J. New tools for new research in psychiatry: a scalable and customizable platform to empower data driven smartphone research. JMIR Ment Health. May 05, 2016;3(2):e16. [FREE Full text] [CrossRef] [Medline]
- Ho YC, Wu YS, Chen CH, Wang JK, Lee HC, Chung KH. The psychometric properties of the Chinese version of the Snaith-Hamilton Pleasure Scale in Taiwan [thesis] (forthcoming). Taiwan. National Yang Ming Chiao Tung University, Taipei Medical University; 2022.
- Julious S. Sample size of 12 per group rule of thumb for a pilot study. Pharmaceutical Statistics. Nov 24, 2005;4(4):287-291. [FREE Full text] [CrossRef]
Abbreviations
| EMA: ecological momentary assessment |
| ICD-10: International Classification of Diseases, 10th Revision |
| SHAPS: Snaith–Hamilton Pleasure Scale |
| SMA: simple moving average |
Edited by J Sarvestan. The proposal for this study was peer reviewed by the National Science and Technology Council (NSTC), Taiwan; see Multimedia Appendix 1 for the peer-review report. Submitted 12.Sep.2025; accepted 05.Dec.2025; published 18.Dec.2025.
Copyright©Yen-Chung Ho, Chia-Ling Fang, Yung-Chieh Ching, Hsiu-Ju Chang, Jiun-Yi Wang, Charles CN Wang. Originally published in JMIR Research Protocols (https://www.researchprotocols.org), 18.Dec.2025.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Research Protocols, is properly cited. The complete bibliographic information, a link to the original publication on https://www.researchprotocols.org, as well as this copyright and license information must be included.

