Abstract
Powered by smartphones and wearable devices, the Experience Sampling Method (ESM) has increased in popularity for studying behaviors, thoughts, and experiences over time and in situ. Participants in ESM studies receive several notifications a day to self-report but often disengage due to intrusive and poorly timed notifications. Consequently, the response rate drops over time, hampering data collection and degrading ecological validity. Researchers have experimented with various strategies to optimize notification scheduling, including personalization, context sensing, and machine learning (ML). Edge computing can facilitate the training of ML models without the need for server communications, which is especially convenient for in-the-wild studies with unreliable network connectivity. Complementary logical evaluations on edge devices can minimize participant burden by accounting for sampling density, i.e., ensuring a minimum number of well-distributed daily notifications. However, these efforts raise engineering and scientific challenges related to avoiding cold start and training models on smartwatches. To overcome these challenges, we propose an open-source architecture and software that facilitates online learning to optimize notification delivery. Our feasibility study with \(N=37\) participants resulted in an average response rate 10.2% higher and an average reaction time 9.6% lower compared to classical interval-based sampling.
1 Introduction
The Experience Sampling Method (ESM) is a popular approach for studying behaviors, thoughts, and feelings in real-world settings, offering higher ecological validity compared to traditional retrospective self-report methods [8, 17]. ESM involves collecting self-report data multiple times throughout the day via notifications sent to participants’ devices, typically smartphones or smartwatches. The effectiveness of ESM studies is hindered by response rates that decrease over time, leading to the dropout of participants [11, 39]. This decline is often attributed to notifications that are intrusive and poorly timed [11].
Adapting the timing of ESM notifications to participant response behavior has been shown to mitigate these issues and increase response rates [4, 29]. Intelligent sensor-driven machine learning (ML) models can further improve the timeliness of notifications by leveraging contextual information from wearable devices such as smartwatches [13, 32]. Wearable ESM (wESM) systems, which utilize sensors in smartwatches to enhance context awareness of the ESM sampling strategy, can potentially improve the ecological validity of ESM studies [22].
Using sensor data to trigger notifications may not be sufficient, as it does not capture nuanced aspects of participants’ context [24, 25]. Personalizing the timing of notifications based on user preferences [9] and predictive modeling has shown promise in addressing this limitation [1, 21, 24, 41]. Unfortunately, such personalization could incur sampling bias, as responses may cluster around specific events or times of day that are convenient to the user [25]. This phenomenon is detrimental to the ecological validity of the ESM, which aims to sample a variety of contexts and situations (not necessarily the times most convenient to the respondent) so that self-reporting is done close to the occurrence of events of interest as they unfold in natural situations of daily life [16].
To address these challenges and optimize the timing of the notification, we investigated using supervised online learning [15] on smartwatches. Our approach involves training a general model of the response behavior of the ESM participant (whether and how they respond to the prompts) on population-level data and transferring it to edge devices used for the ESM upon initialization. Subsequently, the model is personalized locally based on user interactions with notifications and moments of self-reporting. To improve ecological validity, our aim is to ensure sampling density, i.e., a minimum number of well-spread notifications per day. ML-based personalization occurs within predefined time windows, and if a window closes without a notification, one is sent to the participant regardless. This ensures that a minimum number of notifications are issued, respecting a designated inter-notification interval. This constraint is specifically important in the context of ESM, where researchers require a minimum number of notifications spread throughout the day, increasing the ecological validity of data collection.
In this paper, we present an architecture and an algorithm of our open-source software implementation to facilitate online learning with complementary windowing logic to optimize notification scheduling in ESM studies. We present the preliminary findings of a pilot ESM study, which demonstrated higher response rates and shorter reaction times compared to traditional interval-based sampling methods. In addition, we provide guidelines for employing reinforcement learning techniques that prioritize both ecological validity and timely notification delivery.
The remainder of this paper is organized as follows. Section 2 reviews related work on ESM and wearable technology. Section 3 outlines the methods used in our study. Section 4 presents the results of our pilot ESM study. In Sect. 5, we discuss the implications of our findings and future research directions. Finally, Sect. 6 concludes the paper.
2 Related Work
Mobile notifications are crucial in engaging users with timely and relevant information, influencing their response rates and overall engagement. Following advances in ML techniques, researchers have explored personalized notification delivery strategies to optimize response rates and enhance user engagement in ESM studies. This section reviews earlier work on the effectiveness of ML-based personalization, the impact of notification timing on response rates and engagement, and the use of ML to optimize notification delivery.
Research suggests that ML-based personalization improves the effectiveness of notification delivery by tailoring it to individual user preferences and contexts. Iqbal et al. demonstrated that scheduling notifications at break points based on content relevance can reduce frustration and reaction time [19]. Muralidharan et al. showcased successful ML implementations on LinkedIn, optimizing notification timing, frequency, and channel selection to encourage long-term user engagement [33]. These findings collectively support the idea that the application of ML in personalization enhances user experiences and responses to notifications.
The timing of mobile notifications influences response rates and user engagement. Avraham Bahir et al. observed that visually enhanced notifications and those sent during specific times of the day yield higher response rates [1]. Morrison et al. found that frequent notifications tailored to the user context increase exposure to intervention content without deterring engagement [32]. Balebako et al. emphasized that timing nuances, such as displaying privacy notices during app use, can impact recall rates [3]. However, Bidargaddi et al. [6] and Pham et al. [35] report mixed results, suggesting the need for further exploration of effective timing and frequency strategies.
Personalizing the timing of notifications has shown promise in enhancing user engagement, although results are mixed. Avraham Bahir et al. highlighted the effectiveness of contextually tailored messages, especially on weekends and at mid-day [1]. Okoshi et al. demonstrated that a delay in delivering notifications until the user may be interrupted can increase engagement [34]. Khanshan et al. found a significant difference in the response rate between study groups that received notifications during different levels of physical activity, indicating the importance of context sensitivity [24]. However, Morrison et al. cautioned that adaptive tailoring of timing does not consistently enhance response rates [32]. These works underscore the importance of considering individual preferences and contexts to optimize engagement.
Recent studies have explored how ML can help personalize notification timing, providing information on effective strategies. Gonul et al. proposed a reinforcement learning-based algorithm considering momentary context data for optimized notification delivery [10]. Poppinga et al. developed a model predicting opportune notification moments based on mobile context data [36]. Li et al. demonstrated that personalized ML models using user actions significantly improve prediction performance [27]. A range of ML features, including reinforcement learning (e.g., [42]), preference learning (e.g., [30, 31]), and time-aware recommendation models (e.g., [43]), contribute to the personalization of mobile notification timing.
In the smartwatch domain, reinforcement learning and deep learning techniques have been used to optimize notification delivery. Ho et al. utilized reinforcement learning to identify optimal notification timing, enhancing response rates using smartphones and wristbands [14]. Bhattacharya et al. applied deep learning to activity recognition on smartwatches, achieving superior performance with acceptable resource consumption [5]. Lee et al. proposed an intelligent notification delivery system leveraging deep learning to predict important notifications, reducing user distraction [26]. Lutze et al. focused on reinforcement learning for dialogue design and control in health-oriented smartwatch apps, determining appropriate intervention times [28]. These studies underscore the potential of ML techniques in improving user experiences and functionality in smartwatches.
The Context-Aware Experience Sampling was first introduced by Intille et al. with a tool that allowed researchers to acquire information by focusing on moments and activities of interest based on sensor-based triggers [18, 37]. Bachmann et al. demonstrated how to mitigate compliance challenges by sending event-based notifications only in situations of relevance and skipping ESM questions where assessment was possible by sensor reading [2]. Seo et al. developed an experience sampling system for context-aware mobile application development [38]. Context-Aware Experience Sampling leverages sensor technologies to increase data quality and mitigate the challenges regarding participant engagement, calling for the use of ML for optimizing the sampling strategy and exploiting emerging wearables such as smartwatches for pervasive sensor recording [2, 37].
The related work outlined above shows the potential of ML-based personalization, the impact of notification timing on response rates and engagement, the utilization of ML techniques to optimize notification delivery across mobile devices and smartwatches, and Context-Aware Experience Sampling. However, in the context of ESM, context-awareness and personalization need to account for sampling bias and preserve ecological validity while balancing intrusiveness. This is a challenge that has not been sufficiently addressed in prior work and is therefore the focus of this paper.
3 Method
Based on the premise that personalized notification delivery can produce greater engagement (i.e., higher response rate, shorter reaction time, and longer participation), we explore the design, implementation, and evaluation of a personalized ESM notification system, following edge computing practices. The process begins with an ESM study design. Then we propose a notification delivery process that satisfies a researcher-specified minimum number of notifications spread over the study days to conduct an ecologically valid ESM experiment. Subsequently, detailed data collection procedures capture user interactions and preferences, forming the foundation for ML model development. The training process employs carefully selected algorithms and metrics to refine the ability of the model to predict opportune moments of prompting. For transparency and replicability, the general trained model is open access. We then detail the personalization process, where contextual and user-specific data are leveraged to tailor notification timing for individuals. Finally, a model evaluation assesses the efficacy of the system in enhancing user engagement, utilizing quantitative measures. Through this approach, we strive to contribute to the advancement of personalized notification in ESM by ensuring data validity, fostering transparency, and evaluating both general and individual performance.
Fig. 1.
Communication flow diagram illustrating the transfer of the model from the server to the smartwatches. A global model is trained centrally, and a copy of it is sent to the smartwatches, where they are trained further locally (different colors/local model numbers indicate that training on a device is independent of other models on other devices).
3.1 Participants

To test our hypothesis, we conducted a quasi-experiment in a field context, following a between-subjects design. We recruited \(N=53\) participants through convenience sampling among students and staff of the Eindhoven University of Technology for a 2-month experiment. However, only \(N=37\) participated; the rest dropped out. The study received ethical approval from the Ethics Review Board of Eindhoven University of Technology with reference number ERB2023ID457.
3.2 Procedure
We sent notifications to two groups: a) ALL: participants who received notifications in all contexts, based solely on the inter-notification time; and b) ML: participants who received notifications based on the inference of the ML model. The participants were not told which group they were in; therefore, they were not aware whether they received notifications based on intervals or on ML inference. This was done to minimize potential biases related to their awareness of the experimental condition. The inter-notification time was set to 1 hour and 45 minutes, and the participants were asked to self-report upon receiving each notification. The questionnaire included questions related to understanding the user’s context and state of being (e.g., How happy are you right now?, and Are you currently physically active?). The analysis of the recorded responses lies outside the scope of this paper, as we solely focus on the interaction with the notification and the act of responding rather than the responses themselves.
3.3 Machine Learning
Data Collection. The system (Fig. 1) uses data extracted from our previous ESM studies for training. The dataset contains fine-grained user interaction and sensor data, including user notification response behavior and physical activity. The inclusion of these features is based on the findings of previous research that suggest that the level of physical activity and the time of day can significantly influence the response rate with a high effect size [24]. The features related to the delivery of notifications and the response times are incorporated based on psychological theories related to memory accessibility and motivation [23]. Our feature set includes: (1) the number of received notifications in 15-minute intervals; (2) movement speed in km/h, as measured by the smartwatch accelerometer; (3) the previous reaction to a notification (0 for inopportune, 1 for opportune); (4) current physical activity type (not moving, walking, running, or unknown, as detected by the smartwatch pedometer); (5) day type (working day or weekend); and (6) time of day (morning, afternoon, evening, or night). Categorical variables are encoded as one-hot vectors.
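As a rough illustration, the six features above could be assembled into a single input vector as follows. This is a minimal sketch under our own assumptions; the function names, category orderings, and encodings are hypothetical and may differ from the actual implementation:

```python
import numpy as np

# Hypothetical category orderings (assumptions, not taken from the paper's code).
ACTIVITY_TYPES = ["not_moving", "walking", "running", "unknown"]
TIMES_OF_DAY = ["morning", "afternoon", "evening", "night"]

def one_hot(value, categories):
    """Encode a categorical value as a one-hot vector."""
    vec = np.zeros(len(categories))
    vec[categories.index(value)] = 1.0
    return vec

def build_feature_vector(notifications_15min, speed_kmh, prev_opportune,
                         activity, is_weekend, time_of_day):
    """Combine numeric, binary, and one-hot features into one input vector."""
    return np.concatenate([
        [notifications_15min],              # (1) notifications in 15-min interval
        [speed_kmh],                        # (2) movement speed in km/h
        [prev_opportune],                   # (3) previous reaction: 0 or 1
        one_hot(activity, ACTIVITY_TYPES),  # (4) physical activity type
        [1.0 if is_weekend else 0.0],       # (5) day type
        one_hot(time_of_day, TIMES_OF_DAY), # (6) time of day
    ])

x = build_feature_vector(2, 4.5, 1, "walking", False, "afternoon")
# 1 + 1 + 1 + 4 + 1 + 4 = 12 dimensions
```

With these (assumed) category sets, the resulting vector has 12 dimensions.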
Training Process. The input of the ML model comprises a 2-dimensional tensor with input features as described in Sect. 3.3. The model output represents a binary classification, where 0 indicates inopportune moments for delivering notifications, and 1 indicates opportune moments. As illustrated in Fig. 1, we trained a general model with data from our previous ESM studies that predicts opportune moments (global model trained centrally). The model is then transferred to the smartwatches. Each model is updated as the user interacts with the device and the notifications (the local model trained on the edge is trained separately for each individual). Fig. 1 provides an overview of the communication flow of our proposed architecture. The details of the training process are as follows:
Data Processing: Features are formed by combining and analyzing the user response behavior data from our previous ESM studies. The data is split into training and testing sets (90% and 10% respectively) and is stratified to ensure a balanced class distribution in both sets.
Model Architecture: The model is a sequential neural network with three hidden layers having 128, 64, and 32 neurons, each using Rectified Linear Unit (ReLU) activation. The output layer has one neuron with a sigmoid activation, typical for binary classifications. We chose a neural network based on its ability to model complex, non-linear relationships in data [7], which we believed was crucial for personalized notification timing based on user behavior data.
Class Weights: Class weights are calculated to address class imbalance [12]. The weights are based on the ratio of total samples to the number of samples for each class, giving more weight (importance) to the minority class during training.
Model Compilation: The model is compiled using the Adam optimizer with a learning rate of 0.001, binary cross-entropy loss (suitable for binary classification), and Area Under the Curve (AUC) as the evaluation metric.
Model Training: The model is trained for 20 epochs with a batch size of 8. Class weights are used during training to handle unbalanced data. Validation data are used to assess the performance of the model on unseen data during training.
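The steps above can be sketched in Keras (assuming TensorFlow is available; this is an illustrative reconstruction of the described architecture and weighting scheme, not the authors' exact code):

```python
import numpy as np
import tensorflow as tf

def build_model(n_features):
    """Sequential network as described: three ReLU hidden layers
    (128, 64, 32 neurons) and one sigmoid output for binary
    classification, compiled with Adam (lr=0.001), binary
    cross-entropy loss, and AUC as the evaluation metric."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(n_features,)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                  loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC()])
    return model

def class_weights(y):
    """Weight each class by (total samples) / (samples in that class),
    giving the minority class more importance during training."""
    total = len(y)
    return {int(c): total / np.sum(y == c) for c in np.unique(y)}

# Training would then look like (X_*, y_* from the stratified 90/10 split):
# model = build_model(X_train.shape[1])
# model.fit(X_train, y_train, epochs=20, batch_size=8,
#           class_weight=class_weights(y_train),
#           validation_data=(X_val, y_val))
```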
Model Evaluation. Response rate, reaction time, and participation are calculated to assess the effectiveness of personalized notifications. The response rate is the ratio of the number of responses to the notifications sent. The reaction time is defined as the time between receiving a notification and starting self-reporting. Participation is calculated as the total number of daily active participants during the experiment period. These metrics are compared between a control group receiving classic ESM notifications with fixed intervals (ALL group) and a treatment group receiving personalized ML-based notifications (ML group).
A Sampling-Density-Aware Notification Delivery Process. Instead of delivering notifications immediately when the inter-notification time has passed (interval-based), randomly (signal-based), or upon occurrence of an event such as a change in location (event-based), our proposed system consults an ML model (see Fig. 1, and Algorithm 1). This model, informed by a feature vector that represents the current context, decides whether the moment is opportune for notification delivery. If not, the model is consulted every minute until an opportune moment is predicted. In case the model fails to predict an opportune moment within another inter-notification period, to reduce the bias introduced by favoring opportune moments and to guarantee sampling density (a spread of the notifications over the day in predefined time windows), a notification is sent regardless.
When a notification receives a response within a minute after delivery, the corresponding context is labeled ‘opportune’ and used for retraining the model (online feedback, one instance). If a participant responds later, the moment of delivering the notification is considered ‘inopportune’, while the response moment itself is labeled as ‘opportune’ (two instances). See Algorithm 1 for the pseudocode. These labeled vectors are used for incremental model fitting, explained in detail in the following subsections. Our implementation extends the Experiencer software [22] and is available on GitHub.
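The delivery decision and the labeling rule can be summarized in the following sketch (our own simplification of the process described above; function names and the labeled-instance representation are placeholders, not the actual Algorithm 1):

```python
OPPORTUNE = 1
INOPPORTUNE = 0

def schedule_notification(model_predicts_opportune, minutes_elapsed,
                          inter_notification_minutes):
    """Decide whether to send a notification at this minute.

    The model is consulted every minute; a notification is sent when it
    predicts an opportune moment. If a full inter-notification period
    passes without one, a notification is sent regardless, guaranteeing
    sampling density and reducing bias toward convenient moments."""
    if model_predicts_opportune:
        return True
    return minutes_elapsed >= inter_notification_minutes

def label_interaction(response_delay_minutes):
    """Label contexts for online retraining based on response delay.

    A response within one minute labels the delivery context opportune
    (one training instance). A later response labels the delivery
    context inopportune and the response context opportune (two)."""
    if response_delay_minutes <= 1:
        return [("delivery", OPPORTUNE)]
    return [("delivery", INOPPORTUNE), ("response", OPPORTUNE)]
```

For instance, with the study's 105-minute inter-notification time, `schedule_notification(False, 105, 105)` triggers the fallback notification even though the model never predicted an opportune moment.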
4 Results
We collected data over a period of 2 months. The distribution between the two groups was monitored during recruitment; however, several participants in both groups, but mostly in the ML group, did not begin participation. This resulted in imbalanced group sizes: 10 participants in the ML group and 27 in the ALL group.
This imbalance affects the validity of the results, and readers should interpret the findings with caution. However, we observed patterns that suggest potential insights, which we believe are valuable to share despite the possibility of confounding factors. The descriptive statistics in Table 1 summarize the response behavior of the participants.
Table 1. Descriptive statistics

Group | N  | Read Count (Total / M / SD) | Received Count (Total / M / SD)
ALL   | 27 | 982 / 36.37 / 27.52         | 1631 / 60.40 / 39.82
ML    | 10 | 474 / 47.40 / 30.70         | 697 / 69.70 / 42.10
In our study, the number of participants varied daily, making it essential to assess the response rate in a way that accounts for both engagement levels and the number of active participants. With a more balanced dataset and a larger, more stable participant pool, such a consideration would not be necessary. However, given the fluctuations in participation, a daily response rate calculation could misrepresent engagement, especially on days with few participants, where even a small number of responses could lead to an inflated response rate value for that day.
To address this limitation and provide a more reliable measure, we calculated the daily response rate (responses divided by notifications received), then weighted it by the number of active participants, and finally normalized it by the expected number of participants (i.e., the total number of individuals who opted in to participate and used the smartwatch at least once during the study; N in Table 1). This approach ensures that days with a larger number of active participants have a proportionally greater influence on the overall engagement measure.
\[ \text{weighted response rate}_d = \frac{n_d}{N} \cdot \frac{1}{n_d} \sum_{i=1}^{n_d} \frac{\text{responses}_{i,d}}{\text{notifications received}_{i,d}} \]

where \( d \) is the current day, \( n_d \) is the number of active participants on day \( d \), \(\text {responses}_{i,d}\) is the number of responses by participant \( i \) on day \( d \), \(\text {notifications received}_{i,d}\) is the number of notifications received by participant \( i \) on day \( d \), and \( N \) is the expected number of participants.
By applying this weighted method, we mitigate the distortions caused by daily fluctuations in participation. This provides a more accurate assessment of participant engagement and ensures that trends reflect actual patterns rather than being skewed by days with particularly low or high participation.
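Under our reading of the description above, the weighted daily response rate can be computed as follows (a sketch; variable names and the zero-notification guard are our own):

```python
def weighted_daily_response_rate(responses, notifications_received, N):
    """Weighted response rate for one day.

    `responses` and `notifications_received` hold per-participant counts
    for the n_d participants active on that day; `N` is the expected
    number of participants in the study. The mean per-participant
    response rate is weighted by n_d and normalized by N, so days with
    more active participants count proportionally more."""
    n_d = len(responses)
    if n_d == 0:
        return 0.0
    mean_rate = sum(r / max(n, 1) for r, n in
                    zip(responses, notifications_received)) / n_d
    return (n_d / N) * mean_rate

# Example: 3 of 10 expected participants active on a given day,
# with individual rates 4/5, 2/4, and 5/5.
rate = weighted_daily_response_rate([4, 2, 5], [5, 4, 5], N=10)
```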
A Welch’s t-test was conducted to compare the mean response rates between the two groups. This test was chosen due to the class imbalance and the different sample sizes between the groups.
The mean response rate of the ML group was 10.2% higher than that of the ALL group, indicating better engagement when personalization is applied. However, the difference in mean response rates between the ALL group (\(M = 0.57\), \(SD = 0.17\), \(n = 27\)) and the ML group (\(M = 0.67\), \(SD = 0.11\), \(n = 10\)) was not statistically significant, \(t(19.87) = -1.98\), \(p = .057\). The effect size, calculated using Cohen’s d, was \(d = -0.62\). The descriptive means nevertheless indicate promising trends that suggest potential improvements with our ML-based approach.
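For reference, this comparison can be reproduced with standard tools. The sketch below assumes scipy and numpy are available and that per-participant response rates are supplied as arrays; it is an illustration, not the authors' analysis script:

```python
import numpy as np
from scipy import stats

def compare_groups(all_rates, ml_rates):
    """Welch's t-test (unequal variances, appropriate for the
    imbalanced group sizes) plus Cohen's d with a pooled SD."""
    t, p = stats.ttest_ind(all_rates, ml_rates, equal_var=False)
    n1, n2 = len(all_rates), len(ml_rates)
    s1, s2 = np.var(all_rates, ddof=1), np.var(ml_rates, ddof=1)
    pooled_sd = np.sqrt(((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2))
    d = (np.mean(all_rates) - np.mean(ml_rates)) / pooled_sd
    return t, p, d
```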
Fig. 2.
Day-by-day mean response rate comparison between the groups.
The mean reaction time for each group was also calculated (see Fig. 3). The mean reaction time of the ML group was 9.6% faster than that of the ALL group, which could indicate a positive impact of personalization.
To compare the mean reaction times between the two groups, a Welch’s t-test was conducted. The results indicated that there was no statistically significant difference in mean reaction times between the ALL group (\(M = 12382.07\), \(SD = 1529.86\), \(n = 27\)) and the ML group (\(M = 11246.22\), \(SD = 1441.77\), \(n = 10\)), \(t(19.64) = 1.98\), \(p = .062\). The effect size, calculated using Cohen’s d, was \(d = 0.73\) (see Fig. 3).
Fig. 3.
Day-by-day mean reaction time comparison between the groups.
Although our initial statistical tests did not reveal significant differences at the conventional \(\alpha =0.05\) threshold, the descriptive statistics suggested that the ML-based approach had potential, with trends indicating higher response rates and faster reaction times. However, the imbalance between groups reduced the statistical power of our tests, increasing the likelihood of a Type II error, i.e., failing to detect an effect that may exist.
These findings suggest that with a larger, more balanced sample, stronger statistical evidence may emerge. While our results should be interpreted with caution, they provide preliminary insights that warrant further investigation. To support future research, we have shared our source code, encouraging replication and studies with increased participation.
5 Discussion and Future Work
The class imbalance in our dataset, with negative labels outnumbering positive ones approximately fivefold, presents a challenge that must be carefully considered in model training. To mitigate the impact of this imbalance, we employed techniques such as oversampling the minority class and incorporating class weights during training. While these methods help improve model robustness, they do not fully eliminate potential biases. Future research would benefit from larger, more balanced datasets to further validate these findings. Additionally, academic research should prioritize open data-sharing practices, particularly for interaction-related datasets, to facilitate the creation of larger, more diverse datasets that collectively improve model training and generalizability.
We utilized a server-edge architecture for our online supervised learning, which provides flexibility for real-time adaptation. While our approach shows promise, privacy considerations remain crucial. Federated learning offers a potential alternative that could enhance data privacy while still enabling centralized model training [20]. Although not yet widely adopted in ESM tools, transmitting anonymized model parameters instead of raw data could support collaborative model training across distributed edge devices. Future research should explore the feasibility and trade-offs of this approach in real-world ESM applications.
Although our current method involves online supervised learning on smartwatches, the flexibility of our system allows for continuous or interval-based model updates. This adaptability could help capture evolving user behavior patterns and improve the responsiveness of notification delivery strategies. However, the effectiveness of such updates depends on the stability of behavior patterns over time, and further investigation is needed to determine optimal update intervals and mechanisms.
Future work could also explore meta-analysis techniques by clustering local models from different devices and aggregating them based on participant characteristics. Linking these clusters to traits collected through intake surveys could enable more personalized notification strategies while mitigating the cold start problem. For instance, new study participants could receive a model aligned with their characteristics rather than a generic global model. While this approach holds potential, its effectiveness in real-world scenarios remains to be fully evaluated.
A lightweight neural network was trained as a general model and deployed on smartwatches for online training. While this model was selected for its ability to capture potential non-linearities in individual user behavior, exploring more interpretable and explainable models, such as random forests, remains a valuable direction for future research in this domain.
The choice of online supervised learning in our study was motivated by the availability of labeled data from previous ESM studies, enabling deterministic labeling of feature vectors based on user interactions. However, as we consider expanding the feature set in future work to incorporate additional sensors and complex feature interactions, Reinforcement Learning (RL) presents an alternative that may be better suited for decision-making in uncertain environments. RL could allow models to learn optimal notification strategies through interaction rather than relying on predetermined labels [40]. However, its application in ESM settings would require careful design, particularly in defining reward functions that align with research objectives.
In this study, we incorporated logic to explicitly enforce inter-notification time constraints, ensuring appropriate sampling density. If reinforcement learning were to be applied, these constraints should be embedded in the reward function. Rather than optimizing for response rates alone, the reward function should also account for the required notification frequency and spacing to maintain the ecological validity of the ESM approach. Specifically, the model should be incentivized to send notifications in alignment with predefined intervals while discouraging excessive clustering or overly sparse sampling. Implementing such an approach would require further experimentation to balance exploration, exploitation, and compliance with study requirements.
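As a purely illustrative sketch of such a reward function (entirely our own, not an implemented component; all parameter names and values are assumptions), spacing constraints could enter the reward as penalties alongside the response bonus:

```python
def notification_reward(responded, minutes_since_last, min_gap, max_gap,
                        response_bonus=1.0, spacing_penalty=0.5):
    """Reward that balances responses against sampling-density constraints.

    A response earns a bonus, while sending too soon after the previous
    notification (clustering) or waiting too long (overly sparse
    sampling) is penalized, mirroring the inter-notification windowing
    logic used in this study."""
    reward = response_bonus if responded else 0.0
    if minutes_since_last < min_gap:
        reward -= spacing_penalty   # discourage excessive clustering
    elif minutes_since_last > max_gap:
        reward -= spacing_penalty   # discourage overly sparse sampling
    return reward
```

Balancing the bonus and penalty magnitudes against exploration would itself require experimentation, as noted above.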
While our findings and methodological choices are subject to limitations, they provide valuable insights into optimizing notification delivery in ESM studies. The software and techniques introduced here demonstrate the potential for more adaptive, data-driven approaches. We encourage future research to build upon these findings, particularly through larger-scale studies that address the current constraints and further explore the integration of reinforcement learning and privacy-preserving techniques.
6 Conclusion
This paper addresses the pressing issue of notification timing in Experience Sampling Method (ESM) studies, where participant engagement often wanes due to frequent and poorly timed notifications. By leveraging machine learning (ML) methods, particularly through server-edge architecture, we demonstrated the potential to enhance notification delivery strategies. Our findings highlighted the effectiveness of online supervised learning on smartwatches, enforced with additional windowing logic to attain ecological validity. While our data was too small to demonstrate the gains convincingly, the results show promising trends and call for more extensive validation studies. We discussed the importance of addressing dataset imbalance and the potential of federated learning for preserving user privacy while still enabling collaborative model training. Future research should explore meta-analysis techniques for personalized notification strategies, simulating participant behavior for improved model training, and leveraging reinforcement learning to optimize notification timing in dynamic environments. Our study underscored the promise of ML-driven approaches in enhancing the quality and relevance of context-aware ESM data collection, ultimately advancing our understanding of human behavior in real-world contexts.
Acknowledgment
This project was financed by the Dutch Research Council (NWO), grant number 628.011.214.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
References
1.
Avraham Bahir, R., Parmet, Y., Tractinsky, N.: Effects of visual enhancements and delivery time on receptivity of mobile push notifications. In: Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1–6 (2019)
2.
Bachmann, A., et al.: ESMAC: a web-based configurator for context-aware experience sampling apps in ambulatory assessment. In: Proceedings of the 5th EAI International Conference on Wireless Mobile Communication and Healthcare, pp. 15–18 (2015)
3.
Balebako, R., Schaub, F., Adjerid, I., Acquisti, A., Cranor, L.: The impact of timing on the salience of smartphone app privacy notices. In: Proceedings of the 5th Annual ACM CCS Workshop on Security and Privacy in Smartphones and Mobile Devices, pp. 63–74 (2015)
4.
van Berkel, N., Goncalves, J., Lovén, L., Ferreira, D., Hosio, S., Kostakos, V.: Effect of experience sampling schedules on response rate and recall accuracy of objective self-reports. Int. J. Hum. Comput. Stud. 125, 118–128 (2019)
5.
Bhattacharya, S., Lane, N.D.: From smart to deep: robust activity recognition on smartwatches using deep learning. In: 2016 IEEE International Conference on Pervasive Computing and Communication Workshops (PerCom Workshops), pp. 1–6. IEEE (2016)
6.
Bidargaddi, N., et al.: To prompt or not to prompt? A microrandomized trial of time-varying push notifications to increase proximal engagement with a mobile health app. JMIR Mhealth Uhealth 6(11), e10123 (2018)
7.
Bishop, C.M.: Neural networks for pattern recognition. Oxford University Press (1995)
8.
Csikszentmihalyi, M., Larson, R., et al.: Flow and the foundations of positive psychology, vol. 10. Springer (2014)
9.
De Russis, L., Monge Roffarello, A.: On the benefit of adding user preferences to notification delivery. In: Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems, pp. 1561–1568 (2017)
10.
Gonul, S., Namli, T., Baskaya, M., Sinaci, A.A., Cosar, A., Toroslu, I.H.: Optimization of just-in-time adaptive interventions using reinforcement learning. In: Recent Trends and Future Technology in Applied Intelligence: 31st International Conference on Industrial Engineering and Other Applications of Applied Intelligent Systems, IEA/AIE 2018, Montreal, QC, Canada, June 25-28, 2018, Proceedings 31, pp. 334–341. Springer (2018)
11.
Gouveia, R., Karapanos, E.: Footprint tracker: supporting diary studies with lifelogging. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 2921–2930 (2013)
12.
He, H., Garcia, E.A.: Learning from imbalanced data. IEEE Trans. Knowl. Data Eng. 21(9), 1263–1284 (2009)
13.
Hernandez, J., McDuff, D., Infante, C., Maes, P., Quigley, K., Picard, R.: Wearable ESM: differences in the experience sampling method across wearable devices. In: Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services, pp. 195–205 (2016)
14.
Ho, B.J., Balaji, B., Koseoglu, M., Srivastava, M.: Nurture: notifying users at the right time using reinforcement learning. In: Proceedings of the 2018 ACM International Joint Conference and 2018 International Symposium on Pervasive and Ubiquitous Computing and Wearable Computers, pp. 1194–1201 (2018)
15.
Hoi, S.C., Sahoo, D., Lu, J., Zhao, P.: Online learning: a comprehensive survey. Neurocomputing 459, 249–289 (2021)
16.
Hormuth, S.E.: The sampling of experiences in situ. J. Pers. 54(1), 262–293 (1986)
17.
Iida, M., Shrout, P.E., Laurenceau, J.P., Bolger, N.: Using diary methods in psychological research (2012)
18.
Intille, S.S., Rondoni, J., Kukla, C., Ancona, I., Bao, L.: A context-aware experience sampling tool. In: CHI’03 Extended Abstracts on Human Factors in Computing Systems, pp. 972–973 (2003)
19.
Iqbal, S.T., Bailey, B.P.: Effects of intelligent notification management on users and their tasks. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 93–102 (2008)
20.
Kairouz, P., et al.: Advances and open problems in federated learning. Found. Trends® Mach. Learn. 14(1–2), 1–210 (2021)
21.
Kapoor, A., Horvitz, E.: Experience sampling for building predictive user models: a comparative study. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 657–666 (2008)
22.
Khanshan, A., Van Gorp, P., Markopoulos, P.: Experiencer: an open-source context-sensitive wearable experience sampling tool. In: International Conference on Pervasive Computing Technologies for Healthcare, pp. 315–331. Springer (2022)
23.
Khanshan, A., Van Gorp, P., Markopoulos, P.: Simulating participant behavior in experience sampling method research. In: Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems, pp. 1–7 (2023)
24.
Khanshan, A., Van Gorp, P., Nuijten, R., Markopoulos, P.: Assessing the influence of physical activity upon the experience sampling response rate on wrist-worn devices. Int. J. Environ. Res. Public Health 18(20), 10593 (2021)
25.
Lathia, N., Rachuri, K.K., Mascolo, C., Rentfrow, P.J.: Contextual dissonance: design bias in sensor-based experience sampling methods. In: Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing, pp. 183–192 (2013)
26.
Lee, J., Kwon, J., Kim, H.: Reducing distraction of smartwatch users with deep learning. In: Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct, pp. 948–953 (2016)
27.
Li, T., Haines, J.K., De Eguino, M.F.R., Hong, J.I., Nichols, J.: Alert now or never: understanding and predicting notification preferences of smartphone users. ACM Trans. Comput. Hum. Interact. 29(5), 1–33 (2023)
28.
Lutze, R., Waldhör, K.: Improving dialogue design and control for smartwatches by reinforcement learning based behavioral acceptance patterns. In: Human-Computer Interaction. Human Values and Quality of Life: Thematic Area, HCI 2020, Held as Part of the 22nd International Conference, HCII 2020, Copenhagen, Denmark, July 19–24, 2020, Proceedings, Part III 22, pp. 75–85. Springer (2020)
29.
Markopoulos, P., Batalas, N., Timmermans, A.: On the use of personalization to enhance compliance in experience sampling. In: Proceedings of the European Conference on Cognitive Ergonomics 2015, pp. 1–4 (2015)
30.
Mehrotra, A., Hendley, R., Musolesi, M.: PrefMiner: mining user's preferences for intelligent mobile notification management. In: Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, pp. 1223–1234 (2016)
31.
Mehrotra, A., Hendley, R., Musolesi, M.: Interpretable machine learning for mobile notification management: an overview of PrefMiner. GetMobile: Mobile Comput. Commun. 21(2), 35–38 (2017)
32.
Morrison, L.G., et al.: The effect of timing and frequency of push notifications on usage of a smartphone-based stress management intervention: an exploratory trial. PLoS ONE 12(1), e0169162 (2017)
33.
Muralidharan, A.: Near real time AI personalization for notifications at LinkedIn. In: Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining, pp. 1648–1648 (2022)
34.
Okoshi, T., Tsubouchi, K., Taji, M., Ichikawa, T., Tokuda, H.: Attention and engagement-awareness in the wild: a large-scale study with adaptive notifications. In: 2017 IEEE International Conference on Pervasive Computing and Communications (PerCom), pp. 100–110. IEEE (2017)
35.
Pham, X.L., Nguyen, T.H., Hwang, W.Y., Chen, G.D.: Effects of push notifications on learner engagement in a mobile learning app. In: 2016 IEEE 16th International Conference on Advanced Learning Technologies (ICALT), pp. 90–94. IEEE (2016)
36.
Poppinga, B., Heuten, W., Boll, S.: Sensor-based identification of opportune moments for triggering notifications. IEEE Pervasive Comput. 13(1), 22–29 (2014)
37.
Rondoni, J.C.: Context-aware experience sampling for the design and study of ubiquitous technologies. Ph.D. thesis, Massachusetts Institute of Technology (2003)
38.
Seo, J., Lee, S., Lee, G.: An experience sampling system for context-aware mobile application development. In: Design, User Experience, and Usability. Theory, Methods, Tools and Practice: First International Conference, DUXU 2011, Held as Part of HCI International 2011, Orlando, FL, USA, July 9-14, 2011, Proceedings, Part I 1, pp. 648–657. Springer (2011)
39.
Stone, A.A., Kessler, R.C., Haythornthwaite, J.A.: Measuring daily events and experiences: decisions for the researcher. J. Pers. 59(3), 575–607 (1991)
40.
Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction. MIT Press (2018)
41.
Wang, S., Zhang, C., Kröse, B., van Hoof, H.: Optimizing adaptive notifications in mobile health interventions systems: reinforcement learning from a data-driven behavioral simulator. J. Med. Syst. 45, 1–8 (2021)
42.
Yuan, Y., Muralidharan, A., Nandy, P., Cheng, M., Prabhakar, P.: Offline reinforcement learning for mobile notifications. In: Proceedings of the 31st ACM International Conference on Information & Knowledge Management, pp. 3614–3623 (2022)
43.
Zeng, C., Cui, L., Wang, Z.: An exponential time-aware recommendation model for mobile notification services. In: Pacific-Asia Conference on Knowledge Discovery and Data Mining, pp. 592–603. Springer (2017)