ABSTRACT
In 1997, Rosalind Picard introduced the fundamental concepts of affect recognition [1]. Since then, multimodal interfaces such as brain-computer interfaces (BCIs), RGB and depth cameras, and physiological wearables have been used to study human emotion. Much of the work in this field focuses on a single modality to recognize emotion. However, a wealth of additional information becomes available when multimodal data are incorporated. With this in mind, the aim of this workshop is to examine current and future research activities and trends in ubiquitous emotion recognition through the fusion of data from various multimodal, mobile devices.
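To make the fusion idea concrete, one common approach is late fusion: each modality's classifier outputs a probability distribution over emotion classes, and the distributions are averaged into a single prediction. The sketch below is purely illustrative; the modality names, class labels, and probability values are assumptions, not part of any specific system discussed at the workshop.

```python
# Minimal late-fusion sketch (illustrative assumptions throughout):
# each modality contributes a probability distribution over the same
# emotion classes, and an unweighted average combines them.
EMOTIONS = ["happy", "sad", "neutral"]

def late_fusion(modality_probs):
    """Average per-modality class probabilities and pick the top class.

    modality_probs maps a modality name to a list of probabilities,
    one per entry in EMOTIONS. An average of distributions is itself
    a distribution, so the fused output still sums to 1.
    """
    n = len(modality_probs)
    fused = [sum(p[i] for p in modality_probs.values()) / n
             for i in range(len(EMOTIONS))]
    return EMOTIONS[fused.index(max(fused))], fused

# Hypothetical per-modality outputs for a single time window:
label, fused = late_fusion({
    "face":       [0.7, 0.1, 0.2],  # e.g. an RGB facial-expression model
    "physiology": [0.5, 0.3, 0.2],  # e.g. wearable heart-rate features
    "eeg":        [0.4, 0.4, 0.2],  # e.g. a BCI-based classifier
})
print(label, fused)
```

In practice the average could be weighted by per-modality reliability, or the fusion could happen earlier, at the feature level; this unweighted variant is just the simplest baseline.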
- R. Picard. 1997. Affective Computing. MIT Press, Cambridge, MA.
- S. Sgaosough et al. 2016. Emotion Recognition Using Mobile Phones. Intl. Conf. on e-Health Networking, Applications, and Services. IEEE.
- G. Chittaranjan et al. 2011. Who's who with Big-Five: Analyzing and classifying personality traits with smartphones. 15th Annual Intl. Symp. on Wearable Computers (pp. 53--60). IEEE.
- Y.-A. de Montjoye et al. 2013. Predicting personality using novel mobile phone-based metrics. Intl. Conf. on Social Computing, Behavioral-Cultural Modeling, and Prediction (pp. 48--55). Springer.
- K. Rachuri et al. 2010. EmotionSense: A mobile phone based adaptive platform for experimental social psychology research. 12th Intl. Conf. on Ubiquitous Computing (pp. 281--290). ACM.
- H. Lee et al. 2012. Towards unobtrusive emotion recognition for affective social communication. Consumer Communications and Networking Conference (pp. 260--264). IEEE.
- A. Bogomolov et al. 2014. Daily stress recognition from mobile phone data, weather conditions, and individual traits. 22nd ACM Intl. Conf. on Multimedia (pp. 477--486).
- Z. Zhang et al. 2016. Multimodal Spontaneous Emotion Corpus for Human Behavior Analysis. IEEE Conf. on Computer Vision and Pattern Recognition.
- X. Zhang et al. 2014. BP4D-Spontaneous: A high-resolution spontaneous 3D dynamic facial expression database. Image and Vision Computing.
- X. Zhang et al. 2013. A high-resolution spontaneous 3D dynamic facial expression database. IEEE Intl. Conf. on Face and Gesture Recognition.
- C. Mühl, G. Chanel, B. Allison, and A. Nijholt. 2014. A survey of affective brain computer interfaces: Principles, state-of-the-art, and challenges. Brain-Computer Interfaces, Vol. 1, Issue 2, Taylor & Francis, 66--84.
- A. Nijholt. 2015. Multi-modal and multi-brain-computer interfaces: A review. 10th Intl. Conf. on Information, Communications and Signal Processing (ICICS) (pp. 1--5). IEEE.
- F. De la Torre et al. 2011. Facial expression analysis. In Visual Analysis of Humans (pp. 377--409). Springer.
- Y.-I. Tian et al. 2001. Recognizing action units for facial expression analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence (pp. 97--115).
- R. Picard et al. 2001. Toward machine emotional intelligence. IEEE Transactions on Pattern Analysis and Machine Intelligence (pp. 1175--1191).