2015 | Original Paper | Book Chapter
A Distributed Architecture for Multimodal Emotion Identification
Authors: Marina V. Sokolova, Antonio Fernández-Caballero, María T. López, Arturo Martínez-Rodrigo, Roberto Zangróniz, José Manuel Pastor
Published in: Trends in Practical Applications of Agents, Multi-Agent Systems and Sustainability
This paper introduces a distributed multi-agent system architecture for multimodal emotion identification based on the simultaneous analysis of physiological parameters from wearable devices, human behaviors and activities, and facial micro-expressions. The wearable devices are equipped with electrodermal activity, electrocardiogram, heart rate, and skin temperature sensor agents. Facial expressions are monitored by a vision agent installed at the height of the human’s head, while the user’s activity is monitored by a second vision agent mounted overhead. The emotion is determined as a cooperative decision taken at a central agent node, denominated “Central Emotion Detection Node”, from the local decisions offered by three agent nodes called “Face Expression Analysis Node”, “Behavior Analysis Node”, and “Physiological Data Analysis Node”. In this way, emotion identification results are improved through an intelligent fuzzy-based decision-making technique.
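The fusion step described above — a central node combining local fuzzy decisions from the face, behavior, and physiological analysis nodes — can be sketched as follows. This is a minimal illustrative example only: the node names, weights, emotion labels, and the weighted-aggregation rule are assumptions for demonstration, not the authors' actual implementation.

```python
# Hypothetical sketch of central fuzzy decision fusion: each analysis node
# reports fuzzy membership degrees over a set of emotions, and the central
# node aggregates them with per-node confidence weights (all values here
# are illustrative assumptions, not from the paper).

def fuse_emotions(local_decisions, weights):
    """Combine per-node fuzzy memberships into one global decision.

    local_decisions: {node_name: {emotion: membership degree in [0, 1]}}
    weights:         {node_name: relative confidence of that node}
    Returns (winning_emotion, aggregate_scores).
    """
    total_weight = sum(weights.values())
    aggregate = {}
    for node, memberships in local_decisions.items():
        w = weights[node] / total_weight  # normalize node confidence
        for emotion, degree in memberships.items():
            aggregate[emotion] = aggregate.get(emotion, 0.0) + w * degree
    # Defuzzification by maximum aggregate membership
    return max(aggregate, key=aggregate.get), aggregate

# Example: three analysis nodes vote with fuzzy degrees (hypothetical data).
decisions = {
    "face":       {"joy": 0.7, "anger": 0.1, "neutral": 0.2},
    "behavior":   {"joy": 0.4, "anger": 0.2, "neutral": 0.4},
    "physiology": {"joy": 0.6, "anger": 0.3, "neutral": 0.1},
}
weights = {"face": 0.4, "behavior": 0.3, "physiology": 0.3}
emotion, scores = fuse_emotions(decisions, weights)
```

In this sketch, agreement among the three modalities on "joy" outweighs any single node's uncertainty, which is the intuition behind refining the emotion cooperatively rather than trusting one modality alone.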