1 Introduction: Content Personalization as a Key Strategic Element
2 Methodology
2.1 Literature Search
2.2 Keywords, Inclusion and Exclusion Criteria
3 Background
3.1 Recommender Systems as a Content Personalization Tool
Collaborative filtering. Collaborative filtering was one of the first personalization technologies to become widely available in the e-commerce domain (Montgomery and Smith 2009). It does not require an explicit user profile (Koren 2010); in the online retail context, it generates recommendations by predicting the utility of particular items to a specific user based on the votes of other users retrieved from a user database. To generate more specific suggestions, the items or services need to be rated by many customers (Breese et al. 1998).
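As a concrete illustration (a toy sketch, not taken from the cited works), a user-based collaborative filter can predict a missing rating as a similarity-weighted average of other users' votes; the ratings matrix below is invented:

```python
import numpy as np

# Hypothetical ratings matrix: rows = users, columns = items, 0 = unrated.
ratings = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [1, 0, 0, 4],
    [0, 1, 5, 4],
], dtype=float)

def cosine_sim(a, b):
    """Cosine similarity between two users' rating vectors."""
    norm = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b) / norm if norm else 0.0

def predict(user, item):
    """Predict `user`'s rating of `item` as a similarity-weighted
    average of the votes of the users who did rate that item."""
    raters = [u for u in range(len(ratings))
              if ratings[u, item] > 0 and u != user]
    sims = np.array([cosine_sim(ratings[user], ratings[u]) for u in raters])
    if sims.sum() == 0:
        return 0.0
    votes = np.array([ratings[u, item] for u in raters])
    return float(sims @ votes / sims.sum())
```

Note that the prediction for a sparsely rated item rests on very few votes, which is exactly why many customer ratings are needed for specific suggestions.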
Content-based filtering. Content-based filtering analyses the content of information sources and creates a user profile from the customer's interests (past searches, item ratings, or preferences for specific goods), capturing regularities among the items and services that have been rated highly (Pazzani 1999). The system then relies on this profile to recommend those items that match the customer's needs and preferences reflected in it (Uçar and Karahoca 2015).
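A minimal sketch of this idea, with invented item features and ratings: the profile is averaged from highly rated items, and unseen items are ranked by how closely they match it.

```python
import numpy as np

# Hypothetical item descriptions as binary feature vectors
# (e.g. category/attribute flags derived from the item content).
features = {
    "laptop sleeve": np.array([1, 0, 1, 0], dtype=float),
    "usb-c hub":     np.array([1, 1, 0, 0], dtype=float),
    "garden hose":   np.array([0, 0, 0, 1], dtype=float),
    "hdmi cable":    np.array([1, 1, 0, 0], dtype=float),
}

def build_profile(rated):
    """User profile = mean feature vector of highly rated items (>= 4)."""
    liked = [features[i] for i, r in rated.items() if r >= 4]
    return np.mean(liked, axis=0)

def recommend(profile, exclude):
    """Rank unseen items by cosine match with the user profile."""
    def score(v):
        n = np.linalg.norm(profile) * np.linalg.norm(v)
        return float(profile @ v) / n if n else 0.0
    cands = {i: score(v) for i, v in features.items() if i not in exclude}
    return max(cands, key=cands.get)

history = {"usb-c hub": 5, "garden hose": 2}
best = recommend(build_profile(history), exclude=history)
```

With this invented history the top match is the item most similar to the liked electronics accessory, illustrating how the profile steers recommendations toward previously preferred content.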
Hybrid recommender systems. The approaches introduced above may be combined in hybrid systems that complement each other and eliminate each other's drawbacks (Uçar and Karahoca 2015). For instance, collaborative filtering is based on ratings of items made by other customers, which makes implementing a new recommender system challenging while these data are not yet available. Layering content-based filtering on top of collaborative filtering enables a deeper analysis of user profiles and prevents the collaborative component from generating immature suggestions in the early stages (Tran and Cohen 2000).
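One common hybridization, sketched below with purely illustrative scores and an assumed `min_ratings` threshold, is a weighted blend that leans on the content-based score until an item has accumulated enough ratings for collaborative filtering to be trustworthy:

```python
def hybrid_score(collab_score, content_score, n_ratings, min_ratings=20):
    """Blend the two recommenders; trust collaborative filtering more
    as the item accumulates ratings (mitigating the cold-start stage)."""
    w = min(n_ratings / min_ratings, 1.0)  # 0.0 for a brand-new item
    return w * collab_score + (1 - w) * content_score

new_item = hybrid_score(4.5, 3.0, n_ratings=0)     # -> 3.0, content only
mature_item = hybrid_score(4.5, 3.0, n_ratings=40) # -> 4.5, collaborative only
```

The linear ramp is one of several possible switching strategies; a deployed system might instead gate on rating variance or per-user history length.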
3.2 Emotions as a Contextual Variable in Content Personalization
4 Results
4.1 Strategies to Identify User Emotions
Stage | Description | Data acquisition method | Key contributors
---|---|---|---
Entry stage | When starting to use the recommender system, the user is in an entry mood caused by previous activities and actions unknown to the system, which nevertheless influence the user's decisions | • Matrix factorization with emotion-specific regularization, enriched with contextual parameters | • Porayska-Pomsta et al. (2007) • Baltrunas (2008) • Koren (2010) • Shi et al. (2010)
Consumption stage | During content consumption, the user experiences emotional responses activated by the content, which do not induce any further actions but are rather a passive response to the stimuli | • Explicit data can be collected through paper-and-pencil questionnaires and categorized according to the six universal emotions: happiness, sadness, surprise, fear, disgust, anger (Ekman and Rosenberg 1993) • Implicit emotional data can be collected by monitoring users' facial or voice expressions with dedicated emotion recognition tools and technologies (Polignano 2015) | • Pantic and Vinciarelli (2009) • Arapakis et al. (2009) • Soleymani et al. (2012) • Joho et al. (2011)
Exit stage | The emotions induced by the content during the exit stage influence the user's further actions | • Implicit data on users' emotions can be collected using facial or voice expression recognition technologies, as well as heart-rate sensors in certain domains such as physical interactive playgrounds | • Arapakis et al. (2009) • Yannakakis et al. (2008) • Soleymani et al. (2011)
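The matrix-factorization approach cited for the entry stage can be sketched as follows. This is a toy SGD example with an invented dataset, where a single per-mood bias term stands in for emotion-specific regularization with contextual parameters; it is not a reconstruction of any of the cited models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented (user, item, entry_mood, rating) observations.
MOODS = {"positive": 0, "neutral": 1, "negative": 2}
data = [(0, 0, "positive", 5), (0, 1, "negative", 2),
        (1, 0, "neutral", 4), (1, 1, "positive", 3),
        (2, 0, "negative", 3), (2, 1, "neutral", 2)]

n_users, n_items, k = 3, 2, 2
P = rng.normal(scale=0.1, size=(n_users, k))   # latent user factors
Q = rng.normal(scale=0.1, size=(n_items, k))   # latent item factors
mood_bias = np.zeros(len(MOODS))               # one offset per entry mood
mu = np.mean([r for *_, r in data])            # global average rating

def predict(u, i, mood):
    """Biased matrix factorization with an entry-mood context offset."""
    return mu + mood_bias[MOODS[mood]] + P[u] @ Q[i]

def sse():
    return sum((r - predict(u, i, m)) ** 2 for u, i, m, r in data)

sse_before = sse()
lr, reg = 0.05, 0.02
for _ in range(200):  # stochastic gradient descent over the ratings
    for u, i, mood, r in data:
        err = r - predict(u, i, mood)
        m = MOODS[mood]
        mood_bias[m] += lr * (err - reg * mood_bias[m])
        P[u], Q[i] = (P[u] + lr * (err * Q[i] - reg * P[u]),
                      Q[i] + lr * (err * P[u] - reg * Q[i]))
```

After training, the squared reconstruction error drops below its initial value, and the learned mood offsets let the same user-item pair receive different predicted utilities depending on the entry mood.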
4.2 Emotions Recognition from Facial Expressions
nViso. Developed by a Swiss company, this technology detects the six basic emotional groups described by Ekman and Rosenberg (1993) using a proprietary deep learning algorithm. According to the data provided on its website, nViso can capture the emotions of one person or a group of people in real time through a webcam that tracks facial muscle movements (nViso 2018). nViso partnered with IBM to create an emotional intelligence cloud solution capable of analyzing facial expressions to help financial advisors better understand their clients' financial needs (IBM 2018). Furthermore, together with ePAT Technologies, nViso is actively working on a smartphone-based medical device able to assess pain levels in real time by analyzing patients' facial muscle movements (IBM 2018; ePAT Technologies Ltd. 2017).
Affectiva. Analyzing twenty facial zones and trained on a database of videos and images (Affectiva 2018a), this emotion recognition software can detect seven emotions (anger, contempt, disgust, fear, happiness, sadness, and surprise) as well as measure a person's valence and arousal (Affectiva 2018b). Affectiva recently partnered with Voxpopme, a global video feedback software provider, to build a platform for advanced analysis of facial expressions within video feedback (Business Wire 2017). Furthermore, Affectiva helps clients develop analytics solutions in multiple domains, including healthcare, education, media and advertising, retail, and gaming (Affectiva 2018a).
EmoVu by Eyeris. This software solution exploits deep learning algorithms trained on large datasets covering people of various ages, ethnicities, genders, etc. EmoVu can recognize anger, disgust, fear, happiness, neutrality, sadness, and surprise, and can also measure the degree of arousal and valence (Eyeris 2018a). Eyeris specializes mostly in facial analytics and emotion recognition technology for the automotive sector, and its most prominent customers include Toyota Motor Corporation and Honda Motor Co. (Eyeris 2017). Furthermore, Eyeris recently partnered with AvatarMind, the creator of the iPal® robot, a humanoid robot that serves as a social companion, educator, and safety monitor for children and the elderly (Eyeris 2018b).
Kairos. This technology reports a person's six emotions, level of attention, and sentiment based on analyzed videos or images. Furthermore, the services provided by Kairos include age, ethnicity, and gender identification, as well as face detection and recognition in groups (Kairos 2018b). The emotion recognition software provided by Kairos has been implemented by companies such as The Interpublic Group of Companies, Legendary Entertainment, and PepsiCo, operating in multiple domains including advertising and media, retail, and banking and insurance (Kairos 2018a).
Microsoft Cognitive Services. The Cognitive Services pack provided by Microsoft can identify faces and emotional expressions of people by processing pictures and videos. The software identifies the six basic emotional groups described by Ekman and Rosenberg (1993), as well as contempt and neutrality (Microsoft 2018a). Microsoft provides its Cognitive Services to businesses involved in manufacturing, healthcare, media and telecommunications, education, banking and insurance, retail, etc. Featured clients include ABB Group, Daimler AG, Allergan, and Telefonica (Microsoft 2018b).
FaceReader by Noldus. This automatic recognition software analyzes up to 500 facial points to recognize emotions such as neutrality, contempt, boredom, interest, and confusion. Furthermore, FaceReader calculates gaze direction, head orientation, and person characteristics (Noldus 2018a). Noldus's clients are mostly involved in healthcare, retail, and education services and include Pfizer, GlaxoSmithKline, Carnegie Mellon University, the University of Maryland, and Johnson & Johnson (Noldus 2018b).
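Most of the services above return per-frame confidence scores for each emotion class. The sketch below assumes a hypothetical JSON payload (not the actual response format of nViso, Affectiva, or any other vendor listed here) and shows how a client might reduce frame-level scores to a single dominant emotion:

```python
import json
from collections import Counter

# Hypothetical response shape with invented values; real vendor APIs differ.
payload = json.loads("""
{"frames": [
  {"t": 0.0, "scores": {"happiness": 0.7, "surprise": 0.2, "neutral": 0.1}},
  {"t": 0.5, "scores": {"happiness": 0.6, "surprise": 0.3, "neutral": 0.1}},
  {"t": 1.0, "scores": {"neutral": 0.5, "happiness": 0.4, "surprise": 0.1}}
]}
""")

def dominant(frame):
    """Emotion with the highest confidence in a single frame."""
    return max(frame["scores"], key=frame["scores"].get)

def overall(frames):
    """Majority vote over per-frame dominant emotions (simple smoothing
    against single-frame misclassifications)."""
    return Counter(dominant(f) for f in frames).most_common(1)[0][0]
```

A personalization system would feed the smoothed label (or the raw valence/arousal values, where available) into the recommender as a contextual variable.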
4.3 Emotions Recognition from Speech
Vokaturi. This Amsterdam-based company developed a solution that can measure whether people are happy, sad, afraid, angry, or in a neutral state of mind directly from their voice. Vokaturi has been validated against multiple existing emotion databases and works in a language-independent manner (Vokaturi 2018). Vokaturi recently established a partnership with Affectiva to jointly develop an emotion-sensing product for the autonomous vehicle sector (Affectiva 2018).
Good Vibrations Company B.V. This solution recognizes a person's emotions by processing recorded voice. Good Vibrations measures the acoustic properties of the user's voice and performs a real-time analysis of the user's emotions to recognize stress, pleasure, and arousal (Good Vibrations Company B.V. 2018). According to the company's official website, the areas where the solution has the greatest potential include healthcare, advertising, gaming, sports, business, robotics, safety, and matching (Good Vibrations Company B.V. 2018).
audEERING. This Munich-based tech company developed intelligent audio analysis algorithms to help organizations integrate audio analysis technology into their products. Its embedded automated paralinguistic speech analysis detects a multitude of attributes from the human voice, such as emotions and affective states (valence, arousal, dominance), age, alertness, or personality (audEERING 2018a). audEERING's clients represent multiple domains such as manufacturing, telecommunications, education, and retail, and include BMW, Daimler, T-Mobile, Deutsche Welle, and Huawei (audEERING 2018b).
Beyond Verbal. The solution provided by this Israel-based company extracts multiple acoustic features from a speaker's voice in real time and provides insights into the emotional, health, and wellbeing condition of the user. Using voice-driven emotions analytics, the technology recognizes anger, sadness, neutrality, and happiness, and measures valence, arousal, and temper in the speaker's voice. Beyond Verbal's clients mostly come from the retail and media and marketing sectors and include Amdoc, FRONTLINE Selling, and Department26 (Beyond Verbal 2018).
Nemesysco. The company provides advanced voice analysis technologies for emotion detection, personality, and risk assessment. The technology is based on proprietary signal-processing algorithms that extract over 150 acoustic parameters from the voice and classify the collected properties into major emotional groups, including anger, happiness, satisfaction, and arousal. The key domains currently covered by Nemesysco are retail, banking and insurance, and security; key customers include Nestle, Allianz, and Europ Assistance (Nemesysco 2018).
In the table below, columns nViso through FaceReader are facial expressions recognition technologies; columns Vokaturi through Nemesysco are speech expressions recognition technologies.

Emotions & Application fields | nViso | Affectiva | EmoVu | Kairos | MS Cognitive Services | FaceReader | Vokaturi | Good Vibrations | audEERING | Beyond Verbal | Nemesysco
---|---|---|---|---|---|---|---|---|---|---|---
*Emotions* |||||||||||
Anger | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |||
Disgust | ✓ | ✓ | ✓ | ✓ | ✓ | ||||||
Fear | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ||||
Happiness | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ||
Sadness | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ||||
Surprise | ✓ | ✓ | ✓ | ✓ | |||||||
Neutral | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ||||
Contempt | ✓ | ✓ | ✓ | ||||||||
Boredom | ✓ | ||||||||||
Interest | ✓ | ||||||||||
Confusion | ✓ | ||||||||||
Satisfaction | ✓ | ||||||||||
Valence | ✓ | ✓ | ✓ | ✓ | |||||||
Arousal | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | |||||
Dominance | ✓ | ||||||||||
*Application fields* |||||||||||
Banking & Insurance | ✓ | ✓ | ✓ | ✓ | |||||||
Healthcare | ✓ | ✓ | ✓ | ✓ | ✓ | ||||||
Education | ✓ | ✓ | ✓ | ✓ | |||||||
Gaming | ✓ | ✓ | |||||||||
Security & Safety | ✓ | ✓ | |||||||||
Robotics | ✓ | ✓ | |||||||||
Manufacturing | ✓ | ✓ | ✓ | ✓ | |||||||
Media & Telecommunications | ✓ | ✓ | ✓ | ✓ | ✓ | ||||||
Retail & Marketing | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ ||||