
2019 | Original Paper | Book Chapter

Integration of Driver Behavior into Emotion Recognition Systems: A Preliminary Study on Steering Wheel and Vehicle Acceleration

Authors: Sina Shafaei, Tahir Hacizade, Alois Knoll

Published in: Computer Vision – ACCV 2018 Workshops

Publisher: Springer International Publishing


Abstract

Current development of emotion recognition systems in cars focuses mostly on camera-based solutions that use the face as the main input data source. Modeling driver behavior in the automotive domain is also a challenging topic with great impact on the development of intelligent and autonomous vehicles. To study the correlation between driving behavior and the emotional state of the driver, we propose a multimodal system based on facial expressions and driver-specific behavior, namely steering wheel usage and changes in vehicle acceleration. The aim of this work is to investigate the impact of integrating driver behavior into emotion recognition systems and to build a structure that continuously classifies emotions in an efficient and non-intrusive manner. We consider driver behavior to be the typical range of interactions with the vehicle that represents responses to certain stimuli. To recognize facial emotions, we extract histogram values from key facial regions and combine them into a single vector, which is then used to train an SVM classifier. Following that, using machine learning techniques and statistical methods, we build two modules: a counter for abrupt car maneuvers, based on steering wheel rotation, and an aggressive-driver predictor, based on the variation of acceleration. Finally, all three modules are combined into one emotion classifier that predicts the emotional group of the driver with 94% accuracy on sub-samples. For the evaluation we used a real car simulator with 8 different participants as drivers.
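The two feature pipelines sketched in the abstract can be illustrated in a few lines. The region boxes, histogram bin count, sampling interval, and steering-velocity threshold below are illustrative assumptions, not the chapter's published parameters; the resulting feature vector would be the input to an SVM classifier (e.g. sklearn.svm.SVC).

```python
import numpy as np

# Hypothetical (x, y, w, h) boxes for key facial regions in a 100x100 face crop.
REGIONS = {"left_eye":  (10, 15, 30, 20),
           "right_eye": (60, 15, 30, 20),
           "mouth":     (30, 60, 40, 25)}

def facial_feature_vector(gray_face, bins=16):
    """Concatenate per-region intensity histograms into one feature vector."""
    parts = []
    for x, y, w, h in REGIONS.values():
        patch = gray_face[y:y + h, x:x + w]
        hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
        parts.append(hist / max(hist.sum(), 1))  # normalize each region
    return np.concatenate(parts)

def count_abrupt_maneuvers(wheel_angle_deg, dt=0.1, velocity_threshold=90.0):
    """Count steering samples whose angular velocity exceeds a threshold (deg/s)."""
    angular_velocity = np.abs(np.diff(wheel_angle_deg)) / dt
    return int(np.sum(angular_velocity > velocity_threshold))

face = np.random.default_rng(0).integers(0, 256, size=(100, 100))
print(facial_feature_vector(face).shape)  # (48,): 3 regions x 16 bins

angles = np.array([0.0, 2.0, 4.0, 30.0, 31.0, 5.0])  # one sharp turn, one sharp return
print(count_abrupt_maneuvers(angles))  # 2
```

An acceleration-based aggressiveness score would follow the same pattern, e.g. thresholding the variance of the longitudinal acceleration signal over a sliding window.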


Metadata
Title: Integration of Driver Behavior into Emotion Recognition Systems: A Preliminary Study on Steering Wheel and Vehicle Acceleration
Authors: Sina Shafaei, Tahir Hacizade, Alois Knoll
Copyright year: 2019
DOI: https://doi.org/10.1007/978-3-030-21074-8_32
