
2021 | Original Paper | Book Chapter

7. Attention-Based Facial Behavior Analytics in Social Communication

Authors: Lezi Wang, Chongyang Bai, Maksim Bolonkin, Judee K. Burgoon, Norah E. Dunbar, V. S. Subrahmanian, Dimitris Metaxas

Published in: Detecting Trust and Deception in Group Interaction

Publisher: Springer International Publishing


Abstract

In this study, we address the cross-domain problem of applying computer vision approaches to reason about human facial behavior while people play The Resistance game. To capture these facial behaviors, we first collect several hours of video in which participants playing The Resistance game assume the roles of deceivers (spies) or truth-tellers (villagers). We develop a novel attention-based neural network (NN) that advances the state of the art in understanding how a NN predicts the players' roles. This is accomplished by learning to identify the pixels and frames that are most discriminative and contribute most to the NN's inference. We demonstrate the effectiveness of our attention-based approach in discovering the frames and facial Action Units (AUs) that drive the NN's class decision. Our results are consistent with current communication theory on deception.
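The interpretability idea in the abstract, scoring each frame and using normalized attention weights to reveal which frames drive the classification, can be sketched minimally as temporal attention pooling. The sketch below is an illustrative assumption, not the authors' implementation: the function and parameter names (`attention_pool`, `w_att`) are hypothetical, and the frame features stand in for whatever per-frame representation the network extracts.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(frame_features, w_att):
    """Score each frame, normalize scores with softmax, and return
    the attention-weighted video representation plus the weights.

    frame_features: (T, D) array, one D-dim feature vector per frame.
    w_att: (D,) learned scoring vector (here, fixed for illustration).
    """
    scores = frame_features @ w_att       # (T,) one scalar per frame
    alpha = softmax(scores)               # attention weights, sum to 1
    pooled = alpha @ frame_features       # (D,) weighted average over frames
    return pooled, alpha

# Toy example: 8 frames of 16-dim features.
rng = np.random.default_rng(0)
T, D = 8, 16
feats = rng.normal(size=(T, D))
w = rng.normal(size=D)
video_vec, alpha = attention_pool(feats, w)
```

Because `alpha` is a probability distribution over frames, inspecting its largest entries gives exactly the kind of frame-level attribution the abstract describes: the frames the classifier relied on most when deciding spy versus villager.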


Metadata
Title
Attention-Based Facial Behavior Analytics in Social Communication
Authors
Lezi Wang
Chongyang Bai
Maksim Bolonkin
Judee K. Burgoon
Norah E. Dunbar
V. S. Subrahmanian
Dimitris Metaxas
Copyright year
2021
DOI
https://doi.org/10.1007/978-3-030-54383-9_7