
Estimating communication skills using dialogue acts and nonverbal features in multiple discussion datasets

Published: 31 October 2016

ABSTRACT

This paper focuses on the computational analysis of the individual communication skills of participants in a group. The analysis tackles the problem from three novel angles. First, we extracted features from dialogue act labels that capture how each participant communicates with the others. Second, the communication skill of each participant was assessed by 21 external raters with experience in human resource management, yielding reliable skill scores for every participant. Third, we used the MATRICS corpus, which includes three types of group discussion data, to analyze the influence of situational variability with respect to discussion type. We developed a regression model to infer the communication skill score from multimodal features, combining linguistic features with nonverbal ones: prosody, speaking turns, and head activity. The experimental results show that the multimodal fusion model with feature selection achieved the best accuracy, an R² of 0.74 for communication skill. A feature analysis of the models revealed both task-dependent and task-independent features that contribute to prediction performance.
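The pipeline the abstract describes (early fusion of multimodal features, feature selection, regression, R² evaluation) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the feature values are synthetic stand-ins, and the choice of Ridge regression and univariate selection (SelectKBest) is an assumption made purely for the sketch.

```python
# Hypothetical sketch of a multimodal fusion + feature-selection regression,
# in the spirit of the pipeline described in the abstract.
# All feature values below are synthetic; real features would come from
# dialogue-act labels, prosody, speaking turns, and head activity.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
n_participants = 120

# Synthetic feature groups standing in for the paper's modalities.
linguistic = rng.normal(size=(n_participants, 10))  # dialogue-act features
prosodic   = rng.normal(size=(n_participants, 6))
turns      = rng.normal(size=(n_participants, 4))
head       = rng.normal(size=(n_participants, 5))

# Early fusion: concatenate all modality features into one vector per person.
X = np.hstack([linguistic, prosodic, turns, head])

# Synthetic "skill scores" correlated with a few features plus noise,
# so the selection step has something real to find.
y = 2.0 * X[:, 0] + X[:, 11] - X[:, 20] + rng.normal(scale=0.5, size=n_participants)

model = Pipeline([
    ("select", SelectKBest(f_regression, k=8)),  # keep the 8 most predictive features
    ("reg", Ridge(alpha=1.0)),                   # regularized linear regression
])

# Cross-validated R², the same metric the paper reports.
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"mean cross-validated R^2: {scores.mean():.2f}")
```

With real extracted features, the interesting step is inspecting which features survive selection, which is how a task-dependent vs. task-independent analysis like the paper's could proceed.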


Published in: ICMI '16: Proceedings of the 18th ACM International Conference on Multimodal Interaction
October 2016, 605 pages
ISBN: 9781450345569
DOI: 10.1145/2993148

          Copyright © 2016 ACM


Publisher: Association for Computing Machinery, New York, NY, United States


Overall acceptance rate: 453 of 1,080 submissions (42%)
