ABSTRACT
This paper addresses the computational analysis of the individual communication skills of participants in a group. The analysis rests on three novel aspects. First, we extracted features from dialogue-act labels that capture how each participant communicates with the others. Second, the communication skill of each participant was assessed by 21 external raters with experience in human resource management, yielding reliable skill scores for each participant. Third, we used the MATRICS corpus, which includes three types of group discussion datasets, to analyze how situational variability across discussion types influences the results. We developed a regression model that infers a communication-skill score from multimodal features combining linguistic and nonverbal cues: prosody, speaking turns, and head activity. The experimental results show that the multimodal fusion model with feature selection achieved the best performance, an R² of 0.74 for communication skill. A feature analysis of the models revealed which task-dependent and task-independent features contribute to prediction performance.
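The pipeline described above — fusing multimodal features, selecting the most informative ones, and regressing a skill score — can be illustrated with a minimal sketch. This is not the authors' exact method; the data, the correlation-based feature filter, and the plain least-squares regressor are all illustrative assumptions standing in for the paper's actual features and model.

```python
# Illustrative sketch (NOT the paper's exact pipeline): fuse hypothetical
# multimodal features, apply simple filter-based feature selection, and
# fit a linear regression predicting a communication-skill score.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature matrix: 40 participants x 12 features, standing in
# for linguistic, prosodic, speaking-turn, and head-activity modalities.
X = rng.normal(size=(40, 12))
true_w = np.zeros(12)
true_w[[0, 3, 7]] = [1.5, -1.0, 1.0]        # only a few features matter
y = X @ true_w + 0.1 * rng.normal(size=40)  # skill scores with noise

# Filter-based selection: keep the k features most correlated
# (in absolute value) with the skill score.
k = 5
corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
selected = np.argsort(corr)[-k:]

# Least-squares regression on the selected features (with intercept).
Xs = np.column_stack([np.ones(len(y)), X[:, selected]])
w, *_ = np.linalg.lstsq(Xs, y, rcond=None)
pred = Xs @ w

# Coefficient of determination (R^2), the evaluation metric used in the paper.
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"selected features: {sorted(selected.tolist())}, R^2 = {r2:.2f}")
```

In the paper's setting, the feature filter and regressor would be chosen per discussion type, which is how task-dependent and task-independent features can be contrasted.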
Index Terms
- Estimating communication skills using dialogue acts and nonverbal features in multiple discussion datasets