
2024 | Original Paper | Book Chapter

Multimodal Sentiment Analysis and Multimodal Emotion Analysis: A Review

Authors: Soumya Sharma, Srishti Sharma, Deepak Gupta

Published in: Proceedings of Third International Conference on Computing and Communication Networks

Publisher: Springer Nature Singapore


Abstract

Sentiment analysis, or opinion mining, is the process of identifying a person's overall outlook toward an event or entity. Emotion analysis, or affect analysis, is the process of identifying the underlying feeling a person holds toward an event or entity. A sentiment is typically classified into one of three categories, whereas an emotion can fall into many categories; moreover, an emotion carries an intensity, which a sentiment does not. This paper reviews multimodal sentiment analysis (MSA) and multimodal emotion analysis (MEA) techniques, and closes with a discussion of open challenges and future directions.
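The distinction the abstract draws between the two label spaces can be sketched as a pair of annotation schemas. This is an illustrative sketch only: the three polarity classes, the six emotion categories, and the [0, 1] intensity range are common conventions in the MSA/MEA literature, not the schema of any particular paper surveyed here.

```python
from dataclasses import dataclass
from enum import Enum

class Sentiment(Enum):
    # A sentiment label is one of three polarity classes and nothing more.
    NEGATIVE = -1
    NEUTRAL = 0
    POSITIVE = 1

class EmotionCategory(Enum):
    # An emotion label draws from a larger inventory (six categories here,
    # purely as an example; real datasets vary).
    ANGER = "anger"
    DISGUST = "disgust"
    FEAR = "fear"
    JOY = "joy"
    SADNESS = "sadness"
    SURPRISE = "surprise"

@dataclass
class EmotionLabel:
    category: EmotionCategory
    intensity: float  # e.g. in [0, 1]; sentiment labels carry no such score

    def __post_init__(self):
        if not 0.0 <= self.intensity <= 1.0:
            raise ValueError("intensity must lie in [0, 1]")

# A single multimodal sample may carry both kinds of annotation.
sample = {
    "text": "I can't believe we won!",
    "sentiment": Sentiment.POSITIVE,
    "emotion": EmotionLabel(EmotionCategory.JOY, intensity=0.9),
}
print(sample["emotion"].category.value)  # -> joy
```

The point of the sketch is structural: a sentiment is a single categorical value, while an emotion is a (category, intensity) pair, which is why MEA models often add a regression head alongside classification.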


Metadata
Title: Multimodal Sentiment Analysis and Multimodal Emotion Analysis: A Review
Authors: Soumya Sharma, Srishti Sharma, Deepak Gupta
Copyright year: 2024
Publisher: Springer Nature Singapore
DOI: https://doi.org/10.1007/978-981-97-0892-5_29