Published in: Journal of Intelligent Information Systems 2/2024

30.11.2023 | Research

A mutually enhanced multi-scale relation-aware graph convolutional network for argument pair extraction

By: Xiaofei Zhu, Yidan Liu, Zhuo Chen, Xu Chen, Jiafeng Guo, Stefan Dietze

Abstract

Argument pair extraction (APE) is a fine-grained argument mining task that aims to identify the arguments offered by different participants in a discourse and to detect the interaction relationships between arguments from different participants. In recent years, many research efforts have addressed APE within a multi-task learning framework. Although these approaches have achieved encouraging results, they still face several challenges. First, they largely ignore the different types of sentence relationships as well as the different levels of information exchange among sentences. Second, they model interactions between argument pairs with either an explicit or an implicit strategy alone, neglecting the complementary effect of the two strategies. In this paper, we propose a novel Mutually Enhanced Multi-Scale Relation-Aware Graph Convolutional Network (MMR-GCN) for APE. Specifically, we first design a multi-scale relation-aware graph aggregation module to explicitly model the complex relationships between review and rebuttal passage sentences. In addition, we propose a mutual enhancement transformer module to implicitly and interactively enhance the representations of review and rebuttal passage sentences. We experimentally validate MMR-GCN by comparing it with state-of-the-art APE methods. Experimental results show that it considerably outperforms all baseline methods; the relative improvement of MMR-GCN over the best-performing baseline, MRC-APE, in terms of F1 score reaches 3.48% and 4.43% on the two benchmark datasets, respectively.
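The two modules described in the abstract can be illustrated with a minimal numpy sketch. This is not the paper's exact formulation: the function names, the row normalization, the ReLU activation, and the single-head residual cross-attention are illustrative assumptions. The sketch shows the general shape of (a) per-relation-type graph aggregation over sentence nodes and (b) mutual enhancement, where review and rebuttal sentence representations attend to each other.

```python
import numpy as np

def relation_aware_gcn_layer(H, adjs, Ws, W_self):
    """One relation-aware GCN layer: each relation type r has its own
    adjacency matrix adjs[r] and weight matrix Ws[r]; per-relation
    messages are summed with a self transform, then passed through ReLU."""
    out = H @ W_self                   # self-loop transform
    for A, W in zip(adjs, Ws):
        deg = A.sum(axis=1, keepdims=True)
        deg[deg == 0] = 1.0            # avoid division by zero for isolated nodes
        out += (A / deg) @ H @ W       # row-normalized per-relation aggregation
    return np.maximum(out, 0.0)        # ReLU

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def mutual_enhance(H_rev, H_reb, Wq, Wk, Wv):
    """Each passage's sentences attend to the other passage's sentences
    (single-head cross-attention); the result is added as a residual."""
    def attend(Q_in, KV_in):
        Q, K, V = Q_in @ Wq, KV_in @ Wk, KV_in @ Wv
        return softmax(Q @ K.T / np.sqrt(Q.shape[-1])) @ V
    return H_rev + attend(H_rev, H_reb), H_reb + attend(H_reb, H_rev)

# toy run: 4 review sentences, 3 rebuttal sentences, 8-dim features
rng = np.random.default_rng(0)
d = 8
H_rev, H_reb = rng.standard_normal((4, d)), rng.standard_normal((3, d))
adjs = [np.eye(4, k=1) + np.eye(4, k=-1),          # relation 1: adjacent sentences
        (rng.random((4, 4)) > 0.5).astype(float)]  # relation 2: e.g. topical links
Ws = [rng.standard_normal((d, d)) * 0.1 for _ in adjs]
W_self = rng.standard_normal((d, d)) * 0.1
H_rev = relation_aware_gcn_layer(H_rev, adjs, Ws, W_self)
H_rev2, H_reb2 = mutual_enhance(H_rev, H_reb,
                                *(rng.standard_normal((d, d)) * 0.1 for _ in range(3)))
print(H_rev2.shape, H_reb2.shape)
```

In the sketch, the explicit strategy lives in the hand-built relation-specific adjacency matrices, while the implicit strategy lives in the learned cross-attention weights; stacking both, as the abstract argues, lets the two complement each other.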


Metadata
Title
A mutually enhanced multi-scale relation-aware graph convolutional network for argument pair extraction
Authors
Xiaofei Zhu
Yidan Liu
Zhuo Chen
Xu Chen
Jiafeng Guo
Stefan Dietze
Publication date
30.11.2023
Publisher
Springer US
Published in
Journal of Intelligent Information Systems / Issue 2/2024
Print ISSN: 0925-9902
Electronic ISSN: 1573-7675
DOI
https://doi.org/10.1007/s10844-023-00826-9
