Published in: Neural Processing Letters 1/2022

06-09-2021

SynSeq4ED: A Novel Event-Aware Text Representation Learning for Event Detection

Author: Tham Vo


Abstract

Event detection (ED) is an important task in natural language processing (NLP) that identifies instances of multiple event types mentioned in text. Recent models adopt advanced neural network architectures, such as long short-term memory (LSTM) networks and graph convolutional networks (GCNs), to capture the sequential and syntactic representations of texts and thereby improve ED performance. However, these neural models fail to sufficiently preserve both the comprehensive sequential meanings and the syntactic co-referencing relationships between words in a sentence. In this paper, we propose SynSeq4ED, a novel integration of a GCN-based syntactic text encoder and pre-trained BERT sequential embeddings with an event-aware masked-language mechanism. In SynSeq4ED, we present a joint text-embedding framework that effectively learns deep semantic representations of event triggers and arguments by combining pre-trained BERT, an event-aware masked-language strategy, and a GCN-based syntactic co-referencing text-encoding mechanism. The text representations produced by SynSeq4ED are then used to improve multiple ED tasks, including multiple event detection (MED) and few-shot learning event detection (FSLED). Extensive experiments on benchmark datasets demonstrate the effectiveness of SynSeq4ED compared with recent state-of-the-art baselines.
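To illustrate the syntactic-encoding idea the abstract describes, the sketch below shows a single GCN propagation step over a dependency-parse adjacency matrix: each token representation is updated by averaging over its syntactic neighbours and applying a learned linear map with ReLU. This is a minimal, pure-Python illustration of the standard GCN layer (Kipf and Welling style), not the SynSeq4ED implementation; the function names and the toy inputs are assumptions made for clarity.

```python
def matmul(a, b):
    """Plain-Python matrix product of two nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def gcn_layer(adj, h, w):
    """One GCN step: H' = ReLU(A_hat . H . W).

    adj : n x n dependency-graph adjacency matrix (0/1, no self-loops)
    h   : n x d token feature matrix (e.g. contextual embeddings)
    w   : d x d' learned weight matrix
    A_hat is adj with self-loops added, then row-normalized so each
    token averages over itself and its syntactic neighbours.
    """
    n = len(adj)
    # Add self-loops so each token keeps its own features.
    a = [[adj[i][j] + (1 if i == j else 0) for j in range(n)]
         for i in range(n)]
    # Row-normalize to average rather than sum neighbour features.
    a = [[v / sum(row) for v in row] for row in a]
    out = matmul(matmul(a, h), w)
    # ReLU non-linearity.
    return [[max(0.0, v) for v in row] for row in out]

# Toy 3-token "sentence": token 1 is the syntactic head of tokens 0 and 2.
adj = [[0, 1, 0],
       [1, 0, 1],
       [0, 1, 0]]
h = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # stand-in embeddings
w = [[1.0, 0.0], [0.0, 1.0]]               # identity weights for clarity
print(gcn_layer(adj, h, w))
```

Stacking several such layers lets information flow along multi-hop dependency paths, which is how GCN-based ED encoders connect trigger words to distant arguments; in the full model, `h` would come from the BERT encoder rather than being random toy vectors.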


Metadata
Title
SynSeq4ED: A Novel Event-Aware Text Representation Learning for Event Detection
Author
Tham Vo
Publication date
06-09-2021
Publisher
Springer US
Published in
Neural Processing Letters / Issue 1/2022
Print ISSN: 1370-4621
Electronic ISSN: 1573-773X
DOI
https://doi.org/10.1007/s11063-021-10627-2
