
2020 | OriginalPaper | Chapter

Document-Level Event Subject Pair Recognition

Authors : Zhenyu Hu, Ming Liu, Yin Wu, Jiexin Xu, Bing Qin, JinLong Li

Published in: Natural Language Processing and Chinese Computing

Publisher: Springer International Publishing


Abstract

In recent years, financial events in the stock market have increased dramatically. Automatically extracting valuable information from massive volumes of financial documents can provide effective support for the analysis of such events. This paper proposes an end-to-end, document-level subject pair recognition method, which aims to recognize the subject pair of an event, i.e. its subject and its object. Given a document and a predefined set of event types, the method outputs all subject pairs associated with each event type. Subject pair recognition is inherently a document-level extraction task, since the entire document must be scanned to produce the desired pairs. We construct a global document-level vector from sentence-level vectors encoded by BERT; this global vector is designed to capture the information carried by the whole document and is used to guide the extraction process, which proceeds sentence by sentence. By incorporating this global information, our method achieves superior experimental results.
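
For illustration, below is a minimal sketch of the idea the abstract describes: sentences are encoded with BERT, their sentence-level vectors are pooled into a global document-level vector, and that global vector guides sentence-by-sentence subject pair tagging. This is not the authors' exact architecture; the model name (bert-base-chinese), the [CLS]/mean pooling choices, the concatenation-based guidance, and the tagging head are all assumptions made for the sketch.

```python
# Minimal sketch of the abstract's idea, NOT the authors' exact model:
# encode sentences with BERT, pool sentence vectors into a global
# document-level vector, and use it to guide per-sentence tagging.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer


class DocLevelSubjectPairSketch(nn.Module):
    def __init__(self, num_labels: int, bert_name: str = "bert-base-chinese"):
        super().__init__()
        self.encoder = BertModel.from_pretrained(bert_name)
        hidden = self.encoder.config.hidden_size
        # Token state + global document vector -> per-token label scores.
        self.classifier = nn.Linear(2 * hidden, num_labels)

    def forward(self, input_ids, attention_mask):
        # input_ids / attention_mask: (num_sentences, seq_len) for one document.
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        token_states = out.last_hidden_state              # (S, L, H)
        sent_vecs = token_states[:, 0, :]                 # [CLS] as sentence vector (S, H)
        doc_vec = sent_vecs.mean(dim=0, keepdim=True)     # global document vector (1, H); mean pooling assumed
        doc_expanded = doc_vec.unsqueeze(1).expand_as(token_states)
        # Guide sentence-by-sentence extraction with the global document vector.
        logits = self.classifier(torch.cat([token_states, doc_expanded], dim=-1))
        return logits                                     # (S, L, num_labels)


tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
sentences = ["公司A拟收购公司B的全部股权。", "交易双方已签署协议。"]  # toy document
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
model = DocLevelSubjectPairSketch(num_labels=5)  # e.g. BIO tags for subject/object; label set assumed
with torch.no_grad():
    scores = model(batch["input_ids"], batch["attention_mask"])
print(scores.shape)  # (num_sentences, seq_len, num_labels)
```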


Literature
1. Surdeanu, M., Harabagiu, S.: Infrastructure for open-domain information extraction. In: Proceedings of the Human Language Technology, pp. 325–330 (2002)
2. Chieu, H.L., Ng, H.T.: A maximum entropy approach to information extraction from semi-structured and free text. In: Proceedings of the 18th National Conference on Artificial Intelligence, pp. 786–791 (2002)
3. Ahn, D.: The stages of event extraction. In: Proceedings of the Workshop on Annotations and Reasoning About Time and Events, pp. 1–8 (2006)
4. Chen, Y., Xu, L., Liu, K., et al.: Event extraction via dynamic multi-pooling convolutional neural networks. In: Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics, pp. 167–176 (2015)
5. Nguyen, T.H., Grishman, R.: Event detection and domain adaptation with convolutional neural networks. In: Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics, pp. 365–371 (2015)
6. Feng, X., Huang, L., Tang, D., et al.: A language independent neural network for event detection. In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pp. 66–71 (2016)
7. Zheng, S., Cao, W., Xu, W., et al.: Doc2EDAG: an end-to-end document-level framework for Chinese financial event extraction. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pp. 337–346 (2019)
8. Devlin, J., Chang, M., Lee, K., et al.: BERT: pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics, pp. 4171–4186 (2019)
9. Vaswani, A., Shazeer, N., Parmar, N., et al.: Attention is all you need. In: Advances in Neural Information Processing Systems, pp. 6000–6010 (2017)
10. Kim, Y.: Convolutional neural networks for sentence classification. In: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pp. 1746–1751 (2014)
11. Lafferty, J., McCallum, A., Pereira, F.: Conditional random fields: probabilistic models for segmenting and labeling sequence data. In: Proceedings of the 18th International Conference on Machine Learning, pp. 282–289 (2001)
12. Lample, G., Ballesteros, M., Subramanian, S., et al.: Neural architectures for named entity recognition. In: Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics, pp. 260–270 (2016)
Metadata
Title
Document-Level Event Subject Pair Recognition
Authors
Zhenyu Hu
Ming Liu
Yin Wu
Jiexin Xu
Bing Qin
JinLong Li
Copyright Year
2020
DOI
https://doi.org/10.1007/978-3-030-60450-9_23
