
2021 | Original Paper | Book Chapter

EviDR: Evidence-Emphasized Discrete Reasoning for Reasoning Machine Reading Comprehension

Authors: Yongwei Zhou, Junwei Bao, Haipeng Sun, Jiahui Liang, Youzheng Wu, Xiaodong He, Bowen Zhou, Tiejun Zhao

Published in: Natural Language Processing and Chinese Computing

Publisher: Springer International Publishing


Abstract

Reasoning machine reading comprehension (R-MRC) aims to answer complex questions that require discrete reasoning over text. To support discrete reasoning, evidence, typically the concise textual fragments that describe question-related facts such as topic entities and attribute values, provides crucial clues from question to answer. However, previous end-to-end methods that achieve state-of-the-art performance rarely place enough emphasis on modeling evidence, missing an opportunity to further improve the model's reasoning ability for R-MRC. To alleviate this issue, we propose an Evidence-emphasized Discrete Reasoning approach (EviDR), in which sentence- and clause-level evidence is first detected via distant supervision and then used to drive a reasoning module, implemented as a relational heterogeneous graph convolutional network, that derives answers. Extensive experiments on the DROP (discrete reasoning over paragraphs) dataset demonstrate the effectiveness of the proposed approach, and qualitative analysis verifies the capability of evidence-emphasized discrete reasoning for R-MRC. (Code is released at https://github.com/JD-AI-Research-NLP/EviDR.)
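The abstract's reasoning module is a relational heterogeneous graph convolutional network. As a rough illustration of the underlying mechanism (not the authors' implementation; node roles, relation names, and dimensions below are invented for the example), one relation-typed graph convolution step aggregates neighbor messages with a separate weight matrix per relation type, plus a self-loop:

```python
import numpy as np

def rgcn_layer(h, edges, weights, w_self):
    """One R-GCN-style relational graph convolution step.

    h       : (n, d) node feature matrix
    edges   : dict mapping relation name -> list of directed (src, dst) pairs
    weights : dict mapping relation name -> (d, d) weight matrix
    w_self  : (d, d) self-loop weight matrix
    """
    n, _ = h.shape
    out = h @ w_self  # self-connection keeps each node's own features
    for rel, edge_list in edges.items():
        w_r = weights[rel]
        # in-degree per node within this relation, for mean normalization
        in_deg = np.zeros(n)
        for _, dst in edge_list:
            in_deg[dst] += 1
        for src, dst in edge_list:
            out[dst] += (h[src] @ w_r) / in_deg[dst]
    return np.maximum(out, 0.0)  # ReLU nonlinearity

# Toy graph: two hypothetical evidence nodes plus a question node,
# connected by two made-up relation types.
rng = np.random.default_rng(0)
h = rng.normal(size=(3, 4))
edges = {
    "same_sentence": [(0, 1), (1, 0)],
    "question_to_evidence": [(2, 0), (2, 1)],
}
weights = {rel: rng.normal(size=(4, 4)) for rel in edges}
h_next = rgcn_layer(h, edges, weights, rng.normal(size=(4, 4)))
print(h_next.shape)  # (3, 4)
```

Stacking such layers lets information flow between evidence fragments across multiple relation types before an answer-prediction head reads out the updated node states.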


References
1. Andor, D., He, L., Lee, K., Pitler, E.: Giving BERT a calculator: finding operations and arguments with reading comprehension. In: EMNLP-IJCNLP (2019)
2. Chen, K., et al.: Question directed graph attention network for numerical reasoning over text. In: EMNLP (2020)
3. Chen, X., Liang, C., Yu, A.W., Zhou, D., Song, D., Le, Q.V.: Neural symbolic reader: scalable integration of distributed and symbolic representations for reading comprehension. In: ICLR (2020)
4. Choi, E., et al.: QuAC: question answering in context. In: EMNLP (2018)
5. Clark, C., Gardner, M.: Simple and effective multi-paragraph reading comprehension. In: ACL (2018)
6. Clark, K., Luong, M., Le, Q.V., Manning, C.D.: ELECTRA: pre-training text encoders as discriminators rather than generators. In: ICLR (2020)
7. Dua, D., Wang, Y., Dasigi, P., Stanovsky, G., Singh, S., Gardner, M.: DROP: a reading comprehension benchmark requiring discrete reasoning over paragraphs. In: NAACL (2019)
8. Geva, M., Gupta, A., Berant, J.: Injecting numerical reasoning skills into language models. In: ACL (2020)
9. Glass, M., et al.: Span selection pre-training for question answering. In: ACL (2020)
10. Gupta, N., Lin, K., Roth, D., Singh, S., Gardner, M.: Neural module networks for reasoning over text. In: ICLR (2020)
11. He, W., et al.: DuReader: a Chinese machine reading comprehension dataset from real-world applications. In: MRQA@ACL (2018)
12. Hu, M., Peng, Y., Huang, Z., Li, D.: A multi-type multi-span network for reading comprehension that requires discrete reasoning. In: EMNLP-IJCNLP (2019)
13. Krishnamurthy, J., Dasigi, P., Gardner, M.: Neural semantic parsing with type constraints for semi-structured tables. In: EMNLP (2017)
14. Lai, G., Xie, Q., Liu, H., Yang, Y., Hovy, E.: RACE: large-scale reading comprehension dataset from examinations. In: EMNLP (2017)
15. Lan, Z., Chen, M., Goodman, S., Gimpel, K., Sharma, P., Soricut, R.: ALBERT: a lite BERT for self-supervised learning of language representations. In: ICLR (2020)
16. Li, S., et al.: HopRetriever: retrieve hops over Wikipedia to answer complex questions. In: AAAI (2021)
18. Nguyen, T., et al.: MS MARCO: a human generated machine reading comprehension dataset. In: CoCo@NIPS (2016)
19. Rajpurkar, P., Zhang, J., Lopyrev, K., Liang, P.: SQuAD: 100,000+ questions for machine comprehension of text. In: EMNLP (2016)
20. Ran, Q., Lin, Y., Li, P., Zhou, J., Liu, Z.: NumNet: machine reading comprehension with numerical reasoning. In: EMNLP-IJCNLP (2019)
21. Reddy, S., Chen, D., Manning, C.D.: CoQA: a conversational question answering challenge. TACL (2019)
22. Segal, E., Efrat, A., Shoham, M., Globerson, A., Berant, J.: A simple and effective model for answering multi-span questions. In: EMNLP (2020)
23. Talmor, A., Herzig, J., Lourie, N., Berant, J.: CommonsenseQA: a question answering challenge targeting commonsense knowledge. In: NAACL (2019)
24. Yang, Z., et al.: HotpotQA: a dataset for diverse, explainable multi-hop question answering. In: EMNLP (2018)
Metadata
Title
EviDR: Evidence-Emphasized Discrete Reasoning for Reasoning Machine Reading Comprehension
Authors
Yongwei Zhou
Junwei Bao
Haipeng Sun
Jiahui Liang
Youzheng Wu
Xiaodong He
Bowen Zhou
Tiejun Zhao
Copyright Year
2021
DOI
https://doi.org/10.1007/978-3-030-88480-2_35