
2021 | Original Paper | Book Chapter

A Question-Answering System that Can Count

Authors: Abbas Saliimi Lokman, Mohamed Ariff Ameedeen, Ngahzaifa Ab. Ghani

Published in: Computational Science and Technology

Publisher: Springer Singapore


Abstract

This paper proposes a conceptual architectural design for a Question-Answering (QA) system that can solve the “counting” problem. The counting problem is the inability of a QA system to produce a numerical answer from a retrieved rationale (a span of a text passage) containing a list of items. For example, consider “How many items are on sale?” as the question and “Currently shampoo, soap and conditioner are on sale” as the rationale retrieved from the passage. Typically, the system will produce “shampoo, soap and conditioner” as the answer, while the ground-truth answer is “three”. In other words, the system is simply unable to perform the counting step needed to answer such questions correctly. To solve this problem, a QA system architecture with the following components is proposed: (1) a classifier that determines whether a given question requires a counting answer, (2) a classifier that determines whether the system’s current answer is non-numeric, and (3) a counting method that produces a numerical answer from the given rationale. Although presented as a whole system, the proposed architecture is modular: each component can operate independently, allowing each to be implemented separately by other systems. In essence, this paper intends to demonstrate a general idea of how the defined problem can be solved with a modular system, which hopefully also opens the door to more flexible enhancements in the future.
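To make the three-component pipeline concrete, here is a minimal rule-based sketch, not the authors' implementation: the function names, the “how many” pattern, and the comma/“and” splitting heuristic are all illustrative assumptions standing in for the paper's actual classifiers and counting method.

```python
import re

def needs_counting(question: str) -> bool:
    # Component 1 (assumed heuristic): does the question ask for a count?
    return bool(re.match(r"how many\b", question.strip().lower()))

def is_numeric_answer(answer: str) -> bool:
    # Component 2 (assumed heuristic): is the current answer already numeric,
    # either as digits or as a small number word?
    number_words = {"zero", "one", "two", "three", "four", "five",
                    "six", "seven", "eight", "nine", "ten"}
    token = answer.strip().lower()
    return token.isdigit() or token in number_words

def count_items(rationale_span: str) -> int:
    # Component 3 (assumed heuristic): count list items by splitting
    # the span on commas and the conjunction "and".
    parts = re.split(r",|\band\b", rationale_span)
    return len([p for p in parts if p.strip()])

def answer(question: str, extracted_span: str) -> str:
    # Modular wiring: counting is triggered only when the question
    # requires it and the extracted span is not already numeric.
    if needs_counting(question) and not is_numeric_answer(extracted_span):
        return str(count_items(extracted_span))
    return extracted_span
```

For the paper's running example, `answer("How many items are on sale?", "shampoo, soap and conditioner")` yields `"3"`, while a non-counting question passes the extracted span through unchanged. Each function mirrors one independently replaceable component of the proposed architecture.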


Metadata
Title
A Question-Answering System that Can Count
Authors
Abbas Saliimi Lokman
Mohamed Ariff Ameedeen
Ngahzaifa Ab. Ghani
Copyright Year
2021
Publisher
Springer Singapore
DOI
https://doi.org/10.1007/978-981-33-4069-5_6
