
2024 | Original Paper | Book Chapter

Question Answering Systems Based on Pre-trained Language Models: Recent Progress

Authors: Xudong Luo, Ying Luo, Binxia Yang

Published in: Intelligent Information Processing XII

Publisher: Springer Nature Switzerland


Abstract

Although ChatGPT, a Pre-trained Language Model (PLM), has been highly successful as a Question-Answering System (QAS), PLM-based QASs still merit further study. In this paper, we survey state-of-the-art systems of this kind, identify the issues that current researchers are concerned with, explore various PLM-based methods for addressing them, and compare their pros and cons. We also discuss the datasets used for fine-tuning the corresponding PLMs and for evaluating these PLM-based methods. Moreover, we summarise the criteria for evaluating these methods and compare their performance against these criteria. Finally, based on our analysis of the state-of-the-art PLM-based methods for QA, we identify some challenges for future research.
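As context for the kind of PLM-based QA the chapter surveys, the following is a minimal illustrative sketch (not taken from the chapter) of extractive question answering with a PLM fine-tuned on SQuAD, using the Hugging Face Transformers library; the model name and example texts are assumptions chosen for illustration.

```python
# Minimal sketch of PLM-based extractive QA (illustrative only, not the chapter's method).
from transformers import pipeline

# Load a BERT-family model fine-tuned on SQuAD for extractive question answering.
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

context = (
    "Pre-trained language models such as BERT, GPT-3 and T5 can be fine-tuned on "
    "question-answering datasets such as SQuAD to build QA systems."
)
result = qa(
    question="Which dataset are the models fine-tuned on?",
    context=context,
)

# The pipeline returns the answer span, its character offsets, and a confidence score.
print(result["answer"], result["start"], result["end"], result["score"])
```

In this extractive setting the model selects an answer span from the given passage; generative PLMs such as GPT-3 or T5, also covered by the survey, instead generate the answer text directly.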


Metadata
Title
Question Answering Systems Based on Pre-trained Language Models: Recent Progress
Authors
Xudong Luo
Ying Luo
Binxia Yang
Copyright Year
2024
DOI
https://doi.org/10.1007/978-3-031-57808-3_13