Published in: The Journal of Supercomputing 18/2023

15-06-2023

Graph-enhanced multi-answer summarization under question-driven guidance

Authors: Bing Li, Peng Yang, Zhongjian Hu, Yuankang Sun, Meng Yi


Abstract

Multi-answer summarization for question-based queries in community Q&A requires comprehensive, in-depth analysis of lengthy and extensive information to generate concise yet complete answer summaries. Guided by the question, capturing the relationships among candidate answers significantly helps detect salient information across multiple answers and generate an overall coherent summary. In this paper, we propose a new Graph-enhanced Multi-answer Summarization under Question-driven Guidance model that explicitly handles the salience and redundancy of answer information. Specifically, the model incorporates a pre-trained model to learn linguistic features during encoding and emphasizes the guiding role of the question in answer generation: the question explicitly constrains each individual answer, so that the model more accurately identifies the information closely related to the question and allocates it more attention. Moreover, we encode a question-driven answer graph to model the relationships between answers and remove redundant information. Finally, the graph-encoded information guides summary generation in the decoding stage, ensuring that the summary is informative, fluent, and concise. Experimental results show that our proposed model brings substantial improvements over state-of-the-art baselines, achieving the best results on both community datasets, ELI5 and MEDIQA, which demonstrates its effectiveness.
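The pipeline sketched in the abstract (question-driven salience scoring, an answer graph that exposes redundancy, and graph-enhanced representations passed to the decoder) can be illustrated with a minimal NumPy sketch. Everything here is an illustrative assumption, not the authors' architecture: the cosine-similarity answer graph, the softmax salience weights, and the single unweighted message-passing step merely mimic the three stages the abstract describes.

```python
import numpy as np

def qg_graph_encode(question_vec, answer_vecs, sim_threshold=0.5):
    """Toy question-guided answer-graph encoding (illustrative only).

    1. Score each answer against the question (question-driven salience).
    2. Connect answers whose cosine similarity exceeds a threshold;
       edges mark potentially redundant answer pairs.
    3. One round of salience-weighted message passing yields
       graph-enhanced answer representations for a decoder to consume.
    """
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    n = len(answer_vecs)
    # Question-driven salience: softmax over question-answer similarity.
    scores = np.array([cos(question_vec, a) for a in answer_vecs])
    salience = np.exp(scores) / np.exp(scores).sum()

    # Answer graph: adjacency from pairwise answer similarity.
    adj = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and cos(answer_vecs[i], answer_vecs[j]) > sim_threshold:
                adj[i, j] = 1.0

    # One GCN-style aggregation step, weighting neighbors by salience
    # (no learned parameters; a real model would use trained layers).
    deg = adj.sum(axis=1, keepdims=True) + 1.0  # +1 acts as a self-loop
    A = np.stack(answer_vecs)
    enhanced = (A + adj @ (salience[:, None] * A)) / deg
    return salience, adj, enhanced
```

With a question vector and three answer vectors, two of which are near-duplicates, the sketch links the redundant pair in the graph and assigns the off-topic answer the lowest salience, which is the behavior the model's graph encoding is meant to exploit.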


Metadata
Title
Graph-enhanced multi-answer summarization under question-driven guidance
Authors
Bing Li
Peng Yang
Zhongjian Hu
Yuankang Sun
Meng Yi
Publication date
15-06-2023
Publisher
Springer US
Published in
The Journal of Supercomputing / Issue 18/2023
Print ISSN: 0920-8542
Electronic ISSN: 1573-0484
DOI
https://doi.org/10.1007/s11227-023-05457-z
