Published in: Neural Computing and Applications 14/2024

20.02.2024 | Original Article

Detecting fake information with knowledge-enhanced AutoPrompt

Authors: Xun Che, Gang Yang, Yadang Chen, Qianmu Li


Abstract

The rapid growth of fake news on the Internet makes it harder to access authentic information, so fake news detection plays a crucial role in filtering out false content and improving information quality. Practical deployment, however, faces challenges such as high annotation costs, limited labeled samples, poor training results, and weak model generalization. To tackle these issues, we propose knowledge-enhanced AutoPrompt (KEAP) for fake news detection. KEAP uses prompt templates generated by the T5 model to reformulate fake news detection as a prompt learning task, and incorporates external entity knowledge to strengthen detection. Carefully designed prompts activate the model's latent knowledge, improving both performance in low-resource scenarios and generalization. Experiments on the GossipCop and PolitiFact datasets demonstrate that prompt learning outperforms existing methods without requiring extra text at test time. KEAP achieves an average F1-score improvement of 2.79% over state-of-the-art methods in few-shot settings.
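The reformulation described in the abstract can be sketched as follows: a news item is wrapped in a cloze-style template together with linked-entity knowledge, and a masked language model's prediction at the mask position is mapped back to a class label through a verbalizer. This is a minimal illustration only; the template, separator tokens, label words, and `build_prompt`/`predict_label` helpers are hypothetical, since KEAP's actual templates are generated automatically by T5 and its entity knowledge comes from an external knowledge source.

```python
# Sketch of prompt-based fake news detection (hypothetical template/verbalizer,
# not KEAP's actual T5-generated prompts).

def build_prompt(news_text: str, entities: list[str]) -> str:
    """Wrap a news item and its linked entities in a cloze-style template."""
    knowledge = "; ".join(entities)
    return f"{news_text} [SEP] Entities: {knowledge} [SEP] This news is [MASK]."

# Verbalizer: label words a masked LM might emit at [MASK] -> class ids.
VERBALIZER = {"real": 0, "fake": 1}

def predict_label(mask_word: str) -> int:
    """Map the word predicted at the [MASK] position back to a class label."""
    return VERBALIZER[mask_word]

prompt = build_prompt("Celebrity X secretly married on Mars.",
                      ["Celebrity X", "Mars"])
# A masked LM scoring the label words at [MASK] would then decide the class,
# e.g. predict_label("fake") -> 1.
```

In a full pipeline, the masked LM scores only the verbalizer's label words at the mask position, so the classifier reuses the pretrained model's language-modeling head rather than a newly initialized one, which is what makes the approach effective with few labeled examples.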


Metadata
Title
Detecting fake information with knowledge-enhanced AutoPrompt
Authors
Xun Che
Gang Yang
Yadang Chen
Qianmu Li
Publication date
20.02.2024
Publisher
Springer London
Published in
Neural Computing and Applications / Issue 14/2024
Print ISSN: 0941-0643
Electronic ISSN: 1433-3058
DOI
https://doi.org/10.1007/s00521-024-09491-7
