
2025 | Original Paper | Book Chapter

Did You Tell a Deadly Lie? Evaluating Large Language Models for Health Misinformation Identification

Authors: Surendrabikram Thapa, Kritesh Rauniyar, Hariram Veeramani, Aditya Shah, Imran Razzak, Usman Naseem

Published in: Web Information Systems Engineering – WISE 2024

Publisher: Springer Nature Singapore


Abstract

The rapid spread of health misinformation online poses significant challenges to public health, potentially leading to confusion, undermining trust in health authorities, and hindering effective health interventions. Large Language Models (LLMs) have shown promise in various natural language processing tasks, including misinformation detection. However, their effectiveness in identifying health-specific misinformation has not been extensively benchmarked. This study evaluates the performance of seven state-of-the-art LLMs (GPT-3.5, GPT-4, Gemini, Flan-T5 XL, Gemma, LLaMA-2, and Mistral) on the task of health misinformation detection across four datasets: Monkeypox-V1, Monkeypox-V2, COVID-19, and CoAID. The models were tested under five settings: zero-shot classification, 5-shot random examples, 10-shot random examples, 5-shot sampled examples, and 10-shot sampled examples. Performance was evaluated using the macro F1-score, and inter-model agreement was assessed using Cohen's Kappa and Fleiss' Kappa scores. By comprehensively benchmarking these LLMs, this study aims to determine which models excel in particular scenarios and to provide insights into their potential for combating health misinformation in online environments.
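The two evaluation measures named in the abstract can be sketched in plain Python. This is a minimal illustration of the metrics themselves, not the authors' evaluation code; in practice one would typically reach for `sklearn.metrics.f1_score(average="macro")` and `sklearn.metrics.cohen_kappa_score` (and `statsmodels` for Fleiss' Kappa across more than two models). The labels used below are hypothetical.

```python
from collections import Counter

def macro_f1(y_true, y_pred):
    """Macro F1: unweighted mean of per-class F1 scores,
    so a minority class ('fake') counts as much as the majority."""
    labels = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

def cohen_kappa(a, b):
    """Cohen's Kappa: agreement between two models' label sequences,
    corrected for the agreement expected by chance."""
    n = len(a)
    labels = sorted(set(a) | set(b))
    p_observed = sum(1 for x, y in zip(a, b) if x == y) / n
    ca, cb = Counter(a), Counter(b)
    p_expected = sum(ca[c] * cb[c] for c in labels) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical gold labels and one model's predictions:
gold = ["real", "fake", "real", "fake"]
pred = ["real", "fake", "fake", "fake"]
print(macro_f1(gold, pred))      # per-class F1 averaged over {real, fake}
print(cohen_kappa(gold, pred))   # chance-corrected agreement
```

Macro F1 is the natural choice here because misinformation datasets are usually class-imbalanced; Cohen's Kappa compares one pair of models, while Fleiss' Kappa generalizes the same idea to all seven models at once.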


Metadata
Copyright year: 2025
Publisher: Springer Nature Singapore
DOI: https://doi.org/10.1007/978-981-96-0576-7_29