
08-02-2025 | Regular Paper

K-Bloom: unleashing the power of pre-trained language models in extracting knowledge graph with predefined relations

Authors: Trung Vo, Son T. Luu, Le-Minh Nguyen

Published in: Knowledge and Information Systems


Abstract

Pre-trained language models have become popular for natural language processing tasks, but their inner workings and knowledge acquisition processes remain unclear. To address this issue, we introduce K-Bloom, a refined search-and-score mechanism tailored for seed-guided exploration of pre-trained language models that ensures both accuracy and efficiency in extracting relevant entity pairs and relationships. Specifically, our crawling procedure is divided into two sub-tasks: starting from a few seed entity pairs, which minimizes the need for extensive manual effort or predefined knowledge, we expand the knowledge graph with new entity pairs around these seeds. To evaluate the effectiveness of our proposed model, we conducted experiments on two general-domain datasets. The resulting knowledge graphs serve as symbolic representations of the source pre-trained language models, providing valuable insights into their knowledge capacities. Additionally, they deepen our understanding of pre-trained language models' capabilities when evaluated automatically with large language models. The experimental results demonstrate that our method outperforms the baseline approach by up to 5.62% in accuracy across various settings of the two benchmarks. We believe our approach offers a scalable and flexible solution for knowledge graph construction that can be applied to different domains and novel contexts.
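The paper's exact search-and-score procedure is not reproduced here. As a rough illustration of the underlying idea of seed-guided exploration, the sketch below probes a masked language model with a relation prompt built around a seed head entity and ranks candidate tail entities by the model's own token probabilities. The model choice (bert-base-uncased), the expand_seed helper, and the prompt template are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of seed-guided PLM probing (not the K-Bloom release):
# given a seed head entity and a relation prompt with a [MASK] slot,
# ask a masked LM for candidate tails and rank them by its softmax scores.
from transformers import pipeline

# Assumption: any fill-mask model works here; BERT-base is used as an example.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def expand_seed(head: str, template: str, top_k: int = 5):
    """Propose scored tail entities for one (head, relation) seed.

    `template` is a hypothetical relation prompt containing "{head}"
    and the model's "[MASK]" placeholder.
    """
    prompt = template.format(head=head)
    # Each candidate dict carries the LM's probability for the filled token,
    # standing in for the paper's search-and-score ranking.
    candidates = fill_mask(prompt, top_k=top_k)
    return [(c["token_str"].strip(), c["score"]) for c in candidates]

if __name__ == "__main__":
    for tail, score in expand_seed("Paris", "{head} is the capital of [MASK]."):
        print(f"{tail}\t{score:.4f}")
```

In this reading, each newly accepted (head, tail) pair becomes a fresh seed, so the graph "blooms" outward from a handful of initial pairs rather than requiring a predefined entity inventory.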


Metadata
Title
K-Bloom: unleashing the power of pre-trained language models in extracting knowledge graph with predefined relations
Authors
Trung Vo
Son T. Luu
Le-Minh Nguyen
Publication date
08-02-2025
Publisher
Springer London
Published in
Knowledge and Information Systems
Print ISSN: 0219-1377
Electronic ISSN: 0219-3116
DOI
https://doi.org/10.1007/s10115-025-02345-1