27-04-2024 | Original Article

Applying Kumaraswamy distribution on stick-breaking process: a Dirichlet neural topic model approach

Authors: Jihong Ouyang, Teng Wang, Jingyue Cao, Yiming Wang

Published in: Neural Computing and Applications

Abstract

In recent years, neural topic modeling has attracted increasing attention for its ability to generate coherent topics within flexible deep neural architectures. However, the Dirichlet distribution widely used in shallow topic models is difficult to reparameterize, so most existing neural topic models instead place a Gaussian prior on the topic proportions. The Gaussian lacks the sparsity of the Dirichlet, which limits a model's topic extraction ability. To address this issue, we propose a novel neural topic model that approximates the Dirichlet prior with the reparameterizable Kumaraswamy distribution, named the Kumaraswamy Neural Topic Model (KNTM). Specifically, we adopt the stick-breaking process for posterior inference, with the Kumaraswamy distribution as the base distribution. Furthermore, to capture dependencies among topics, we propose the Kumaraswamy Recurrent Neural Topic Model (KRNTM), based on a recurrent stick-breaking construction, which ensures that the model still generates coherent topical words in high-dimensional topic spaces. We evaluated our method on five prevalent benchmark datasets against six Dirichlet-approximating neural topic models; KNTM achieves the lowest perplexity, and KRNTM performs best on topic coherence and topic uniqueness. Qualitative analysis of the top topical words verifies that our models extract more semantically coherent topics than state-of-the-art models, further demonstrating the method's effectiveness. This work contributes to the broader application of VAEs with Dirichlet priors.
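The key mechanism the abstract describes, drawing reparameterizable Kumaraswamy samples and mapping them onto the simplex with a stick-breaking construction, can be sketched in a few lines. The snippet below is a minimal illustrative sketch, not the authors' implementation: in KNTM the shape parameters `a` and `b` would be predicted by an inference network, and KRNTM would emit the stick fractions sequentially with a recurrent network; both are simplified to fixed values here.

```python
import numpy as np

def sample_kumaraswamy(a, b, rng):
    """Draw Kumaraswamy(a, b) samples on (0, 1) via the closed-form
    inverse CDF: x = (1 - (1 - u)^(1/b))^(1/a), with u ~ Uniform(0, 1).
    The transform is differentiable in (a, b), which is what makes the
    Kumaraswamy reparameterizable where the Beta/Dirichlet is not.
    """
    u = rng.uniform(size=np.shape(a))
    return (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)

def stick_breaking(v):
    """Map K-1 stick fractions v in (0, 1) to a point on the K-simplex:
    pi_k = v_k * prod_{j<k}(1 - v_j), with the last piece taking the
    remaining stick so the proportions sum to 1.
    """
    remainder = np.concatenate(([1.0], np.cumprod(1.0 - v)))
    return np.append(v, 1.0) * remainder

# Hypothetical shape parameters; an encoder network would produce these.
rng = np.random.default_rng(0)
K = 10
a = np.full(K - 1, 1.0)
b = np.full(K - 1, 5.0)
theta = stick_breaking(sample_kumaraswamy(a, b, rng))
assert np.isclose(theta.sum(), 1.0)  # valid document-topic proportions
```

Because both functions are smooth in `a` and `b`, gradients flow through `theta`, so the construction drops into a standard VAE training loop; how the paper forms the exact variational objective against the Dirichlet-like prior is detailed in the article itself.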

Metadata
Title: Applying Kumaraswamy distribution on stick-breaking process: a Dirichlet neural topic model approach
Authors: Jihong Ouyang, Teng Wang, Jingyue Cao, Yiming Wang
Publication date: 27-04-2024
Publisher: Springer London
Published in: Neural Computing and Applications
Print ISSN: 0941-0643
Electronic ISSN: 1433-3058
DOI: https://doi.org/10.1007/s00521-024-09783-y
