2021 | OriginalPaper | Chapter

Decoupling Sparsity and Smoothness in Dirichlet Belief Networks

Authors: Yaqiong Li, Xuhui Fan, Ling Chen, Bin Li, Scott A. Sisson

Published in: Machine Learning and Knowledge Discovery in Databases. Research Track

Publisher: Springer International Publishing

Abstract

The Dirichlet Belief Network (DirBN) has been proposed as a promising deep generative model that uses Dirichlet distributions to form layer-wise connections and thereby construct a multi-stochastic layered deep architecture. However, the DirBN cannot simultaneously achieve both sparsity, whereby each generated latent distribution places weight on only a subset of components, and smoothness, which requires that the posterior distribution not be dominated by the data. To address this limitation, we introduce the sparse and smooth Dirichlet Belief Network (ssDirBN), which achieves both sparsity and smoothness simultaneously and thereby increases modelling flexibility over the DirBN. This gain is achieved by introducing binary variables that indicate whether each entity's latent distribution at each layer uses a particular component. As a result, each latent distribution may use only a subset of components in each layer, and smoothness is enforced on that subset. Further modifications to the model are made to resolve issues caused by introducing these binary variables. Extensive experimental results on real-world data show significant performance improvements of ssDirBN over state-of-the-art models in terms of both enhanced model predictions and reduced model complexity.
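To make the mechanism described above concrete, the following is a minimal generative sketch (Python/NumPy) of the core idea: per-entity binary indicators select which components are active at each layer (sparsity), and a Dirichlet draw then spreads weight smoothly over the active subset only (smoothness). This is not the authors' exact model: the names (n_entities, n_components, pi, alpha) and the fixed symmetric concentration are illustrative assumptions, and in the actual DirBN the Dirichlet parameters at each layer would be propagated from the previous layer's latent distributions rather than held fixed.

```python
# Illustrative sketch only: selector-plus-Dirichlet construction, not the ssDirBN inference scheme.
import numpy as np

rng = np.random.default_rng(0)

n_entities, n_components, n_layers = 5, 8, 2
pi = 0.4      # assumed prior probability that an entity uses a given component
alpha = 1.0   # assumed symmetric Dirichlet concentration over the active subset

latent = []
for layer in range(n_layers):
    theta = np.zeros((n_entities, n_components))
    for i in range(n_entities):
        # Sparsity: binary indicators choose the subset of components this entity may use.
        b = rng.binomial(1, pi, size=n_components)
        if b.sum() == 0:
            b[rng.integers(n_components)] = 1   # guard: keep at least one component active
        active = np.flatnonzero(b)
        # Smoothness: a Dirichlet draw spreads weight over the active subset only.
        theta[i, active] = rng.dirichlet(alpha * np.ones(active.size))
    latent.append(theta)

print(latent[0].round(3))  # each row sums to 1 but is non-zero only on its selected subset
```

The point of the decoupling in this sketch is that sparsity is controlled by the selector variables rather than by driving the Dirichlet concentration towards zero, so the concentration remains free to enforce smoothness on the selected components.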

Metadata
Title
Decoupling Sparsity and Smoothness in Dirichlet Belief Networks
Authors
Yaqiong Li
Xuhui Fan
Ling Chen
Bin Li
Scott A. Sisson
Copyright Year
2021
DOI
https://doi.org/10.1007/978-3-030-86520-7_10
