
25-06-2018

A Novel Deep Density Model for Unsupervised Learning

Authors: Xi Yang, Kaizhu Huang, Rui Zhang, John Y. Goulermas

Published in: Cognitive Computation | Issue 6/2019


Abstract

Density models are fundamental in machine learning and have found widespread application in practical cognitive modeling tasks and learning problems. In this work, we introduce a novel deep density model, referred to as deep mixtures of factor analyzers with common loadings (DMCFA), together with an efficient greedy layer-wise unsupervised learning algorithm. In each layer, the model employs a mixture of factor analyzers whose components share a common loading matrix. The common loading can be viewed as a feature selection or dimensionality reduction matrix, which makes the model more physically interpretable. Importantly, sharing a common loading substantially reduces both the number of free parameters and the computational complexity. Consequently, inference and learning in DMCFA rely on a dramatically more succinct model, while the use of Gaussian priors preserves its flexibility in estimating the data density. We evaluate our model on five real datasets and compare it with three competitive models, namely mixtures of factor analyzers (MFA), MFA with common loadings (MCFA), and deep mixtures of factor analyzers (DMFA), as well as their collapsed counterparts. The results demonstrate the superiority of the proposed model in density estimation, clustering, and generation tasks.
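To make the role of the common loading concrete, the following is a minimal sketch of the single-layer building block, following the MCFA formulation of Baek et al. (reference 4); the symbols \(p\) (data dimensionality), \(q\) (latent dimensionality), and \(C\) (number of mixture components) are our notation, not taken verbatim from the paper. All \(C\) components share one loading matrix \(\mathbf{A}\), whereas an ordinary MFA fits a separate loading \(\mathbf{A}_c\) per component:

\[ p(\mathbf{x}) = \sum_{c=1}^{C} \pi_c \, \mathcal{N}\!\left(\mathbf{x} \,\middle|\, \mathbf{A}\boldsymbol{\xi}_c,\; \mathbf{A}\boldsymbol{\Omega}_c\mathbf{A}^{\top} + \mathbf{\Psi}\right) \quad \text{(MCFA)}, \]
\[ p(\mathbf{x}) = \sum_{c=1}^{C} \pi_c \, \mathcal{N}\!\left(\mathbf{x} \,\middle|\, \boldsymbol{\mu}_c,\; \mathbf{A}_c\mathbf{A}_c^{\top} + \mathbf{\Psi}_c\right) \quad \text{(MFA)}, \]

where \(\boldsymbol{\xi}_c\) and \(\boldsymbol{\Omega}_c\) are the mean and covariance of component \(c\) in the latent space, and \(\mathbf{\Psi}\) is the diagonal covariance of the independent noise. A rough free-parameter count (a sketch only, ignoring identifiability constraints; the function names are ours) shows where the claimed savings come from:

    def mfa_free_params(p: int, q: int, C: int) -> int:
        """Rough MFA count: C loadings of size p*q, C data-space means,
        C diagonal noise covariances, and C-1 free mixing weights."""
        return C * p * q + C * p + C * p + (C - 1)

    def mcfa_free_params(p: int, q: int, C: int) -> int:
        """Rough MCFA count: one shared p*q loading, C latent means (q each),
        C symmetric q-by-q latent covariances, one diagonal noise, C-1 weights."""
        return p * q + C * q + C * q * (q + 1) // 2 + p + (C - 1)

    # For example, with p=784, q=10, C=10:
    # mfa_free_params(784, 10, 10)  -> 94,089
    # mcfa_free_params(784, 10, 10) ->  9,283  (roughly a tenfold reduction)

Because the loading is shared, the count of loading parameters drops from \(Cpq\) to \(pq\), which is the dominant term whenever \(p\) is large.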


Footnotes
1
The greedy layer-wise algorithm trains a generative model with many layers of hidden variables.
 
2
Each component of the first layer can be divided into \(M_c\) sub-components; the number of sub-components need not be the same for every first-layer component.
 
3
The superscript indicates the layer to which a variable belongs. Since, in the second layer, the sub-components corresponding to a first-layer component share a common loading and a common variance of the independent noise, \(\mathbf {A}_{c}^{(2)}\) and \(\mathbf {{\Psi }}_{c}^{(2)}\) carry the subscript c. Here, \(d\) denotes the subspace dimensionality of the second layer, with \(d < q\); a sketch of the resulting two-layer generative process follows these footnotes.
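Putting footnotes 2 and 3 together, the following is a hedged sketch of the two-layer generative process, omitting the component-specific latent means and covariances for brevity; the noise symbols \(\boldsymbol{\epsilon}\) are our notation:

\[ \mathbf{x} = \mathbf{A}^{(1)} \mathbf{z}^{(1)} + \boldsymbol{\epsilon}^{(1)}, \qquad \boldsymbol{\epsilon}^{(1)} \sim \mathcal{N}\!\left(\mathbf{0}, \mathbf{\Psi}^{(1)}\right), \qquad \mathbf{z}^{(1)} \in \mathbb{R}^{q}, \]
\[ \mathbf{z}^{(1)} = \mathbf{A}_{c}^{(2)} \mathbf{z}^{(2)} + \boldsymbol{\epsilon}_{c}^{(2)}, \qquad \boldsymbol{\epsilon}_{c}^{(2)} \sim \mathcal{N}\!\left(\mathbf{0}, \mathbf{\Psi}_{c}^{(2)}\right), \qquad \mathbf{z}^{(2)} \in \mathbb{R}^{d},\; d < q. \]

All \(M_c\) sub-components attached to first-layer component \(c\) reuse the same \(\mathbf{A}_{c}^{(2)}\) and \(\mathbf{\Psi}_{c}^{(2)}\), which is where the second layer saves parameters relative to DMFA.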
 
Literature
1.
Adams RP, Wallach HM, Ghahramani Z. Learning the structure of deep sparse graphical models. In: Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics; 2010. p. 1–8.
2.
Arnold L, Ollivier Y. Layer-wise learning of deep generative models. CoRR arXiv:1212.1524; 2012.
3.
Baek J, McLachlan GJ. Mixtures of common t-factor analyzers for clustering high-dimensional microarray data. Bioinformatics 2011;27(9):1269–1276.
4.
Baek J, McLachlan GJ, Flack LK. Mixtures of factor analyzers with common factor loadings: applications to the clustering and visualization of high-dimensional data. IEEE Trans Pattern Anal Mach Intell 2010;32(7):1298–1309.
5.
Bengio Y. Learning deep architectures for AI. Found Trends Mach Learn 2009;2(1):1–127.
6.
Chen B, Polatkan G, Sapiro G, Dunson DB, Carin L. The hierarchical beta process for convolutional factor analysis and deep learning. In: Proceedings of the 28th International Conference on Machine Learning; 2011. p. 361–368.
7.
Everett B. An introduction to latent variable models. Springer Science & Business Media; 2013.
8.
Ghahramani Z. Probabilistic machine learning and artificial intelligence. Nature 2015;521(7553):452.
10.
Hinton GE, Osindero S, Teh YW. A fast learning algorithm for deep belief nets. Neural Comput 2006;18(7):1527–1554.
11.
Hinton GE, Salakhutdinov RR. Reducing the dimensionality of data with neural networks. Science 2006;313(5786):504–507.
12.
Jiang Z, Zheng Y, Tan H, Tang B, Zhou H. Variational deep embedding: an unsupervised and generative approach to clustering. In: Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence; 2017. p. 1965–1972.
13.
Johnson B. High resolution urban land cover classification using a competitive multi-scale object-based approach. Remote Sens Lett 2013;4(2):131–140.
14.
Johnson B, Xie Z. Classifying a high resolution image of an urban area using super-object information. ISPRS J Photogramm Remote Sens 2013;83:40–49.
15.
Kung SY, Mak MW, Lin SH. Biometric authentication: a machine learning approach, chap. Expectation-Maximization theory. Upper Saddle River: Prentice Hall Professional Technical Reference; 2005.
16.
LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proc IEEE 1998;86(11):2278–2324.
17.
Likas A, Vlassis N, Verbeek JJ. The global k-means clustering algorithm. Pattern Recogn 2003;36(2):451–461.
18.
McLachlan G, Krishnan T. The EM algorithm and extensions, vol. 382. Wiley; 2007.
19.
McLachlan GJ, Peel D. Mixtures of factor analyzers. In: International Conference on Machine Learning (ICML); 2000. p. 599–606.
20.
Nene SA, Nayar SK, Murase H. Columbia Object Image Library (COIL-20). Technical Report CUCS-005-96; 1996.
23.
Salakhutdinov R, Mnih A, Hinton GE. Restricted Boltzmann machines for collaborative filtering. In: Proceedings of the Twenty-Fourth International Conference on Machine Learning (ICML); 2007. p. 791–798.
24.
Tang Y, Salakhutdinov R, Hinton GE. Deep mixtures of factor analysers. In: Proceedings of the 29th International Conference on Machine Learning (ICML); 2012.
25.
Tortora C, McNicholas PD, Browne RP. A mixture of generalized hyperbolic factor analyzers. Adv Data Anal Classif 2016;10(4):423–440.
26.
Wang W. Mixtures of common factor analyzers for high-dimensional data with missing information. J Multivar Anal 2013;117:120–133.
27.
Wei H, Dong Z. V4 neural network model for shape-based feature extraction and object discrimination. Cogn Comput 2015;7(6):753–762.
29.
Yang X, Huang K, Goulermas JY, Zhang R. Joint learning of unsupervised dimensionality reduction and Gaussian mixture model. Neural Process Lett 2017;45(3):791–806.
30.
Yang X, Huang K, Zhang R. Deep mixtures of factor analyzers with common loadings: a novel deep generative approach to clustering. In: Neural Information Processing, 24th International Conference, ICONIP; 2017.
31.
Zeng N, Wang Z, Zhang H, Liu W, Alsaadi FE. Deep belief networks for quantitative analysis of a gold immunochromatographic strip. Cogn Comput 2016;8(4):684–692.
32.
Zhang J, Ding S, Zhang N, Xue Y. Weight uncertainty in Boltzmann machine. Cogn Comput 2016;8(6):1064–1073.
33.
Zheng Y, Cai Y, Zhong G, Chherawala Y, Shi Y, Dong J. Stretching deep architectures for text recognition. In: Document Analysis and Recognition (ICDAR), 13th International Conference. IEEE; 2015. p. 236–240.
34.
Zhong G, Yan S, Huang K, Cai Y, Dong J. Reducing and stretching deep convolutional activation features for accurate image classification. Cogn Comput 2018;10(1):179–186.
Metadata
Title
A Novel Deep Density Model for Unsupervised Learning
Authors
Xi Yang
Kaizhu Huang
Rui Zhang
John Y. Goulermas
Publication date
25-06-2018
Publisher
Springer US
Published in
Cognitive Computation / Issue 6/2019
Print ISSN: 1866-9956
Electronic ISSN: 1866-9964
DOI
https://doi.org/10.1007/s12559-018-9566-9
