31-05-2016 | Original Article

Research on denoising sparse autoencoder

Authors: Lingheng Meng, Shifei Ding, Yu Xue

Published in: International Journal of Machine Learning and Cybernetics | Issue 5/2017

Abstract

Autoencoders can learn the structure of data adaptively and represent it efficiently. These properties make them well suited to large volumes and varieties of data, and they avoid both the high cost of hand-designed features and the poor generalization such features often exhibit. Moreover, using autoencoders for feature extraction in deep learning can improve classification accuracy. However, autoencoders suffer from poor robustness and a tendency to overfit. To extract useful features while improving robustness and reducing overfitting, we study the denoising sparse autoencoder, which adds a corrupting operation and a sparsity constraint to the traditional autoencoder. The results suggest that the autoencoder variants discussed in this paper are closely related, and that the studied model extracts interesting features that reconstruct the original data well. All results indicate a promising approach to building deep models from the proposed autoencoder.
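
To make the described model concrete, below is a minimal NumPy sketch of a denoising sparse autoencoder: the input is corrupted with masking noise, encoded by a sigmoid hidden layer, and trained to reconstruct the clean input under a KL-divergence sparsity penalty on the average hidden activations. This is an illustrative sketch, not the authors' implementation; the masking-noise corruption, the hyperparameters (sparsity target rho, penalty weight beta, corruption level), and the squared-error reconstruction loss are assumptions made here for brevity.

```python
# Illustrative denoising sparse autoencoder (NumPy); hyperparameters are
# assumptions for this sketch, not the paper's exact configuration.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def corrupt(x, level):
    """Masking noise: randomly force a fraction `level` of inputs to zero."""
    return x * (rng.random(x.shape) >= level)

def dsae_step(x, W1, b1, W2, b2, rho=0.05, beta=3.0, level=0.3, lr=0.5):
    """One gradient step on reconstruction error plus a KL sparsity penalty."""
    m = x.shape[0]
    x_tilde = corrupt(x, level)               # corrupting operation
    a = sigmoid(x_tilde @ W1 + b1)            # sparse hidden code
    x_hat = sigmoid(a @ W2 + b2)              # reconstruction of the *clean* input
    rho_hat = a.mean(axis=0)                  # average activation per hidden unit
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    loss = 0.5 / m * np.sum((x_hat - x) ** 2) + beta * kl

    # Backpropagation for sigmoid units and squared-error reconstruction.
    d2 = (x_hat - x) / m * x_hat * (1 - x_hat)
    sparse_grad = beta / m * (-rho / rho_hat + (1 - rho) / (1 - rho_hat))
    d1 = (d2 @ W2.T + sparse_grad) * a * (1 - a)
    W2 -= lr * (a.T @ d2);       b2 -= lr * d2.sum(axis=0)
    W1 -= lr * (x_tilde.T @ d1); b1 -= lr * d1.sum(axis=0)
    return loss

# Toy usage: 64-dimensional inputs, 25 hidden units.
n_vis, n_hid = 64, 25
W1 = rng.normal(0.0, 0.1, (n_vis, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0.0, 0.1, (n_hid, n_vis)); b2 = np.zeros(n_vis)
x = rng.random((128, n_vis))
for _ in range(200):
    loss = dsae_step(x, W1, b1, W2, b2)
print(f"final loss: {loss:.4f}")
```

Note that setting level=0 recovers an ordinary sparse autoencoder and beta=0 a plain denoising autoencoder, which reflects the close relation among the autoencoder variants noted in the abstract.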

Metadata

Title: Research on denoising sparse autoencoder
Authors: Lingheng Meng, Shifei Ding, Yu Xue
Publication date: 31-05-2016
Publisher: Springer Berlin Heidelberg
Published in: International Journal of Machine Learning and Cybernetics, Issue 5/2017
Print ISSN: 1868-8071
Electronic ISSN: 1868-808X
DOI: https://doi.org/10.1007/s13042-016-0550-y
