Published in: Progress in Artificial Intelligence 3/2020

16-07-2020 | Regular Paper

DMRAE: discriminative manifold regularized auto-encoder for sparse and robust feature learning

Authors: Nima Farajian, Peyman Adibi


Abstract

Although regularized over-complete auto-encoders have shown a great ability to extract meaningful representations from data and to reveal their underlying manifold, their unsupervised learning nature prevents class distinctions from being reflected in the representations. The present study aims to learn sparse, robust, and discriminative features through supervised manifold-regularized auto-encoders that preserve locality along the manifold directions around each data point and enhance between-class discrimination. A combination of a triplet-loss manifold regularizer and a novel denoising regularizer is injected into the objective function to generate features that are robust against perturbations perpendicular to the data manifold while remaining sensitive to variations along it. In addition, the sparsity ratio of the obtained representation adapts to the data distribution. Experimental results on 12 real-world classification problems show that the proposed method achieves better classification performance than several recently proposed related models.
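To make the described combination concrete, the following is a minimal sketch, assuming a PyTorch implementation, of how a reconstruction term, a denoising regularizer, and a triplet-loss manifold regularizer could be assembled into a single objective. The names (AutoEncoder, dmrae_loss), layer sizes, noise level, and weighting coefficients are illustrative assumptions, not the authors' formulation, and the adaptive sparsity term mentioned in the abstract is omitted here.

# Minimal sketch of a DMRAE-style objective (illustrative assumptions,
# not the authors' exact formulation).
import torch
import torch.nn as nn
import torch.nn.functional as F


class AutoEncoder(nn.Module):
    """Single-hidden-layer over-complete auto-encoder (hypothetical sizes)."""

    def __init__(self, input_dim=784, hidden_dim=1024):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.Sigmoid())
        self.decoder = nn.Linear(hidden_dim, input_dim)

    def forward(self, x):
        h = self.encoder(x)
        return h, self.decoder(h)


def dmrae_loss(model, x, x_pos, x_neg, noise_std=0.1,
               alpha=1.0, beta=0.1, margin=1.0):
    """Reconstruction + denoising regularizer + triplet manifold regularizer."""
    h_anchor, x_rec = model(x)

    # Reconstruction of the clean input.
    rec_loss = F.mse_loss(x_rec, x)

    # Denoising regularizer: the code of a perturbed input should stay close
    # to the code of the clean input (robustness to off-manifold noise).
    h_noisy, _ = model(x + noise_std * torch.randn_like(x))
    denoise_loss = F.mse_loss(h_noisy, h_anchor)

    # Triplet manifold regularizer: codes of same-class neighbours are pulled
    # together, codes of other-class points pushed at least `margin` apart.
    h_pos, _ = model(x_pos)
    h_neg, _ = model(x_neg)
    triplet_loss = F.triplet_margin_loss(h_anchor, h_pos, h_neg, margin=margin)

    return rec_loss + alpha * denoise_loss + beta * triplet_loss


if __name__ == "__main__":
    model = AutoEncoder()
    x, x_pos, x_neg = (torch.rand(32, 784) for _ in range(3))
    loss = dmrae_loss(model, x, x_pos, x_neg)
    loss.backward()
    print(float(loss))

The triplet term uses a same-class neighbour as the positive and an other-class sample as the negative, which is the usual way a triplet loss encodes between-class discrimination; how the triplets and the adaptive sparsity target are actually formed in DMRAE is specified in the full paper.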

Metadata
Title
DMRAE: discriminative manifold regularized auto-encoder for sparse and robust feature learning
Authors
Nima Farajian
Peyman Adibi
Publication date
16-07-2020
Publisher
Springer Berlin Heidelberg
Published in
Progress in Artificial Intelligence / Issue 3/2020
Print ISSN: 2192-6352
Electronic ISSN: 2192-6360
DOI
https://doi.org/10.1007/s13748-020-00211-5
