Published in: Soft Computing 11/2020

30-05-2019 | Focus

Auto-encoder-based generative models for data augmentation on regression problems

Author: Hiroshi Ohno

Abstract

Auto-encoder-based generative models have recently been applied with great success in image processing. However, few studies address how such models can realize continuous input–output mappings for regression problems. A shortage of training data plagues regression problems; this is a notable issue in machine learning generally, and it limits the application of machine learning in materials science. We address the small-data issue for regression problems by using variational auto-encoders (VAEs), a popular and powerful class of auto-encoder-based generative models, for data augmentation. Generative auto-encoder models such as VAEs use multilayer neural networks to generate sample data. In this study, we demonstrate the effectiveness of multi-task learning (joint auto-encoding and regression) for regression problems. We conducted experiments on seven benchmark datasets and on one ionic conductivity dataset as a materials-science application. The experimental results show that multi-task learning for VAEs improved the generalization performance of a multivariable linear regression model trained with the augmented data.
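The multi-task objective described in the abstract can be sketched as a standard VAE loss (reconstruction plus KL divergence) combined with a regression loss on the latent code. The sketch below uses toy linear encoders and decoders with random weights; all shapes, weight initializations, and the weighting factor `alpha` are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_mu, W_logvar):
    """Linear encoder producing the mean and log-variance of q(z|x)."""
    return x @ W_mu, x @ W_logvar

def reparameterize(mu, logvar, rng):
    """Sample z = mu + sigma * eps (the reparameterization trick)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def multitask_loss(x, y, W_mu, W_logvar, W_dec, w_reg, alpha=1.0):
    """VAE loss (reconstruction + KL) plus a regression term on z."""
    mu, logvar = encode(x, W_mu, W_logvar)
    z = reparameterize(mu, logvar, rng)
    x_hat = z @ W_dec                           # linear decoder
    recon = np.mean((x - x_hat) ** 2)           # reconstruction error
    # KL divergence between q(z|x) and the standard normal prior
    kl = -0.5 * np.mean(1 + logvar - mu**2 - np.exp(logvar))
    y_hat = z @ w_reg                           # regression head on latent z
    reg = np.mean((y - y_hat) ** 2)             # regression task loss
    return recon + kl + alpha * reg

# Toy data and weights (5 input features, 2 latent dims, 8 samples)
d, k, n = 5, 2, 8
x = rng.standard_normal((n, d))
y = rng.standard_normal(n)
W_mu = rng.standard_normal((d, k)) * 0.1
W_logvar = rng.standard_normal((d, k)) * 0.1
W_dec = rng.standard_normal((k, d)) * 0.1
w_reg = rng.standard_normal(k) * 0.1

loss = multitask_loss(x, y, W_mu, W_logvar, W_dec, w_reg)
```

In practice the encoder, decoder, and regression head would be trained jointly by gradient descent, after which samples drawn from the decoder provide the augmented training data for the downstream regression model.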

Metadata
Title
Auto-encoder-based generative models for data augmentation on regression problems
Author
Hiroshi Ohno
Publication date
30-05-2019
Publisher
Springer Berlin Heidelberg
Published in
Soft Computing / Issue 11/2020
Print ISSN: 1432-7643
Electronic ISSN: 1433-7479
DOI
https://doi.org/10.1007/s00500-019-04094-0
