2020 | OriginalPaper | Chapter

Unsupervisedly Learned Representations – Should the Quest Be Over?

Author: Daniel N. Nissani (Nissensohn)

Published in: Machine Learning, Optimization, and Data Science

Publisher: Springer International Publishing


Abstract

After four decades of research there still exists a classification accuracy gap of about 20% between our best Unsupervisedly Learned Representations methods and the accuracy rates achieved by intelligent animals. It may thus well be that we are looking in the wrong direction. A possible solution to this puzzle is presented. We demonstrate that Reinforcement Learning can learn representations which achieve the same accuracy as that of animals. Our main, modest contribution lies in the observations that: (a) when applied to a real-world environment, Reinforcement Learning does not require labels and may thus legitimately be considered Unsupervised Learning; and (b) in contrast, when Reinforcement Learning is applied in a simulated environment it does inherently require labels, and should thus generally be considered Supervised Learning. The corollary of these observations is that further search for competitive Unsupervised Learning paradigms which may be trained in simulated environments may be futile.
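To make observation (b) concrete, here is a minimal sketch of classification cast as Reinforcement Learning, written in Python (the author's own code for the simple setup is in Matlab; see footnote 4). The toy Gaussian data, the epsilon-greedy policy, and the reward-modulated update rule are illustrative assumptions, not the paper's actual construction; the point is only that the simulated environment's reward function cannot be evaluated without consulting a ground-truth label.

    import numpy as np

    rng = np.random.default_rng(0)

    N_CLASSES, DIM = 3, 8
    # Linear map serving as both representation and policy (illustrative only).
    W = rng.normal(scale=0.1, size=(N_CLASSES, DIM))

    def act(x, eps=0.1):
        # Epsilon-greedy action: the agent's class guess for input x.
        if rng.random() < eps:
            return int(rng.integers(N_CLASSES))
        return int(np.argmax(W @ x))

    def simulated_reward(action, label):
        # The crux of observation (b): a simulated environment can only
        # score the action by comparing it with a stored ground-truth
        # label, so supervision enters here even though the agent itself
        # never sees the label.
        return 1.0 if action == label else -1.0

    # Toy stand-in for (photograph, class) pairs: Gaussian clusters.
    means = rng.normal(size=(N_CLASSES, DIM))
    for step in range(5000):
        label = int(rng.integers(N_CLASSES))
        x = means[label] + 0.3 * rng.normal(size=DIM)
        a = act(x)
        r = simulated_reward(a, label)  # label consulted only inside the simulator
        W[a] += 0.01 * r * x            # reward-modulated update; no label used

In a real-world deployment the same agent code would run unchanged, but the reward would arrive from the environment's reaction rather than from a label comparison, which is exactly observation (a).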

Footnotes
1
See, e.g., Yann LeCun's interview in IEEE Spectrum, Feb. 2015: “…The bottom line is that the brain is much better than our model at doing unsupervised learning…”.
 
2
So that, like Moses on Mount Nebo, we may be able to see the Promised Land but will never reach it (Deuteronomy 34:4).
 
4
Matlab code for this simple setup will be made available by the author upon request.
 
5
We could in principle consider the case wherein the Environment itself predicts a label from the Dog’s photograph; this, however, requires the Environment to already be a trained classifier, which leads to an infinite regress.
 
Metadata
Title
Unsupervisedly Learned Representations – Should the Quest Be Over?
Author
Daniel N. Nissani (Nissensohn)
Copyright Year
2020
DOI
https://doi.org/10.1007/978-3-030-64580-9_29
