2020 | Original Paper | Book Chapter

Unsupervisedly Learned Representations – Should the Quest Be Over?

Author: Daniel N. Nissani (Nissensohn)

Published in: Machine Learning, Optimization, and Data Science

Publisher: Springer International Publishing

Abstract

After four decades of research there still exists a Classification accuracy gap of about 20% between our best Unsupervisedly Learned Representations methods and the accuracy rates achieved by intelligent animals. It may thus well be that we are looking in the wrong direction. A possible solution to this puzzle is presented. We demonstrate that Reinforcement Learning can learn representations which achieve the same accuracy as that of animals. Our main, modest contribution lies in the observations that: a. when applied to a real-world environment, Reinforcement Learning does not require labels and may thus legitimately be considered Unsupervised Learning; and b. in contrast, when Reinforcement Learning is applied in a simulated environment, it inherently requires labels and should thus generally be considered Supervised Learning. The corollary of these observations is that further search for competitive Unsupervised Learning paradigms that can be trained in simulated environments may be futile.
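
To make the two observations above concrete, the following minimal sketch (in Python rather than the Matlab of footnote 4, and purely illustrative rather than the paper's actual setup; all names are hypothetical) contrasts the two regimes with a toy contextual-bandit learner. The learner's update rule sees only observations, actions and rewards; in the simulated regime the environment must consult a ground-truth label to compute the reward, whereas in the real-world regime the reward is an external consequence and no label ever appears in the loop.

import numpy as np

rng = np.random.default_rng(0)
N_CLASSES, DIM = 3, 5
W = np.zeros((N_CLASSES, DIM))  # toy linear representation / policy weights

def act(x):
    # epsilon-greedy action selection over the learned scores
    if rng.random() < 0.1:
        return int(rng.integers(N_CLASSES))
    return int(np.argmax(W @ x))

def update(x, a, r, lr=0.1):
    # reward-driven update: uses only (observation, action, reward)
    W[a] += lr * (r - W[a] @ x) * x

def simulated_step():
    # Simulated environment: a ground-truth label is REQUIRED to emit
    # the reward, so the training loop is supervised in disguise.
    label = int(rng.integers(N_CLASSES))
    x = rng.normal(size=DIM) + label      # observation correlated with label
    a = act(x)
    r = 1.0 if a == label else 0.0        # reward computation needs the label
    update(x, a, r)

def real_world_step(observe, consequence):
    # Real-world environment: the reward is whatever consequence the
    # world returns (e.g. food / no food); the learner never sees a label.
    x = observe()
    a = act(x)
    r = consequence(a)
    update(x, a, r)

for _ in range(2000):
    simulated_step()

Note that the learner's code path is identical in both regimes; the only difference is where the reward originates. In a simulator the label lives inside the reward function, so the labels have merely been hidden, not removed.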

Footnotes
1
See e.g. Yann LeCun Interview, IEEE Spectrum, Feb. 2015: “…The bottom line is that the brain is much better than our model at doing unsupervised learning…”.
 
2
So that, like Moses on Mount Nebo, we may be able to see the Promised Land but will never reach it (Deuteronomy 34:4).
 
4
Matlab code for this simple setup will be made available by the author upon request.
 
5
We could in principle consider the case wherein the Environment itself predicts a label from the Dog’s photograph; this, however, requires the Environment to already be a trained classifier, which traps us in an endless loop.
 
Metadata
Title
Unsupervisedly Learned Representations – Should the Quest Be Over?
Author
Daniel N. Nissani (Nissensohn)
Copyright Year
2020
DOI
https://doi.org/10.1007/978-3-030-64580-9_29
