2016 | OriginalPaper | Chapter

Solving the Vanishing Information Problem with Repeated Potential Mutual Information Maximization

Author: Ryotaro Kamimura

Published in: Neural Information Processing

Publisher: Springer International Publishing

Abstract

The present paper shows how to solve the problem of vanishing information in potential mutual information maximization. We previously developed an information-theoretic method called “potential learning,” which aims to extract the most important features through simplified information maximization. One of its major problems, however, is that the potential effect diminishes considerably in the course of learning, until the potentiality can no longer be taken into account. To solve this problem, we here introduce repeated information maximization: to keep the process of information maximization active, the method forces the potentiality to be re-assimilated into learning every time it becomes ineffective. The method was applied to the online news popularity data set to estimate the popularity of articles. To demonstrate its effectiveness, the number of hidden neurons was made excessively large and set to 50. The results show that potential mutual information maximization could increase mutual information even with 50 hidden neurons and led to improved generalization performance. In addition, simplified internal representations could be obtained, allowing better interpretation and generalization.
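The abstract outlines the core loop of the method: monitor mutual information between inputs and hidden neurons, and re-assimilate the potentiality into the weights whenever its effect vanishes. The minimal Python sketch below illustrates that loop under stated assumptions: potentiality is taken to be the normalized variance of each hidden neuron's incoming weights raised to a parameter r, hidden activations are a Gaussian-style function of the weighted inputs, and assimilation scales the weights by the potentiality. These are illustrative choices, not the paper's exact formulation, and names such as `repeated_assimilation` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def potentiality(W, r=2.0):
    # Potentiality of each hidden neuron: variance of its incoming
    # weights, normalized by the maximum and raised to the parameter r.
    # (An assumed, variance-based definition; the paper's exact form
    # may differ.)
    v = W.var(axis=1)
    return (v / v.max()) ** r

def mutual_information(X, W):
    # I = sum_s p(s) sum_j p(j|s) log(p(j|s) / p(j)), where p(j|s) is
    # the normalized activation of hidden neuron j for input pattern s.
    act = np.exp(-(X @ W.T) ** 2)                  # assumed activation
    p_js = act / act.sum(axis=1, keepdims=True)    # p(j|s)
    p_j = p_js.mean(axis=0)                        # p(j)
    return float(np.mean(np.sum(p_js * np.log(p_js / p_j), axis=1)))

def repeated_assimilation(X, n_hidden=50, r=2.0, epochs=50, tol=1e-4):
    # Re-assimilate the potentiality into the weights every time its
    # effect on mutual information becomes negligible, instead of
    # letting the potential effect vanish as learning proceeds.
    W = rng.normal(size=(n_hidden, X.shape[1]))
    mi_prev = mutual_information(X, W)
    for _ in range(epochs):
        # An ordinary supervised training step would go here.
        mi = mutual_information(X, W)
        if mi - mi_prev < tol:                     # potential effect vanished
            W = potentiality(W, r)[:, None] * W    # assimilate potentiality
        mi_prev = mi
    return W, mi_prev

X = rng.normal(size=(200, 10))   # toy stand-in for the article data
W, mi = repeated_assimilation(X)
print(f"mutual information after repeated assimilation: {mi:.4f}")
```

Note that 50 hidden neurons, as in the experiment, deliberately over-provisions the hidden layer; the repeated assimilation is what keeps mutual information growing despite the excess capacity.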


Footnotes
1. The first variable “timedelta” was deleted from the experiment.
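
As a concrete illustration of this preprocessing step, the snippet below deletes the “timedelta” attribute from the UCI Online News Popularity data before training. The file name OnlineNewsPopularity.csv and the stray spaces in its column headers are assumptions based on the public UCI distribution.

```python
import pandas as pd

# Load the UCI Online News Popularity data (file name assumed from
# the public UCI distribution) and delete the "timedelta" variable,
# as described in the footnote.
df = pd.read_csv("OnlineNewsPopularity.csv")
df.columns = df.columns.str.strip()   # UCI headers carry leading spaces
df = df.drop(columns=["timedelta"])
```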
 
Metadata
Title
Solving the Vanishing Information Problem with Repeated Potential Mutual Information Maximization
Author
Ryotaro Kamimura
Copyright Year
2016
DOI
https://doi.org/10.1007/978-3-319-46681-1_53
