
2016 | Original Paper | Book Chapter

Adaptive Hausdorff Distances and Tangent Distance Adaptation for Transformation Invariant Classification Learning

Authors: Sascha Saralajew, David Nebel, Thomas Villmann

Published in: Neural Information Processing

Publisher: Springer International Publishing


Abstract

Tangent distances (TDs) are important concepts for describing distances on data manifolds in machine learning. In this paper we show that, under certain conditions, the Hausdorff distance is equivalent to the TD, and hence we prove the metric properties of TDs. Thereafter, we consider these TDs as a dissimilarity measure in learning vector quantization (LVQ) for classification learning of class distributions with high variability. In particular, we integrate the TD into the learning scheme of LVQ so that the TD is adapted during LVQ training. The TD approach extends the classical prototype concept from points to affine subspaces of the data space, which yields a much richer topological structure. By the manifold theory of TDs we can ensure that the affine subspaces are aligned along directions of transformations that are invariant with respect to class discrimination. We demonstrate the superiority of this new approach with two examples.
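The prototype concept described in the abstract replaces a point prototype w by the affine subspace w + span(T), where the columns of T are tangent vectors along invariant transformation directions; the (one-sided) tangent distance is then the Euclidean distance from a data point to that subspace. The following minimal sketch illustrates this idea; the function name and the example data are illustrative and not taken from the paper.

```python
import numpy as np

def tangent_distance(x, w, T):
    """One-sided tangent distance: Euclidean distance from the data
    point x to the affine subspace w + span(T), where the columns of T
    span the invariant transformation directions at the prototype w."""
    # Orthonormalize the tangent vectors so the projection is well-defined.
    Q, _ = np.linalg.qr(T)
    diff = x - w
    # Remove the component of (x - w) lying inside the tangent subspace;
    # the remaining residual is orthogonal to the subspace.
    residual = diff - Q @ (Q.T @ diff)
    return np.linalg.norm(residual)

# Example: prototype at the origin, tangent subspace = x-axis in R^3.
w = np.zeros(3)
T = np.array([[1.0], [0.0], [0.0]])
x = np.array([5.0, 3.0, 4.0])
# Displacement along the invariant direction (x-axis) is ignored.
print(tangent_distance(x, w, T))  # 5.0 = sqrt(3^2 + 4^2)
```

Because the distance is measured to a subspace rather than a point, any shift of x along a tangent direction leaves the distance unchanged, which is exactly the transformation invariance the classifier exploits.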


Footnotes
1
The following statements also remain true if we assume that \(\left( \mathbb {M},+\right) \) is merely a group instead of a vector space; then \(U_{\mathbf {w}}\) is a subgroup instead of a subspace, and \(\mathcal {V}\), \(\mathcal {W}\) are left cosets instead of affine subspaces.
 
Metadata
Title
Adaptive Hausdorff Distances and Tangent Distance Adaptation for Transformation Invariant Classification Learning
Authors
Sascha Saralajew
David Nebel
Thomas Villmann
Copyright year
2016
DOI
https://doi.org/10.1007/978-3-319-46675-0_40