ABSTRACT
We consider the problem of learning a similarity function from a set of positive equivalence constraints, i.e. 'similar' point pairs. We define the similarity in information-theoretic terms, as the gain in coding length when shifting from independent encoding of the pair to joint encoding. Under simple Gaussian assumptions, this formulation leads to a non-Mahalanobis similarity function which is efficient and simple to learn. This function can be viewed as a likelihood ratio test, and we show that the optimal similarity-preserving projection of the data is a variant of Fisher Linear Discriminant. We also show that under some naturally occurring sampling conditions of equivalence constraints, this function converges to a known Mahalanobis distance (RCA). The suggested similarity function exhibits superior performance over alternative Mahalanobis distances learnt from the same data. Its superiority is demonstrated in the context of image retrieval and graph-based clustering, using a large number of data sets.
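The abstract describes the coding similarity only at a high level: the similarity of a pair is the coding-length gain log p(x1, x2) - log p(x1) - log p(x2), with the joint density fitted from the positive equivalence constraints. The following minimal Python sketch shows one way to realize this under the stated Gaussian assumptions; the function name `fit_coding_similarity`, the ridge term `reg`, and the symmetrization of the pairs are our illustrative choices, not details taken from the paper.

```python
import numpy as np
from scipy.stats import multivariate_normal

def fit_coding_similarity(X, pairs, reg=1e-6):
    """Fit a Gaussian coding-similarity model (illustrative sketch).

    X     : (n, d) data matrix, used for the marginal Gaussian p(x).
    pairs : (m, 2) index pairs of 'similar' points (positive
            equivalence constraints), used for the joint Gaussian.
    Returns sim(x1, x2) = log p(x1, x2) - log p(x1) - log p(x2),
    the gain in coding length (in nats) from joint encoding.
    """
    d = X.shape[1]
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + reg * np.eye(d)

    # Joint Gaussian over concatenated pairs [x1; x2]; each constraint
    # is included in both orders so the fitted model is symmetric.
    Z = np.vstack([np.hstack([X[i], X[j]]) for i, j in pairs] +
                  [np.hstack([X[j], X[i]]) for i, j in pairs])
    mu_joint = Z.mean(axis=0)
    cov_joint = np.cov(Z, rowvar=False) + reg * np.eye(2 * d)

    marg = multivariate_normal(mu, cov)
    joint = multivariate_normal(mu_joint, cov_joint)

    def sim(x1, x2):
        z = np.hstack([x1, x2])
        return joint.logpdf(z) - marg.logpdf(x1) - marg.logpdf(x2)

    return sim
```

The returned score is exactly a log-likelihood ratio between the "jointly encoded" and "independently encoded" hypotheses, which is the likelihood-ratio-test view mentioned in the abstract; the ridge term is a small-sample hedge to keep both covariance estimates well-conditioned.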
REFERENCES
- Bar-Hillel, A., Hertz, T., Shental, N., & Weinshall, D. (2005). Learning a Mahalanobis metric from equivalence constraints. JMLR, 6(Jun), 937--965.
- Belongie, S., Malik, J., & Puzicha, J. (2002). Shape matching and object recognition using shape contexts. IEEE PAMI, 24, 509--522.
- Bennett, C., Gács, P., Li, M., Vitányi, P., & Zurek, W. (1998). Information distance. IEEE Trans. Information Theory, 44, 1407.
- Bilenko, M., Basu, S., & Mooney, R. (2004). Integrating constraints and metric learning in semi-supervised clustering. Proc. ICML (pp. 81--88).
- Blake, C., & Merz, C. (1998). UCI repository of machine learning databases.
- Chang, H., & Yeung, D. (2005). Stepwise metric adaptation based on semi-supervised learning for boosting image retrieval performance. BMVC.
- Cover, T., & Thomas, J. (1991). Elements of Information Theory. John Wiley and Sons Inc.
- Cristianini, N., Kandola, J., Elisseeff, A., & Shawe-Taylor, J. (2002). On kernel-target alignment. Proc. NIPS.
- De Bie, T., Momma, M., & Cristianini, N. (2003). Efficiently learning the metric with side-information. Lecture Notes in Artificial Intelligence (pp. 175--189).
- Duda, R., Hart, P., & Stork, D. (2001). Pattern Classification. John Wiley and Sons Inc.
- Georghiades, A., Belhumeur, P., & Kriegman, D. (2000). From few to many: Generative models for recognition under variable pose and illumination. IEEE Int. Conf. on Automatic Face and Gesture Recognition.
- Hertz, T., Bar-Hillel, A., & Weinshall, D. (2004). Boosting margin based distance functions for clustering. Proc. ICML.
- Hertz, T., Shental, N., Bar-Hillel, A., & Weinshall, D. (2003). Enhancing image and video retrieval: Learning with equivalence constraints. Proc. CVPR.
- Kemp, C., Bernstein, A., & Tenenbaum, J. (2005). A generative theory of similarity. Proc. of the Twenty-Seventh Annual Conference of the Cognitive Science Society.
- LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proc. IEEE, 86, 2278--2324.
- Lin, D. (1998). An information-theoretic definition of similarity. Proc. ICML.
- Pass, G., Zabih, R., & Miller, J. (1996). Comparing images using color coherence vectors. ACM Multimedia.
- Shental, N., Hertz, T., Weinshall, D., & Pavel, M. (2002). Adjustment learning and relevant component analysis. Proc. ECCV.
- Tishby, N., Pereira, F., & Bialek, W. (2000). The information bottleneck method. arXiv preprint physics/0004057.
- Xing, E., Ng, A., Jordan, M., & Russell, S. (2002). Distance metric learning with application to clustering with side-information. Proc. NIPS.
- Zhang, H., Berg, A., Maire, M., & Malik, J. (2006). SVM-KNN: Discriminative nearest neighbor classification for visual category recognition. Proc. CVPR.
- Ziv, J., & Merhav, N. (1993). A measure of relative entropy between individual sequences with application to universal classification. IEEE Trans. Information Theory, 39, 1270--1279.