The k-nearest neighbors (k-NN) classification rule is still an essential tool for computer vision applications, such as scene recognition. However, k-NN still suffers from some major drawbacks, which mainly reside in the uniform voting among the nearest prototypes in the feature space.
In this paper, we propose a new method that is able to learn the “relevance” of prototypes, thus classifying test data using a weighted k-NN rule. In particular, our algorithm, called Multi-class Leveraged k-nearest neighbor (MLNN), learns the prototype weights in a boosting framework, by minimizing a surrogate exponential risk over the training data.
We propose two main contributions for improving computational speed and accuracy. On the one hand, we implement learning in an inherently multi-class way, thus providing a significant reduction of computation time over one-versus-all approaches. Furthermore, the leveraging weights enable effective data selection, thus reducing the cost of k-NN search at classification time. On the other hand, we propose a kernel-based generalization of our approach that takes into account real-valued similarities between data in the feature space, thus enabling a more accurate estimation of the local class density.
We report experiments on three datasets of natural images. Results show that MLNN significantly outperforms classic k-NN and weighted k-NN voting. Furthermore, using an adaptive Gaussian kernel provides a significant performance improvement. Finally, the best results are obtained when combining MLNN with an appropriately learned distance metric.
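To make the decision rule concrete, the sketch below illustrates a weighted k-NN vote in which each prototype carries a per-prototype leveraging coefficient and contributes to its class score through a Gaussian kernel similarity. This is only a minimal illustration under stated assumptions: the function names (`gaussian_kernel`, `leveraged_knn_predict`), the fixed kernel bandwidth `sigma`, and the hand-set coefficients `alphas` are hypothetical; in MLNN the coefficients would be learned by boosting (by minimizing the exponential risk), which is not shown here.

```python
import numpy as np

def gaussian_kernel(x, x_j, sigma=1.0):
    """Real-valued similarity between query x and prototype x_j.
    A fixed bandwidth sigma is an assumption of this sketch; the
    paper's adaptive Gaussian kernel would tune it locally."""
    return np.exp(-np.sum((x - x_j) ** 2) / (2.0 * sigma ** 2))

def leveraged_knn_predict(x, prototypes, labels, alphas, n_classes, k=3, sigma=1.0):
    """Multi-class leveraged k-NN vote: each of the k nearest
    prototypes adds its leveraging coefficient alpha_j, scaled by
    the kernel similarity, to the score of its own class."""
    dists = np.linalg.norm(prototypes - x, axis=1)
    nearest = np.argsort(dists)[:k]
    scores = np.zeros(n_classes)
    for j in nearest:
        scores[labels[j]] += alphas[j] * gaussian_kernel(x, prototypes[j], sigma)
    return int(np.argmax(scores))

# Toy data: two prototypes per class; uniform coefficients here
# (a learned alpha of zero would prune a prototype, which is the
# data-selection effect mentioned in the abstract).
prototypes = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.0, 5.1]])
labels = np.array([0, 0, 1, 1])
alphas = np.ones(4)

print(leveraged_knn_predict(np.array([0.05, 0.0]), prototypes, labels, alphas, 2))
print(leveraged_knn_predict(np.array([5.0, 5.0]), prototypes, labels, alphas, 2))
```

Note that a prototype whose learned coefficient reaches zero simply drops out of the vote, which is how leveraging can cut the effective size of the k-NN search structure at test time.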