Image search reranking aims to reorder the images returned by a conventional text-based image search engine so as to improve search precision, diversity, and related measures. Current reranking methods are often based on a single modality, yet it is hard to find one modality that works well for all kinds of queries. This paper proposes multimodal supervised learning for image search reranking. First, a similarity graph is constructed for each modality, and a modality-specific approach is used to compute the similarity between images on that graph. Exploiting the similarity graphs and the initial ranked list, we integrate the multiple modalities into query-independent reranking features, namely PageRank Pseudo-Relevance Feedback, a Density Feature, and an Initial Ranking Score Feature, and fuse them into a 19-dimensional feature vector for each image. A supervised method is then employed to learn the weight of each reranking feature. Experiments conducted on the MSRA-MM dataset demonstrate the robustness and effectiveness of the proposed method.
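The pipeline described above can be sketched in a few lines: compute a per-modality reranking feature from a similarity graph, fuse the features into one vector per image, learn feature weights in a supervised way, and rerank by the weighted score. This is a minimal illustration only; the function names are hypothetical, the similarity matrix is assumed to be precomputed, and a least-squares fit stands in for the paper's actual supervised learner over its 19 features.

```python
import numpy as np

def density_feature(sim):
    """Density of each image on a similarity graph: its average
    similarity to every other image (self-similarity excluded).
    `sim` is a precomputed, symmetric (n, n) similarity matrix."""
    n = sim.shape[0]
    return (sim.sum(axis=1) - np.diag(sim)) / (n - 1)

def fuse_features(feature_list):
    """Stack per-image reranking features column-wise, giving one
    feature vector per image (rows = images, columns = features)."""
    return np.column_stack(feature_list)

def learn_weights(X, y):
    """Illustrative supervised step: fit one weight per reranking
    feature by least squares against relevance labels `y`. The paper's
    method learns such weights, but not necessarily this way."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def rerank(X, w):
    """Score each image with the learned weights and return image
    indices sorted from highest to lowest score."""
    scores = X @ w
    return np.argsort(-scores)
```

Any supervised learner that assigns one weight per feature (e.g. a linear ranking model) could replace the least-squares step; the fusion and scoring structure stays the same.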
Multimodal-Based Supervised Learning for Image Search Reranking