ABSTRACT
This paper addresses the application of learning to rank to document retrieval, taking Ranking SVM as a representative method. We point out two factors that must be considered when applying Ranking SVM, or any learning to rank method, to document retrieval. First, correctly ranking the documents at the top of the result list is crucial for an information retrieval system, so training must be conducted in a way that makes those top-ranked results accurate. Second, the number of relevant documents can vary greatly from query to query, so training must avoid producing a model biased toward queries with many relevant documents. Previously, when existing methods including Ranking SVM were applied to document retrieval, neither of these factors was taken into consideration. We show that conventional Ranking SVM can be modified to better suit document retrieval. Specifically, we modify the hinge loss function in Ranking SVM to address the two problems described above, and we optimize the resulting loss function with two methods: gradient descent and quadratic programming. Experimental results on two datasets show that our method, referred to as Ranking SVM for IR, outperforms conventional Ranking SVM and other existing methods for document retrieval.
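The abstract describes two modifications to the pairwise hinge loss: weighting document pairs by how important their rank positions are, and normalizing each query's contribution so queries with many relevant documents do not dominate. The sketch below illustrates this idea under stated assumptions; it is not the paper's implementation. The helper names (`make_pairs`, `train`), the specific rank-weight function, and the plain subgradient-descent optimizer are all illustrative choices, standing in for the gradient descent and quadratic programming solvers discussed in the paper.

```python
import numpy as np

def weighted_pairwise_hinge(w, pairs):
    """Loss over preference pairs; each pair is (x_pos, x_neg, weight)."""
    loss = 0.0
    for x_pos, x_neg, weight in pairs:
        margin = np.dot(w, x_pos - x_neg)
        loss += weight * max(0.0, 1.0 - margin)
    return loss

def make_pairs(queries, rank_weight):
    """Build weighted preference pairs from graded judgments.

    queries: list of queries, each a list of (feature_vector, grade).
    rank_weight: assumed importance function of the two grades, meant
    to emphasize pairs that affect the top of the ranked list.
    Each query's pairs are normalized by their count, so a query with
    many relevant documents contributes no more than any other query.
    """
    pairs = []
    for docs in queries:
        raw = [(xi, xj, rank_weight(gi, gj))
               for xi, gi in docs for xj, gj in docs if gi > gj]
        if raw:
            norm = 1.0 / len(raw)  # per-query normalization
            pairs += [(xi, xj, wgt * norm) for xi, xj, wgt in raw]
    return pairs

def train(pairs, dim, lr=0.01, lam=0.01, epochs=200):
    """Subgradient descent on the weighted hinge loss + L2 penalty."""
    w = np.zeros(dim)
    for _ in range(epochs):
        grad = lam * w
        for x_pos, x_neg, weight in pairs:
            if np.dot(w, x_pos - x_neg) < 1.0:  # pair inside the margin
                grad -= weight * (x_pos - x_neg)
        w -= lr * grad
    return w
```

At ranking time, documents for a query are simply sorted by the learned score `np.dot(w, x)`; the pair weights only change how errors are penalized during training.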