ABSTRACT
Gradient Boosted Regression Trees (GBRT) are the current state-of-the-art learning paradigm for machine-learned web-search ranking, a domain notorious for very large data sets. In this paper, we propose a novel method for parallelizing the training of GBRT. Our technique parallelizes the construction of the individual regression trees and operates in the master-worker paradigm as follows. The data are partitioned among the workers. At each iteration, each worker summarizes its data partition using histograms. The master processor uses these histograms to build one layer of a regression tree, and then sends this layer back to the workers, which in turn use it to build histograms for the next layer. Our algorithm carefully orchestrates the overlap between communication and computation to achieve good performance.
Since this approach is based on data partitioning and requires only a small amount of communication, it generalizes to distributed and shared-memory machines as well as clouds. We present experimental results on both shared-memory machines and clusters for two large-scale web-search ranking data sets. We demonstrate that the accuracy loss induced by the histogram approximation during regression-tree construction can be compensated for with slightly deeper trees. As a result, we see no significant loss in accuracy on the Yahoo! data sets and only a very small reduction in accuracy on the Microsoft LETOR data. In addition, on shared-memory machines we obtain almost perfect linear speed-up with up to about 48 cores on the large data sets; on distributed-memory machines we obtain a speed-up of 25 with 32 processors. Because it relies on data partitioning, our approach scales to even larger data sets, on which one can reasonably expect even higher speed-ups.
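The master-worker scheme described above can be illustrated with a minimal sketch: each worker compresses one feature of its local partition into fixed-width histograms of residual sums and counts, and the master merges these histograms and scans the bin boundaries for the split that minimizes squared error. This is not the paper's implementation; function names, the binning scheme, and the two-leaf squared-error criterion are illustrative assumptions.

```python
import numpy as np

def worker_histograms(x, residuals, n_bins, lo, hi):
    """Worker step: summarize one feature of the local data partition
    as fixed-width histograms of residual sums and sample counts."""
    edges = np.linspace(lo, hi, n_bins + 1)
    bins = np.clip(np.digitize(x, edges[1:-1]), 0, n_bins - 1)
    grad_sum = np.bincount(bins, weights=residuals, minlength=n_bins)
    count = np.bincount(bins, minlength=n_bins)
    return grad_sum, count

def master_best_split(histograms, n_bins, lo, hi):
    """Master step: merge the workers' histograms element-wise and pick
    the bin boundary that minimizes the squared error of a two-leaf
    (mean-prediction) split on this feature."""
    grad_sum = sum(h[0] for h in histograms)
    count = sum(h[1] for h in histograms)
    edges = np.linspace(lo, hi, n_bins + 1)
    total_g, total_n = grad_sum.sum(), count.sum()
    best_score, best_edge = np.inf, None
    cg = cn = 0.0  # cumulative residual sum / count left of the boundary
    for b in range(n_bins - 1):
        cg += grad_sum[b]
        cn += count[b]
        rg, rn = total_g - cg, total_n - cn
        if cn == 0 or rn == 0:
            continue
        # squared-error impurity of a split is minimized by maximizing
        # sum^2/count on each side, i.e. minimizing the negated sum
        score = -(cg * cg) / cn - (rg * rg) / rn
        if score < best_score:
            best_score, best_edge = score, edges[b + 1]
    return best_edge
```

Because histograms are additive, merging per-partition histograms yields exactly the same merged summary as histogramming the full data set once, so only the small histogram arrays, never the raw examples, cross the network; the approximation relative to exact split finding comes solely from restricting candidate splits to bin boundaries.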