DOI: 10.1145/1963405.1963461

Parallel boosted regression trees for web search ranking

Published: 28 March 2011

ABSTRACT

Gradient Boosted Regression Trees (GBRT) are the current state-of-the-art learning paradigm for machine-learned web-search ranking, a domain notorious for very large data sets. In this paper, we propose a novel method for parallelizing the training of GBRT. Our technique parallelizes the construction of the individual regression trees and operates using the master-worker paradigm as follows. The data are partitioned among the workers. At each iteration, each worker summarizes its data partition using histograms. The master uses these histograms to build one layer of a regression tree, and then sends this layer back to the workers, allowing them to build the histograms for the next layer. Our algorithm carefully orchestrates the overlap between communication and computation to achieve good performance.
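
To make the scheme concrete, here is a minimal sketch, in Python with NumPy, of the histogram idea; it is not the authors' implementation. Each worker compresses one feature of its local partition into bins carrying counts and sums of residuals (the negative gradients the trees are fit to), and the master merges the workers' histograms and scans the bin boundaries for an approximate best split under squared loss. All function names, the fixed-width binning, and the variance-reduction split criterion are illustrative assumptions.

    import numpy as np

    def build_histogram(feature, residuals, n_bins, lo, hi):
        # Worker side: summarize the local partition of one feature.
        # Each bin stores the example count and the sum of residuals
        # (negative gradients) of the examples falling into it.
        # Fixed-width bins over [lo, hi] are an illustrative assumption.
        bins = np.clip(((feature - lo) / (hi - lo) * n_bins).astype(int),
                       0, n_bins - 1)
        counts = np.bincount(bins, minlength=n_bins).astype(float)
        sums = np.bincount(bins, weights=residuals, minlength=n_bins)
        return counts, sums

    def merge_histograms(histograms):
        # Master side: histograms from different workers merge by
        # element-wise addition, so only O(bins) numbers per feature
        # cross the network, never the raw examples.
        counts = sum(c for c, _ in histograms)
        sums = sum(s for _, s in histograms)
        return counts, sums

    def best_split(counts, sums):
        # Master side: scan the bin boundaries and pick the split that
        # maximizes the reduction in squared error (variance reduction).
        total_n, total_s = counts.sum(), sums.sum()
        best_gain, best_bin = -np.inf, None
        n_left = s_left = 0.0
        for b in range(len(counts) - 1):
            n_left += counts[b]
            s_left += sums[b]
            n_right = total_n - n_left
            if n_left == 0 or n_right == 0:
                continue
            s_right = total_s - s_left
            # Gain of fitting each child's mean residual vs. the parent's.
            gain = (s_left**2 / n_left + s_right**2 / n_right
                    - total_s**2 / total_n)
            if gain > best_gain:
                best_gain, best_bin = gain, b
        return best_bin, best_gain

    # Toy run: two "workers" summarize their halves of the data; the
    # master merges the histograms and recovers a split near x = 0.6.
    rng = np.random.default_rng(0)
    x = rng.uniform(0.0, 1.0, 1000)
    r = np.where(x > 0.6, 1.0, -1.0)
    h1 = build_histogram(x[:500], r[:500], n_bins=32, lo=0.0, hi=1.0)
    h2 = build_histogram(x[500:], r[500:], n_bins=32, lo=0.0, hi=1.0)
    counts, sums = merge_histograms([h1, h2])
    print(best_split(counts, sums))  # best boundary near bin 0.6 * 32

In a full layer-by-layer build along the lines the abstract describes, each worker would send one such histogram per feature and per current leaf; the master would find the best split for every leaf in the layer, broadcast the new layer, and the workers would re-bucket their examples, which is why the per-iteration communication stays small enough to overlap with computation.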

Since this approach is based on data partitioning and requires only a small amount of communication, it generalizes to distributed-memory and shared-memory machines, as well as clouds. We present experimental results on both shared-memory machines and clusters for two large-scale web-search ranking data sets. We demonstrate that the loss in accuracy induced by the histogram approximation in the regression tree construction can be compensated for with slightly deeper trees. As a result, we see no significant loss in accuracy on the Yahoo data sets and only a very small reduction in accuracy on the Microsoft LETOR data. In addition, on shared-memory machines we obtain almost perfect linear speed-up with up to about 48 cores on the large data sets. On distributed-memory machines, we achieve a speed-up of 25 with 32 processors. Because it is based on data partitioning, our approach can scale to even larger data sets, on which one can reasonably expect even higher speed-ups.


Published in

WWW '11: Proceedings of the 20th International Conference on World Wide Web
March 2011, 840 pages
ISBN: 978-1-4503-0632-4
DOI: 10.1145/1963405
Publisher: Association for Computing Machinery, New York, NY, United States

Copyright © 2011 ACM. Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].
