
Rethinking the recommender research ecosystem: reproducibility, openness, and LensKit

Published: 23 October 2011 · DOI: 10.1145/2043932.2043958

ABSTRACT

Recommender systems research is being slowed by the difficulty of replicating and comparing research results. Published research uses various experimental methodologies and metrics that are difficult to compare. It also often fails to sufficiently document the details of proposed algorithms or the evaluations employed. Researchers waste time reimplementing well-known algorithms, and the new implementations may miss key details from the original algorithm or its subsequent refinements. When proposing new algorithms, researchers should compare them against finely-tuned implementations of the leading prior algorithms using state-of-the-art evaluation methodologies. With few exceptions, published algorithmic improvements in our field should be accompanied by working code in a standard framework, including test harnesses to reproduce the described results. To that end, we present the design and freely distributable source code of LensKit, a flexible platform for reproducible recommender systems research. LensKit provides carefully tuned implementations of the leading collaborative filtering algorithms, APIs for common recommender system use cases, and an evaluation framework for performing reproducible offline evaluations of algorithms. We demonstrate the utility of LensKit by replicating and extending a set of prior comparative studies of recommender algorithms --- showing limitations in some of the original results --- and by investigating a question recently raised by a leader in the recommender systems community on problems with error-based prediction evaluation.
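
The abstract's closing point concerns problems with error-based prediction evaluation. As a concrete illustration of the kind of offline experiment at issue, the sketch below shows, in plain Python, the skeleton of a reproducible evaluation: a fixed random seed, a train/test split, a trivial predictor, and RMSE over held-out ratings. This is a minimal hypothetical sketch, not LensKit's actual API; the toy ratings and the personalized-mean baseline are assumptions made for illustration only.

    # Hypothetical illustration only -- not LensKit code or its API.
    # A minimal reproducible offline evaluation: fixed seed, train/test
    # split, a simple baseline predictor, and RMSE as the error metric.
    import random
    from collections import defaultdict
    from math import sqrt

    ratings = [  # (user, item, rating) toy data standing in for a real dataset
        ("u1", "i1", 4.0), ("u1", "i2", 3.0), ("u1", "i3", 5.0),
        ("u2", "i1", 2.0), ("u2", "i3", 3.0), ("u2", "i4", 4.0),
        ("u3", "i2", 5.0), ("u3", "i4", 4.0), ("u3", "i1", 3.0),
    ]

    random.seed(42)          # fixed seed so the split (and the result) can be reproduced
    random.shuffle(ratings)
    cut = int(0.8 * len(ratings))
    train, test = ratings[:cut], ratings[cut:]

    # Baseline predictor: each user's mean training rating, with a
    # global-mean fallback for users unseen in the training set.
    user_sums = defaultdict(float)
    user_counts = defaultdict(int)
    for user, _item, value in train:
        user_sums[user] += value
        user_counts[user] += 1
    global_mean = sum(v for _, _, v in train) / len(train)

    def predict(user: str) -> float:
        if user_counts[user]:
            return user_sums[user] / user_counts[user]
        return global_mean

    # RMSE over the held-out ratings.
    se = sum((predict(u) - v) ** 2 for u, _i, v in test)
    print(f"RMSE: {sqrt(se / len(test)):.3f}")

Publishing exactly this kind of harness alongside an algorithm, with the seed and split procedure fixed, is what makes a reported RMSE figure checkable by other researchers; how to aggregate the error (globally versus per user, for example) is one of the methodological choices the paper examines.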


Published in

RecSys '11: Proceedings of the fifth ACM conference on Recommender systems
October 2011 · 414 pages
ISBN: 9781450306836
DOI: 10.1145/2043932

Copyright © 2011 ACM

        Publisher

        Association for Computing Machinery

        New York, NY, United States


        Qualifiers

        • research-article

Acceptance Rates

Overall acceptance rate: 254 of 1,295 submissions, 20%
