DOI: 10.1145/3183713.3193568
Research article · Public Access

A Nutritional Label for Rankings

Published: 27 May 2018

ABSTRACT

Algorithmic decisions often result in scoring and ranking individuals to determine creditworthiness, qualifications for college admissions and employment, and compatibility as dating partners. While automatic and seemingly objective, ranking algorithms can discriminate against individuals and protected groups, and exhibit low diversity. Furthermore, ranked results are often unstable -- small changes in the input data or in the ranking methodology may lead to drastic changes in the output, making the result uninformative and easy to manipulate. Similar concerns apply in cases where items other than individuals are ranked, including colleges, academic departments, or products. Despite the ubiquity of rankers, there is, to the best of our knowledge, no technical work that focuses on making rankers transparent.
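
To make the instability concern concrete, here is a minimal sketch (our illustration, with invented items and weights, not an example from the paper): a score-based ranker orders three items by a weighted sum of two attributes, and a five-point shift in the weights reorders the output end to end.

```python
# A minimal sketch (not from the paper) of ranking instability:
# a score-based ranker orders items by a weighted sum of two
# attributes; a small shift in the weights reorders the output.

items = {
    "A": (0.90, 0.20),
    "B": (0.58, 0.58),
    "C": (0.20, 0.92),
}

def rank(w1, w2):
    """Return item names in descending order of w1*x1 + w2*x2."""
    return sorted(items, key=lambda k: -(w1 * items[k][0] + w2 * items[k][1]))

print(rank(0.50, 0.50))  # ['B', 'C', 'A']
print(rank(0.55, 0.45))  # ['A', 'B', 'C'] -- A jumps from last to first
```

No item's attributes change, yet A moves from last place to first; this is the kind of volatility a stability widget would surface.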

In this demonstration we present Ranking Facts, a Web-based application that generates a "nutritional label" for rankings. Ranking Facts is made up of a collection of visual widgets that implement our latest research results on fairness, stability, and transparency for rankings, and that communicate details of the ranking methodology, or of its output, to the end user. We will showcase Ranking Facts on real datasets from different domains, including college rankings, criminal risk assessment, and financial services.
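
As a hint of what a fairness widget might report (a hypothetical sketch with made-up data, not the tool's actual code), one simple diagnostic is the protected group's share of the top-k positions compared with its share of the full ranked list:

```python
# A hypothetical sketch (not the tool's actual code) of a simple
# fairness diagnostic: the protected group's share of the top-k
# versus its share of the full ranked list.

def group_share_at_k(ranking, protected, k):
    """Fraction of the top-k items that belong to the protected group."""
    return sum(1 for item in ranking[:k] if item in protected) / k

ranking = ["a1", "b1", "a2", "a3", "b2", "a4", "b3", "b4"]
protected = {"b1", "b2", "b3", "b4"}

print(group_share_at_k(ranking, protected, 4))             # 0.25 in the top 4
print(group_share_at_k(ranking, protected, len(ranking)))  # 0.5 overall
```

Here the protected group holds half of the full list but only a quarter of the top four positions, the kind of disparity a nutritional label could flag.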



Published in

SIGMOD '18: Proceedings of the 2018 International Conference on Management of Data
May 2018, 1874 pages
ISBN: 9781450347037
DOI: 10.1145/3183713

          Copyright © 2018 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher

Association for Computing Machinery, New York, NY, United States



Acceptance Rates

SIGMOD '18 paper acceptance rate: 90 of 461 submissions, 20%. Overall SIGMOD acceptance rate: 785 of 4,003 submissions, 20%.
