ABSTRACT
Algorithmic decisions often result in scoring and ranking individuals to determine creditworthiness, qualifications for college admission and employment, and compatibility as dating partners. While automatic and seemingly objective, ranking algorithms can discriminate against individuals and protected groups, and can exhibit low diversity. Furthermore, ranked results are often unstable -- small changes in the input data or in the ranking methodology may lead to drastic changes in the output, making the result uninformative and easy to manipulate. Similar concerns apply when items other than individuals are ranked, including colleges, academic departments, and products. Despite the ubiquity of rankers, there is, to the best of our knowledge, no technical work that focuses on making rankers transparent.
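The instability claim above can be made concrete with a minimal sketch (not the Ranking Facts implementation; items and attribute values are hypothetical): items are scored by a weighted sum of two attributes, and a small shift in the weights completely reorders the result.

```python
# Hypothetical items, each with two normalized attribute values.
items = {
    "A": (0.80, 0.60),
    "B": (0.70, 0.72),
    "C": (0.60, 0.85),
}

def rank(weights):
    """Rank item names by a weighted sum of attributes, best first."""
    def score(name):
        return sum(w * a for w, a in zip(weights, items[name]))
    return sorted(items, key=score, reverse=True)

print(rank((0.60, 0.40)))  # ['A', 'B', 'C']
print(rank((0.55, 0.45)))  # ['C', 'A', 'B'] -- a 0.05 weight shift flips the ranking
```

With weights (0.60, 0.40) the scores are 0.72, 0.708, and 0.70, so the order is A, B, C; nudging the weights to (0.55, 0.45) yields 0.71, 0.709, and 0.7125, reversing the top and bottom items even though no score moved by more than 0.02.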
In this demonstration we present Ranking Facts, a Web-based application that generates a "nutritional label" for rankings. Ranking Facts is made up of a collection of visual widgets that implement our latest research results on fairness, stability, and transparency for rankings, and that communicate details of the ranking methodology, or of the output, to the end user. We will showcase Ranking Facts on real datasets from different domains, including college rankings, criminal risk assessment, and financial services.
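As one illustration of the kind of fairness diagnostic a ranking widget can surface, the sketch below (hypothetical data, not the Ranking Facts code) compares the protected group's share in each top-k prefix of a ranking against its share in the full population; a prefix share far below the overall share signals unfairness at the top of the list.

```python
# Hypothetical ranked list, best first; "f" marks the protected group.
ranking = ["m", "m", "f", "m", "f", "m", "f", "f"]

def prefix_shares(ranking, protected="f", ks=(2, 4, 8)):
    """Fraction of protected-group members in each top-k prefix."""
    return {k: ranking[:k].count(protected) / k for k in ks}

overall = ranking.count("f") / len(ranking)
print(overall)                 # 0.5
print(prefix_shares(ranking))  # {2: 0.0, 4: 0.25, 8: 0.5}
```

Here the protected group makes up half the population but is absent from the top 2 and underrepresented in the top 4, the sort of prefix-level disparity that set-based fairness measures for rankings are designed to quantify.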
Index Terms: A Nutritional Label for Rankings