DOI: 10.1145/3097983.3098095

Algorithmic Decision Making and the Cost of Fairness

Published: 04 August 2017

ABSTRACT

Algorithms are now regularly used to decide whether defendants awaiting trial are too dangerous to be released back into the community. In some cases, black defendants are substantially more likely than white defendants to be incorrectly classified as high risk. To mitigate such disparities, several techniques have recently been proposed to achieve algorithmic fairness. Here we reformulate algorithmic fairness as constrained optimization: the objective is to maximize public safety while satisfying formal fairness constraints designed to reduce racial disparities. We show that for several past definitions of fairness, the optimal algorithms that result require detaining defendants above race-specific risk thresholds. We further show that the optimal unconstrained algorithm requires applying a single, uniform threshold to all defendants. The unconstrained algorithm thus maximizes public safety while also satisfying one important understanding of equality: that all individuals are held to the same standard, irrespective of race. Because the optimal constrained and unconstrained algorithms generally differ, there is tension between improving public safety and satisfying prevailing notions of algorithmic fairness. By examining data from Broward County, Florida, we show that this trade-off can be large in practice. We focus on algorithms for pretrial release decisions, but the principles we discuss apply to other domains, and also to human decision makers carrying out structured decision rules.
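
To make the threshold result concrete, here is a small illustrative Python sketch. It is not code from the paper; the risk distributions, the cost threshold of 0.5, and the choice of statistical parity (equal detention rates) as the fairness constraint are all assumptions made for illustration. It shows that a single uniform threshold generally yields different detention rates across groups, while equalizing detention rates requires a different threshold for each group.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical risk scores p = P(reoffend if released) for two groups whose
# risk distributions differ; the distributions and sample sizes are made up.
risk = {
    "group_a": rng.beta(2, 5, size=10_000),  # lower risk on average
    "group_b": rng.beta(3, 4, size=10_000),  # higher risk on average
}

# Unconstrained rule: detain whenever p >= t, one threshold t for everyone,
# with t determined by the relative cost of release vs. detention
# (the value 0.5 here is an arbitrary stand-in).
t = 0.5
uniform = {g: p >= t for g, p in risk.items()}

# Statistical-parity-style constraint: equal detention rates across groups.
# Matching a common target rate forces a separate threshold per group.
target_rate = float(np.mean(np.concatenate(list(uniform.values()))))
per_group_t = {g: float(np.quantile(p, 1 - target_rate)) for g, p in risk.items()}
parity = {g: p >= per_group_t[g] for g, p in risk.items()}

for g in risk:
    print(f"{g}: uniform-threshold detention rate {uniform[g].mean():.2f}, "
          f"parity threshold {per_group_t[g]:.2f} "
          f"(detention rate {parity[g].mean():.2f})")

The paper makes the analogous point for several proposed fairness definitions: the optimal constrained rule detains defendants above race-specific thresholds, whereas the unconstrained optimum applies one threshold to everyone, and the gap between the two is what the Broward County analysis quantifies.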

        Published in

          KDD '17: Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
          August 2017
          2240 pages
          ISBN: 9781450348874
          DOI: 10.1145/3097983

          Copyright © 2017 ACM

          Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

          Publisher

          Association for Computing Machinery

          New York, NY, United States

          Publication History

          • Published: 4 August 2017

          Qualifiers

          • research-article

          Acceptance Rates

          KDD '17 paper acceptance rate: 64 of 748 submissions (9%). Overall acceptance rate: 1,133 of 8,635 submissions (13%).
