ABSTRACT
Algorithms are now regularly used to decide whether defendants awaiting trial are too dangerous to be released back into the community. In some cases, black defendants are substantially more likely than white defendants to be incorrectly classified as high risk. To mitigate such disparities, several techniques have recently been proposed to achieve algorithmic fairness. Here we reformulate algorithmic fairness as constrained optimization: the objective is to maximize public safety while satisfying formal fairness constraints designed to reduce racial disparities. We show that for several past definitions of fairness, the optimal algorithms that result require detaining defendants above race-specific risk thresholds. We further show that the optimal unconstrained algorithm requires applying a single, uniform threshold to all defendants. The unconstrained algorithm thus maximizes public safety while also satisfying one important understanding of equality: that all individuals are held to the same standard, irrespective of race. Because the optimal constrained and unconstrained algorithms generally differ, there is tension between improving public safety and satisfying prevailing notions of algorithmic fairness. By examining data from Broward County, Florida, we show that this trade-off can be large in practice. We focus on algorithms for pretrial release decisions, but the principles we discuss apply to other domains, and also to human decision makers carrying out structured decision rules.
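The contrast between the two optimal rules can be illustrated with a small sketch. Assuming synthetic risk scores for two hypothetical groups with different risk distributions (all numbers are illustrative, not from the Broward County data), an unconstrained rule applies one threshold to everyone, while a rule constrained to equal detention rates across groups (one of the fairness definitions discussed) must detain above group-specific thresholds:

```python
# Sketch of the abstract's contrast: a single uniform risk threshold
# versus group-specific thresholds chosen to equalize detention rates.
# Risk distributions and the thresholds below are synthetic assumptions.
import random

random.seed(0)

# Hypothetical recidivism-risk scores in [0, 1] for two groups whose
# underlying risk distributions differ.
risk_a = [random.betavariate(2, 5) for _ in range(10_000)]  # lower-risk group
risk_b = [random.betavariate(3, 3) for _ in range(10_000)]  # higher-risk group

def detention_rate(scores, threshold):
    """Fraction detained under the rule 'detain if risk >= threshold'."""
    return sum(s >= threshold for s in scores) / len(scores)

# Unconstrained optimum: one uniform threshold applied to all defendants.
t_uniform = 0.5
rate_a = detention_rate(risk_a, t_uniform)
rate_b = detention_rate(risk_b, t_uniform)

# Because the risk distributions differ, the uniform threshold yields
# different detention rates by group. Equalizing detention rates requires
# a separate threshold per group: the (1 - target_rate) quantile of that
# group's risk scores.
target_rate = 0.3

def group_threshold(scores, rate):
    return sorted(scores)[int(len(scores) * (1 - rate))]

t_a = group_threshold(risk_a, target_rate)
t_b = group_threshold(risk_b, target_rate)

print(f"uniform threshold {t_uniform}: rates A={rate_a:.2f}, B={rate_b:.2f}")
print(f"parity thresholds: A={t_a:.2f}, B={t_b:.2f}")
```

The group-specific thresholds necessarily differ whenever the risk distributions do, which is the source of the trade-off the paper quantifies: relative to the uniform rule, the constrained rule detains some lower-risk defendants while releasing some higher-risk ones.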
Algorithmic Decision Making and the Cost of Fairness