DOI: 10.1145/3308560.3320086

Fairness-Aware Machine Learning: Practical Challenges and Lessons Learned

Published: 13 May 2019

ABSTRACT

Researchers and practitioners from different disciplines have highlighted the ethical and legal challenges posed by the use of machine learned models and data-driven systems, and the potential for such systems to discriminate against certain population groups, due to biases in algorithmic decision-making systems. This tutorial aims to present an overview of algorithmic bias / discrimination issues observed over the last few years and the lessons learned, key regulations and laws, and evolution of techniques for achieving fairness in machine learning systems. We will motivate the need for adopting a “fairness-first” approach (as opposed to viewing algorithmic bias / fairness considerations as an afterthought), when developing machine learning based models and systems for different consumer and enterprise applications. Then, we will focus on the application of fairness-aware machine learning techniques in practice, by highlighting industry best practices and case studies from different technology companies. Based on our experiences in industry, we will identify open problems and research challenges for the data mining / machine learning community.
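
The abstract above mentions group fairness notions and fairness-aware techniques without defining them here. As a minimal, illustrative sketch only (not part of the tutorial, and using hypothetical function and variable names), the following Python snippet computes two widely used group-fairness metrics: the demographic parity difference and the equal opportunity difference in the sense of Hardt et al. [13].

    # Illustrative sketch only: two common group-fairness metrics.
    # All names (y_true, y_pred, group, the function names) are hypothetical.
    import numpy as np

    def demographic_parity_difference(y_pred, group):
        """Absolute difference in positive-prediction rates between two groups."""
        y_pred, group = np.asarray(y_pred), np.asarray(group)
        return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

    def equal_opportunity_difference(y_true, y_pred, group):
        """Absolute difference in true-positive rates (recall) between two groups."""
        y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
        tpr_0 = y_pred[(group == 0) & (y_true == 1)].mean()
        tpr_1 = y_pred[(group == 1) & (y_true == 1)].mean()
        return abs(tpr_0 - tpr_1)

    # Toy example: binary predictions for eight individuals in two groups.
    y_true = [1, 0, 1, 1, 0, 1, 0, 1]
    y_pred = [1, 0, 1, 0, 0, 1, 1, 1]
    group  = [0, 0, 0, 0, 1, 1, 1, 1]
    print(demographic_parity_difference(y_pred, group))         # 0.25
    print(equal_opportunity_difference(y_true, y_pred, group))  # ~0.33

In practice, such metrics would be computed per protected attribute and tracked alongside accuracy, which is the kind of operational concern the tutorial's industry case studies address.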

References

1. J. Angwin, J. Larson, S. Mattu, and L. Kirchner. Machine bias. ProPublica, 2016.
2. S. Barocas and M. Hardt. Fairness in machine learning. In NIPS Tutorial, 2017.
3. A. Caliskan, J. J. Bryson, and A. Narayanan. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334), 2017.
4. L. E. Celis, D. Straszak, and N. K. Vishnoi. Ranking with fairness constraints. In ICALP, 2018.
5. S. Corbett-Davies, E. Pierson, A. Feller, S. Goel, and A. Huq. Algorithmic decision making and the cost of fairness. In KDD, 2017.
6. C. Dwork, M. Hardt, T. Pitassi, O. Reingold, and R. Zemel. Fairness through awareness. In ITCS, 2012.
7. S. A. Friedler, C. Scheidegger, and S. Venkatasubramanian. On the (im)possibility of fairness. arXiv:1609.07236, 2016.
8. S. A. Friedler, C. Scheidegger, S. Venkatasubramanian, S. Choudhary, E. P. Hamilton, and D. Roth. A comparative study of fairness-enhancing interventions in machine learning. arXiv:1802.04422, 2018.
9. B. Friedman and H. Nissenbaum. Bias in computer systems. ACM Transactions on Information Systems (TOIS), 14(3), 1996.
10. S. C. Geyik and K. Kenthapadi. Building representative talent search at LinkedIn. LinkedIn engineering blog post, October 2018. Available at https://engineering.linkedin.com/blog/2018/10/building-representative-talent-search-at-linkedin
11. S. Hajian, F. Bonchi, and C. Castillo. Algorithmic bias: From discrimination discovery to fairness-aware data mining. In KDD Tutorial on Algorithmic Bias, 2016.
12. S. Hajian, J. Domingo-Ferrer, and O. Farràs. Generalization-based privacy preservation and discrimination prevention in data publishing and mining. Data Mining and Knowledge Discovery, 28(5-6), 2014.
13. M. Hardt, E. Price, and N. Srebro. Equality of opportunity in supervised learning. In NIPS, 2016.
14. S. Jabbari, M. Joseph, M. Kearns, J. Morgenstern, and A. Roth. Fairness in reinforcement learning. In ICML, 2017.
15. J. Kleinberg, S. Mullainathan, and M. Raghavan. Inherent trade-offs in the fair determination of risk scores. In ITCS, 2017.
16. D. Pedreschi, S. Ruggieri, and F. Turini. Discrimination-aware data mining. In KDD, 2008.
17. B. Woodworth, S. Gunasekar, M. I. Ohannessian, and N. Srebro. Learning non-discriminatory predictors. In COLT, 2017.
18. M. B. Zafar, I. Valera, M. Gomez Rodriguez, and K. P. Gummadi. Fairness beyond disparate treatment & disparate impact: Learning classification without disparate mistreatment. In WWW, 2017.
19. M. Zehlike, F. Bonchi, C. Castillo, S. Hajian, M. Megahed, and R. Baeza-Yates. FA*IR: A fair top-k ranking algorithm. In CIKM, 2017.
20. R. Zemel, Y. Wu, K. Swersky, T. Pitassi, and C. Dwork. Learning fair representations. In ICML, 2013.


Reviews

Jonathan P. E. Hodgson

This is a timely paper in light of recent stories about bias in artificial intelligence (AI) systems, such as the COMPAS system used in Florida to predict recidivism. The tutorial's aim is to describe what the authors call a "fairness-first" approach to machine learning: much like a security-first view of system construction, fairness should be built in from the start rather than bolted on later. The tutorial covers industry best practices, sources of bias, algorithmic techniques for fairness, and fairness methods in practice. The notion of fairness is discussed along with various definitions, for example, individual and group fairness, and the discussion extends to fairness in ranking users for things like credit offers. A bibliography provides further references on this important topic. Because this paper is only a brief invitation to the conference tutorial, readers who missed it should consult the papers listed in the bibliography for a more comprehensive introduction to fairness in machine learning.
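
The review notes that the tutorial also treats fairness in ranking. As an illustrative sketch only, inspired by but far simpler than the FA*IR algorithm [19] (which uses a binomial significance test rather than a fixed floor), the following Python snippet checks whether every prefix of a ranked list contains at least a minimum share of candidates from a protected group; all names here (satisfies_min_representation, ranking, protected, p) are hypothetical.

    # Illustrative sketch only: a simplified prefix-representation check for
    # ranked results. Inspired by, but much weaker than, FA*IR [19].
    import math

    def satisfies_min_representation(ranking, protected, p):
        """ranking: candidate ids, best first; protected: set of protected ids;
        p: target minimum proportion of protected candidates in every prefix."""
        protected_seen = 0
        for position, candidate in enumerate(ranking, start=1):
            if candidate in protected:
                protected_seen += 1
            # Require at least floor(p * position) protected candidates so far.
            if protected_seen < math.floor(p * position):
                return False
        return True

    # Toy example: a top-6 ranking in which 'b', 'd', and 'f' are protected.
    ranking = ["a", "b", "c", "d", "e", "f"]
    protected = {"b", "d", "f"}
    print(satisfies_min_representation(ranking, protected, p=0.4))  # True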


  • Published in

    WWW '19: Companion Proceedings of The 2019 World Wide Web Conference
    May 2019
    1331 pages
    ISBN: 9781450366755
    DOI: 10.1145/3308560

    Copyright © 2019 ACM

    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    • Published: 13 May 2019


    Qualifiers

    • research-article
    • Research
    • Refereed limited

    Acceptance Rates

    Overall Acceptance Rate: 1,899 of 8,196 submissions, 23%
