Fairness Warnings and Fair-MAML: Learning Fairly with Minimal Data

ABSTRACT
Motivated by concerns surrounding the fairness effects of sharing and transferring fair machine learning tools, we propose two algorithms: Fairness Warnings and Fair-MAML. The first is a model-agnostic algorithm that provides interpretable boundary conditions for when a fairly trained model may not behave fairly on similar but slightly different tasks within a given domain. The second is a fair meta-learning approach to train models that can be quickly fine-tuned to specific tasks from only a few sample instances while balancing fairness and accuracy. We demonstrate experimentally the individual utility of each method using relevant baselines and provide the first experiment to our knowledge of K-shot fairness, i.e., training a fair model on a new task with only K data points. Then, we illustrate the usefulness of both algorithms as a combined method for training models from a few data points on new tasks while using Fairness Warnings as interpretable boundary conditions under which the newly trained model may not be fair.
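The meta-learning idea behind Fair-MAML can be pictured as a first-order MAML loop whose per-task loss carries a fairness regularizer: adapt on K support points, then meta-update on a query set. The sketch below is illustrative only, assuming a toy task generator, a soft demographic-parity penalty (squared gap between mean predicted scores per group), and arbitrary hyperparameters; it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad(w, X, y, a, gamma):
    """Gradient of binary cross-entropy plus a soft demographic-parity
    penalty gamma * (mean score of group 1 - mean score of group 0)^2."""
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) / len(y)            # BCE gradient
    m1, m0 = a == 1, a == 0
    if gamma > 0 and m1.any() and m0.any():  # skip penalty if a group is absent
        gap = p[m1].mean() - p[m0].mean()
        dp = p * (1 - p)                     # sigmoid derivative
        g1 = (X[m1] * dp[m1][:, None]).mean(axis=0)
        g0 = (X[m0] * dp[m0][:, None]).mean(axis=0)
        grad = grad + 2 * gamma * gap * (g1 - g0)
    return grad

def sample_task(K):
    """Toy task family: label driven by feature x0, protected group a driven
    by x1 plus a random per-task shift, so group balance varies across tasks."""
    shift = rng.normal()
    X = rng.normal(size=(2 * K, 2))
    a = (X[:, 1] + shift > 0).astype(int)
    y = (X[:, 0] + 0.5 * rng.normal(size=2 * K) > 0).astype(int)
    return (X[:K], y[:K], a[:K]), (X[K:], y[K:], a[K:])

def fair_maml(meta_iters=300, K=10, alpha=0.5, beta=0.1, gamma=2.0):
    """First-order MAML sketch: one inner gradient step on K support points,
    then a meta-update on the query set, both with the fairness term."""
    w = np.zeros(2)
    for _ in range(meta_iters):
        (Xs, ys, as_), (Xq, yq, aq) = sample_task(K)
        w_task = w - alpha * loss_grad(w, Xs, ys, as_, gamma)   # K-shot adapt
        w = w - beta * loss_grad(w_task, Xq, yq, aq, gamma)     # meta-update
    return w
```

After meta-training, a new task needs only the single inner step (`w - alpha * loss_grad(...)` on its K examples), which is what K-shot fairness asks for; setting `gamma=0` recovers plain first-order MAML.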