DOI: 10.1145/3442188.3445888 · FAccT Conference Proceedings
Research Article

What We Can't Measure, We Can't Understand: Challenges to Demographic Data Procurement in the Pursuit of Fairness

Published: 01 March 2021

ABSTRACT

As calls for fair and unbiased algorithmic systems increase, so too does the number of individuals working on algorithmic fairness in industry. However, these practitioners often do not have access to the demographic data they feel they need to detect bias in practice. Even the growing variety of toolkits and strategies for working towards algorithmic fairness almost invariably requires access to demographic attributes or proxies. We investigated this dilemma through semi-structured interviews with 38 practitioners and professionals either working in or adjacent to algorithmic fairness. Participants painted a complex picture of what demographic data availability and use look like on the ground, ranging from not having access to personal data of any kind to being legally required to collect and use demographic data for discrimination assessments. In many domains, demographic data collection raises a host of difficult questions, including how to balance privacy and fairness, how to define relevant social categories, how to ensure meaningful consent, and whether it is appropriate for private companies to infer someone's demographics. Our research suggests challenges that must be considered by businesses, regulators, researchers, and community groups in order to enable practitioners to address algorithmic bias in practice. Critically, we do not propose that the overall goal of future work should be to simply lower the barriers to collecting demographic data. Rather, our study surfaces a swath of normative questions about how, when, and whether this data should be procured, and, in cases where it is not, what should still be done to mitigate bias.
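
To make the toolkit dependency concrete, here is a minimal, hypothetical sketch of a disaggregated fairness check, assuming the open-source Fairlearn library; the labels, predictions, and group column are invented for illustration and are not drawn from the study.

```python
# Minimal sketch (not from the paper): computing group-level fairness metrics
# requires a demographic column; without it, these calls cannot be made.
import pandas as pd
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, demographic_parity_difference

# Hypothetical predictions and self-reported demographic labels.
y_true = pd.Series([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = pd.Series([1, 0, 0, 1, 0, 1, 1, 0])
group = pd.Series(["a", "a", "a", "b", "b", "b", "b", "a"])  # demographic attribute

# Accuracy broken down by demographic group.
mf = MetricFrame(metrics=accuracy_score, y_true=y_true, y_pred=y_pred,
                 sensitive_features=group)
print(mf.by_group)

# Gap in positive-prediction rates between the two groups.
print(demographic_parity_difference(y_true, y_pred, sensitive_features=group))
```

Without the `sensitive_features` column, per-group metrics like these cannot be computed at all, which is precisely the bind practitioners describe when demographic data is unavailable.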

Published in:
FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency
March 2021, 899 pages
ISBN: 9781450383097
DOI: 10.1145/3442188
Copyright © 2021 ACM
Publisher: Association for Computing Machinery, New York, NY, United States
