ABSTRACT
As calls for fair and unbiased algorithmic systems increase, so too does the number of individuals working on algorithmic fairness in industry. However, these practitioners often do not have access to the demographic data they feel they need to detect bias in practice. Even with the growing variety of toolkits and strategies for working towards algorithmic fairness, these almost invariably require access to demographic attributes or proxies. We investigated this dilemma through semi-structured interviews with 38 practitioners and professionals either working in or adjacent to algorithmic fairness. Participants painted a complex picture of what demographic data availability and use look like on the ground, ranging from not having access to personal data of any kind to being legally required to collect and use demographic data for discrimination assessments. In many domains, demographic data collection raises a host of difficult questions, including how to balance privacy and fairness, how to define relevant social categories, how to ensure meaningful consent, and whether it is appropriate for private companies to infer someone's demographics. Our research identifies challenges that businesses, regulators, researchers, and community groups must consider to enable practitioners to address algorithmic bias in practice. Critically, we do not propose that the overall goal of future work should be simply to lower the barriers to collecting demographic data. Rather, our study surfaces a swath of normative questions about how, when, and whether this data should be procured, and, in cases where it is not, what should still be done to mitigate bias.
What We Can't Measure, We Can't Understand: Challenges to Demographic Data Procurement in the Pursuit of Fairness