Social norm bias: residual harms of fairness-aware algorithms

Authors: Myra Cheng, Maria De-Arteaga, Lester Mackey, Adam Tauman Kalai

Published in: Data Mining and Knowledge Discovery, Issue 5/2023 (23-01-2023)

Abstract

Many modern machine learning algorithms mitigate bias by enforcing fairness constraints across coarsely-defined groups related to a sensitive attribute like gender or race. However, these algorithms seldom account for within-group heterogeneity and biases that may disproportionately affect some members of a group. In this work, we characterize Social Norm Bias (SNoB), a subtle but consequential type of algorithmic discrimination that may be exhibited by machine learning models, even when these systems achieve group fairness objectives. We study this issue through the lens of gender bias in occupation classification. We quantify SNoB by measuring how an algorithm’s predictions are associated with conformity to inferred gender norms. When predicting whether an individual belongs to a male-dominated occupation, this framework reveals that “fair” classifiers still favor biographies written in ways that align with inferred masculine norms. We compare SNoB across algorithmic fairness techniques and show that it is frequently a residual bias, and that post-processing approaches do not mitigate this type of bias at all.
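
To make the measurement concrete, here is a minimal sketch on synthetic data (the variables `norm_score` and `occ_score` are hypothetical stand-ins, not the paper's pipeline): within a single gender group, SNoB appears as a positive association between a biography's conformity to inferred masculine norms and the classifier's score for a male-dominated occupation.

```python
# Minimal sketch of the SNoB measurement idea on synthetic data (not the
# authors' code): within one gender group, test whether the occupation
# classifier's score rises with conformity to inferred masculine norms.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
norm_score = rng.normal(size=n)                    # conformity to inferred masculine norms
occ_score = 0.3 * norm_score + rng.normal(size=n)  # classifier score, synthetically biased

# A clearly positive within-group association indicates SNoB.
r = np.corrcoef(norm_score, occ_score)[0, 1]
print(f"association between norm conformity and classifier score: r = {r:.2f}")
```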


Footnotes
2
The dataset is publicly available at http://aka.ms/biasbios and licensed under the MIT License.
 
3
Consistent with previous work (De-Arteaga et al. 2019), we used regular expressions to remove the following words from the data: he, she, her, his, him, hers, himself, herself, mr, mrs, ms, ph, dr.
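
A minimal sketch of this preprocessing step (our reconstruction for illustration, not the original script):

```python
# Sketch of the regex-based scrubbing described above (a reconstruction, not
# the original preprocessing code): remove the listed gendered words as whole
# words, case-insensitively.
import re

GENDERED = ["he", "she", "her", "his", "him", "hers", "himself", "herself",
            "mr", "mrs", "ms", "ph", "dr"]
PATTERN = re.compile(r"\b(?:" + "|".join(GENDERED) + r")\b", flags=re.IGNORECASE)

def scrub(bio: str) -> str:
    """Delete gendered tokens and collapse the leftover whitespace."""
    return re.sub(r"\s+", " ", PATTERN.sub("", bio)).strip()

print(scrub("Dr Jane Smith is a surgeon. She trained at ..."))
# -> "Jane Smith is a surgeon. trained at ..."
```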
 
4
We compute p values for the two-sided test of zero correlation between \(p_c\) and \(r_c\) using SciPy’s spearmanr function (Virtanen et al. 2020). Values marked with \(^*\) and \(^{**}\) indicate that the p value is \( < 0.05\) and \(< 0.01\) respectively.
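
For concreteness, a sketch of this test with hypothetical per-occupation values for \(p_c\) and \(r_c\) (made-up numbers, not results from the paper):

```python
# Sketch of the significance test described above. SciPy's spearmanr returns
# the rank correlation and, by default, the two-sided p value for the null
# hypothesis of zero correlation.
from scipy.stats import spearmanr

p_c = [0.90, 0.75, 0.62, 0.48, 0.33, 0.21]   # hypothetical per-occupation values
r_c = [0.41, 0.35, 0.28, 0.20, 0.05, -0.02]  # hypothetical per-occupation values

rho, p_value = spearmanr(p_c, r_c)
stars = "**" if p_value < 0.01 else "*" if p_value < 0.05 else ""
print(f"rho = {rho:.2f}, p = {p_value:.3g} {stars}")
```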
 
5
CoCL (Romanov et al. 2019) is modulated by a hyperparameter \(\lambda\) that determines the strength of the fairness constraint. We use \(\lambda = 2\), which Romanov et al. (2019) find to have the smallest Gap\(^{\textsc{RMS}}\) on the occupation classification task.
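
Schematically, \(\lambda\) weights the fairness penalty against the task loss in the training objective (a generic sketch of this trade-off, not CoCL's actual loss term, which is defined in Romanov et al. 2019):

```python
# Generic sketch of a lambda-weighted fairness constraint in a training loss
# (illustrative only; CoCL's actual penalty term differs).
def total_loss(task_loss: float, fairness_penalty: float, lam: float = 2.0) -> float:
    """Larger lam enforces the fairness constraint more strongly."""
    return task_loss + lam * fairness_penalty

print(total_loss(task_loss=0.52, fairness_penalty=0.08))  # -> 0.68
```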
 
6
We computed the p values using the fdrcorrection method from the statsmodels Python package (Seabold and Perktold 2010).
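
A sketch of this correction with made-up p values (not the paper's results):

```python
# Sketch of the Benjamini-Hochberg correction described above. fdrcorrection
# returns a rejection flag and the FDR-adjusted p value for each test.
from statsmodels.stats.multitest import fdrcorrection

raw_p = [0.001, 0.008, 0.020, 0.041, 0.30]  # hypothetical raw p values
rejected, adjusted_p = fdrcorrection(raw_p, alpha=0.05)
for p, q, rej in zip(raw_p, adjusted_p, rejected):
    print(f"p = {p:.3f} -> q = {q:.3f}  significant: {rej}")
```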
 
Literature
Adi Y, Kermany E, Belinkov Y, Lavi O, Goldberg Y (2017) Fine-grained analysis of sentence embeddings using auxiliary prediction tasks. In: 5th international conference on learning representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings, OpenReview.net. https://openreview.net/forum?id=BJh6Ztuxl
Agarwal A, Beygelzimer A, Dudík M, Langford J, Wallach H (2018) A reductions approach to fair classification. In: International conference on machine learning, PMLR, pp 60–69
Agius S, Tobler C (2012) Trans and intersex people. Discrimination on the grounds of sex, gender identity and gender expression. Office for Official Publications of the European Union
Antoniak M, Mimno D (2021) Bad seeds: evaluating lexical methods for bias measurement. In: Proceedings of the 59th annual meeting of the association for computational linguistics and the 11th international joint conference on natural language processing (Volume 1: Long Papers), Association for Computational Linguistics, Online, pp 1889–1904. https://doi.org/10.18653/v1/2021.acl-long.148
Bartl M, Nissim M, Gatt A (2020) Unmasking contextual stereotypes: measuring and mitigating BERT's gender bias. In: Proceedings of the second workshop on gender bias in natural language processing, pp 1–16
Bellamy RK, Dey K, Hind M, Hoffman SC, Houde S, Kannan K, Lohia P, Martino J, Mehta S, Mojsilović A et al (2019) AI Fairness 360: an extensible toolkit for detecting and mitigating algorithmic bias. IBM J Res Dev 63(4/5):1–4
Benjamini Y, Hochberg Y (1995) Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Stat Soc: Ser B (Methodol) 57(1):289–300
Bertrand M, Mullainathan S (2004) Are Emily and Greg more employable than Lakisha and Jamal? A field experiment on labor market discrimination. Am Econ Rev 94(4):991–1013
Blodgett SL, Barocas S, Daumé III H, Wallach H (2020) Language (technology) is power: a critical survey of “bias” in NLP. In: Proceedings of the 58th annual meeting of the association for computational linguistics, pp 5454–5476
Blodgett SL, Lopez G, Olteanu A, Sim R, Wallach H (2021) Stereotyping Norwegian salmon: an inventory of pitfalls in fairness benchmark datasets. In: Proceedings of the 59th annual meeting of the association for computational linguistics and the 11th international joint conference on natural language processing (Volume 1: Long Papers), Association for Computational Linguistics, Online, pp 1004–1015. https://doi.org/10.18653/v1/2021.acl-long.81
Bogen M, Rieke A (2018) Help wanted: an examination of hiring algorithms, equity, and bias. Upturn, December 7
Bojanowski P, Grave E, Joulin A, Mikolov T (2017) Enriching word vectors with subword information. Trans Assoc Comput Linguist 5:135–146
Bolukbasi T, Chang KW, Zou JY, Saligrama V, Kalai AT (2016) Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. Adv Neural Inf Process Syst 29:4349–4357
Bordia S, Bowman SR (2019) Identifying and reducing gender bias in word-level language models. In: Proceedings of the 2019 conference of the North American chapter of the association for computational linguistics: student research workshop, Association for Computational Linguistics, Minneapolis, Minnesota, pp 7–15. https://doi.org/10.18653/v1/N19-3002
Buolamwini J, Gebru T (2018) Gender shades: intersectional accuracy disparities in commercial gender classification. In: Friedler SA, Wilson C (eds) Conference on fairness, accountability and transparency, FAT 2018, 23-24 February 2018, New York, NY, USA, PMLR, Proceedings of Machine Learning Research, vol 81, pp 77–91. http://proceedings.mlr.press/v81/buolamwini18a.html
Butler J (1989) Gender trouble: feminism and the subversion of identity. Routledge, London
Caliskan A, Bryson JJ, Narayanan A (2017) Semantics derived automatically from language corpora contain human-like biases. Science 356(6334):183–186
Calmon FP, Wei D, Vinzamuri B, Ramamurthy KN, Varshney KR (2017) Optimized pre-processing for discrimination prevention. In: Proceedings of the 31st international conference on neural information processing systems, pp 3995–4004
Ceren A, Tekir S (2021) Gender bias in occupation classification from the New York Times obituaries. Dokuz Eylül Üniversitesi Mühendislik Fakültesi Fen ve Mühendislik Dergisi 24(71):425–436
Crenshaw K (1990) Mapping the margins: intersectionality, identity politics, and violence against women of color. Stan L Rev 43:1241
Cryan J, Tang S, Zhang X, Metzger M, Zheng H, Zhao BY (2020) Detecting gender stereotypes: lexicon versus supervised learning methods. In: Proceedings of the 2020 CHI conference on human factors in computing systems, pp 1–11
De-Arteaga M, Romanov A, Wallach H, Chayes J, Borgs C, Chouldechova A, Geyik S, Kenthapadi K, Kalai AT (2019) Bias in bios: a case study of semantic representation bias in a high-stakes setting. In: Proceedings of the conference on fairness, accountability, and transparency, pp 120–128
Devlin J, Chang M, Lee K, Toutanova K (2019) BERT: pre-training of deep bidirectional transformers for language understanding. In: Burstein J, Doran C, Solorio T (eds) Proceedings of the 2019 conference of the North American chapter of the association for computational linguistics: human language technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), Association for Computational Linguistics, pp 4171–4186. https://doi.org/10.18653/v1/n19-1423
Dwork C, Hardt M, Pitassi T, Reingold O, Zemel R (2012) Fairness through awareness. In: Proceedings of the 3rd innovations in theoretical computer science conference, pp 214–226
Dwork C, Immorlica N, Kalai AT, Leiserson M (2018) Decoupled classifiers for group-fair and efficient machine learning. In: Conference on fairness, accountability and transparency, PMLR, pp 119–133
Ensmenger N (2015) Beards, sandals, and other signs of rugged individualism: masculine culture within the computing professions. Osiris 30(1):38–65
Garg N, Schiebinger L, Jurafsky D, Zou J (2018) Word embeddings quantify 100 years of gender and ethnic stereotypes. Proc Natl Acad Sci 115(16):E3635–E3644
Geyik SC, Ambler S, Kenthapadi K (2019) Fairness-aware ranking in search and recommendation systems with application to LinkedIn talent search. In: Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery and data mining, pp 2221–2231
Glick JL, Theall K, Andrinopoulos K, Kendall C (2018) For data's sake: dilemmas in the measurement of gender minorities. Cult Health Sex 20(12):1362–1377
Gonen H, Goldberg Y (2019) Lipstick on a pig: debiasing methods cover up systematic gender biases in word embeddings but do not remove them. In: Burstein J, Doran C, Solorio T (eds) Proceedings of the 2019 conference of the North American chapter of the association for computational linguistics: human language technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), Association for Computational Linguistics, pp 609–614. https://doi.org/10.18653/v1/n19-1061
Hanna A, Denton E, Smart A, Smith-Loud J (2020) Towards a critical race methodology in algorithmic fairness. In: Proceedings of the 2020 conference on fairness, accountability, and transparency, pp 501–512
Hardt M, Price E, Srebro N (2016) Equality of opportunity in supervised learning. In: Proceedings of the 30th international conference on neural information processing systems, pp 3323–3331
Hébert-Johnson U, Kim M, Reingold O, Rothblum G (2018) Multicalibration: calibration for the (computationally-identifiable) masses. In: International conference on machine learning, PMLR, pp 1939–1948
Heilman ME (2001) Description and prescription: how gender stereotypes prevent women's ascent up the organizational ladder. J Soc Issues 57(4):657–674
Heilman ME (2012) Gender stereotypes and workplace bias. Res Organ Behav 32:113–135
Hu L, Kohler-Hausmann I (2020) What's sex got to do with machine learning? In: Proceedings of the 2020 conference on fairness, accountability, and transparency, p 513
Johnson SK, Hekman DR, Chan ET (2016) If there's only one woman in your candidate pool, there's statistically no chance she'll be hired. Harv Bus Rev 26(04):1–7
Kamiran F, Calders T (2012) Data preprocessing techniques for classification without discrimination. Knowl Inf Syst 33(1):1–33
Kamiran F, Karim A, Zhang X (2012) Decision theory for discrimination-aware classification. In: 2012 IEEE 12th international conference on data mining, IEEE, pp 924–929
Kearns MJ, Neel S, Roth A, Wu ZS (2018) Preventing fairness gerrymandering: auditing and learning for subgroup fairness. In: Dy JG, Krause A (eds) Proceedings of the 35th international conference on machine learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, PMLR, Proceedings of Machine Learning Research, vol 80, pp 2569–2577. http://proceedings.mlr.press/v80/kearns18a.html
Keyes O, May C, Carrell A (2021) You keep using that word: ways of thinking about gender in computing research. Proc ACM Human-Comput Interact 5(CSCW1):1–23
Kusner MJ, Loftus J, Russell C, Silva R (2017) Counterfactual fairness. In: Advances in neural information processing systems 30 (NIPS 2017)
Larson B (2017) Gender as a variable in natural-language processing: ethical considerations. In: Proceedings of the first ACL workshop on ethics in natural language processing, Association for Computational Linguistics, Valencia, Spain, pp 1–11. https://doi.org/10.18653/v1/W17-1601
Lipton Z, McAuley J, Chouldechova A (2018) Does mitigating ML's impact disparity require treatment disparity? In: Advances in neural information processing systems 31
Lohia PK, Ramamurthy KN, Bhide M, Saha D, Varshney KR, Puri R (2019) Bias mitigation post-processing for individual and group fairness. In: ICASSP 2019-2019 IEEE international conference on acoustics, speech and signal processing (ICASSP), IEEE, pp 2847–2851
Madera JM, Hebl MR, Martin RC (2009) Gender and letters of recommendation for academia: agentic and communal differences. J Appl Psychol 94(6):1591
Mangheni M, Tufan H, Nkengla L, Aman B, Boonabaana B (2019) Gender norms, technology access, and women farmers' vulnerability to climate change in sub-Saharan Africa. In: Agriculture and ecosystem resilience in Sub Saharan Africa, Springer, pp 715–728
Marx C, Calmon F, Ustun B (2020) Predictive multiplicity in classification. In: International conference on machine learning, PMLR, pp 6765–6774
Mikolov T, Grave É, Bojanowski P, Puhrsch C, Joulin A (2018) Advances in pre-training distributed word representations. In: Proceedings of the eleventh international conference on language resources and evaluation (LREC 2018)
Mitchell M, Baker D, Moorosi N, Denton E, Hutchinson B, Hanna A, Gebru T, Morgenstern J (2020) Diversity and inclusion metrics in subset selection. In: Proceedings of the AAAI/ACM conference on AI, ethics, and society, pp 117–123
Moon R (2014) From gorgeous to grumpy: adjectives, age and gender. Gender Lang 8(1):5–41
Nadeem M, Bethke A, Reddy S (2021) StereoSet: measuring stereotypical bias in pretrained language models. In: Zong C, Xia F, Li W, Navigli R (eds) Proceedings of the 59th annual meeting of the association for computational linguistics and the 11th international joint conference on natural language processing, ACL/IJCNLP 2021 (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, Association for Computational Linguistics, pp 5356–5371. https://doi.org/10.18653/v1/2021.acl-long.416
Nangia N, Vania C, Bhalerao R, Bowman SR (2020) CrowS-Pairs: a challenge dataset for measuring social biases in masked language models. In: Webber B, Cohn T, He Y, Liu Y (eds) Proceedings of the 2020 conference on empirical methods in natural language processing, EMNLP 2020, Online, November 16-20, 2020, Association for Computational Linguistics, pp 1953–1967. https://doi.org/10.18653/v1/2020.emnlp-main.154
Noble SU (2018) Algorithms of oppression: how search engines reinforce racism. NYU Press, New York
Park JH, Shin J, Fung P (2018) Reducing gender bias in abusive language detection. In: Proceedings of the 2018 conference on empirical methods in natural language processing, Association for Computational Linguistics, Brussels, Belgium, pp 2799–2804. https://doi.org/10.18653/v1/D18-1302
Peng A, Nushi B, Kıcıman E, Inkpen K, Suri S, Kamar E (2019) What you see is what you get? The impact of representation criteria on human bias in hiring. Proc AAAI Conf Hum Comput Crowdsour 7:125–134
Peng A, Nushi B, Kiciman E, Inkpen K, Kamar E (2022) Investigations of performance and bias in human-AI teamwork in hiring. In: Proceedings of the 36th AAAI conference on artificial intelligence (AAAI 2022), AAAI
Pleiss G, Raghavan M, Wu F, Kleinberg J, Weinberger KQ (2017) On fairness and calibration. In: Advances in neural information processing systems 30 (NIPS 2017)
Raghavan M, Barocas S, Kleinberg J, Levy K (2020) Mitigating bias in algorithmic hiring: evaluating claims and practices. In: Proceedings of the 2020 conference on fairness, accountability, and transparency, pp 469–481
Romanov A, De-Arteaga M, Wallach HM, Chayes JT, Borgs C, Chouldechova A, Geyik SC, Kenthapadi K, Rumshisky A, Kalai A (2019) What's in a name? Reducing bias in bios without access to protected attributes. In: Burstein J, Doran C, Solorio T (eds) Proceedings of the 2019 conference of the North American chapter of the association for computational linguistics: human language technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), Association for Computational Linguistics, pp 4187–4195. https://doi.org/10.18653/v1/n19-1424
Rudinger R, May C, Van Durme B (2017) Social bias in elicited natural language inferences. In: Proceedings of the first ACL workshop on ethics in natural language processing, Association for Computational Linguistics, Valencia, Spain, pp 74–79. https://doi.org/10.18653/v1/W17-1609
Rudinger R, Naradowsky J, Leonard B, Durme BV (2018) Gender bias in coreference resolution. In: Walker MA, Ji H, Stent A (eds) Proceedings of the 2018 conference of the North American chapter of the association for computational linguistics: human language technologies, NAACL-HLT, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 2 (Short Papers), Association for Computational Linguistics, pp 8–14. https://doi.org/10.18653/v1/n18-2002
Russell B (2012) Perceptions of female offenders: how stereotypes and social norms affect criminal justice responses. Springer Science and Business Media, Berlin
Sánchez-Monedero J, Dencik L, Edwards L (2020) What does it mean to 'solve' the problem of discrimination in hiring? Social, technical and legal perspectives from the UK on automated hiring systems. In: Proceedings of the 2020 conference on fairness, accountability, and transparency, pp 458–468
Seabold S, Perktold J (2010) Statsmodels: econometric and statistical modeling with Python. In: 9th Python in science conference
Sen M, Wasow O (2016) Race as a bundle of sticks: designs that estimate effects of seemingly immutable characteristics. Annu Rev Polit Sci 19:499–522
Shields SA (2008) Gender: an intersectionality perspective. Sex Roles 59(5):301–311
Snyder K (2015) The resume gap: are different gender styles contributing to tech's dismal diversity? Fortune Magazine
Swinger N, De-Arteaga M, Heffernan IV NT, Leiserson MD, Kalai AT (2019) What are the biases in my word embedding? In: Proceedings of the 2019 AAAI/ACM conference on AI, ethics, and society, pp 305–311
Tang S, Zhang X, Cryan J, Metzger MJ, Zheng H, Zhao BY (2017) Gender bias in the job market: a longitudinal analysis. Proc ACM Human-Comput Interact 1(CSCW):1–19
Virtanen P, Gommers R, Oliphant TE, Haberland M, Reddy T, Cournapeau D, Burovski E, Peterson P, Weckesser W, Bright J et al (2020) SciPy 1.0: fundamental algorithms for scientific computing in Python. Nat Methods 17(3):261–272
Wagner C, Garcia D, Jadidi M, Strohmaier M (2015) It's a man's Wikipedia? Assessing gender inequality in an online encyclopedia. In: Proceedings of the international AAAI conference on web and social media, vol 9
Wang T, Zhao J, Yatskar M, Chang KW, Ordonez V (2019) Balanced datasets are not enough: estimating and mitigating gender bias in deep image representations. In: Proceedings of the IEEE/CVF international conference on computer vision, pp 5310–5319
Wolf T, Debut L, Sanh V, Chaumond J, Delangue C, Moi A, Cistac P, Rault T, Louf R, Funtowicz M, Davison J, Shleifer S, von Platen P, Ma C, Jernite Y, Plu J, Xu C, Scao TL, Gugger S, Drame M, Lhoest Q, Rush AM (2020) Transformers: state-of-the-art natural language processing. In: Proceedings of the 2020 conference on empirical methods in natural language processing: system demonstrations, Association for Computational Linguistics, Online, pp 38–45. https://www.aclweb.org/anthology/2020.emnlp-demos.6
Wood W, Eagly AH (2009) Gender identity. In: Handbook of individual differences in social behavior, pp 109–125
Zhang BH, Lemoine B, Mitchell M (2018) Mitigating unwanted biases with adversarial learning. In: Proceedings of the 2018 AAAI/ACM conference on AI, ethics, and society, pp 335–340
Zhou X, Sap M, Swayamdipta S, Choi Y, Smith NA (2021) Challenges in automated debiasing for toxic language detection. In: Merlo P, Tiedemann J, Tsarfaty R (eds) Proceedings of the 16th conference of the European chapter of the association for computational linguistics: main volume, EACL 2021, Online, April 19-23, 2021, Association for Computational Linguistics, pp 3143–3155. https://doi.org/10.18653/v1/2021.eacl-main.274
Metadata
Title
Social norm bias: residual harms of fairness-aware algorithms
Authors
Myra Cheng
Maria De-Arteaga
Lester Mackey
Adam Tauman Kalai
Publication date
23-01-2023
Publisher
Springer US
Published in
Data Mining and Knowledge Discovery / Issue 5/2023
Print ISSN: 1384-5810
Electronic ISSN: 1573-756X
DOI
https://doi.org/10.1007/s10618-022-00910-8
