Interdisciplinarity, Gender Diversity, and Network Structure Predict the Centrality of AI Organizations
Artificial intelligence (AI) research plays an increasingly important role in society, impacting key aspects of human life. From face recognition algorithms aiding national security in airports, to software that advises judges in criminal cases, and ...
Dynamic Privacy Budget Allocation Improves Data Efficiency of Differentially Private Gradient Descent
Protecting privacy in learning while maintaining the model performance has become increasingly critical in many applications that involve sensitive data. A popular private learning framework is differentially private learning composed of many privatized ...
A Data-driven analysis of the interplay between Criminological theory and predictive policing algorithms
Previous studies have focused on the biases and feedback loops that occur in predictive policing algorithms. These studies show how systemically and institutionally biased data leads to these feedback loops when predictive policing algorithms are ...
#FuckTheAlgorithm: algorithmic imaginaries and political resistance
This paper applies and extends the concept of algorithmic imaginaries in the context of political resistance to sociotechnical injustice. Focusing on the 2020 UK Ofqual protests, the role of the “fuck the algorithm” chant is examined as an imaginary of ...
Learning to Break Deep Perceptual Hashing: The Use Case NeuralHash
Apple recently revealed its deep perceptual hashing system NeuralHash to detect child sexual abuse material (CSAM) on user devices before files are uploaded to its iCloud service. Public criticism quickly arose regarding the protection of user privacy ...
Fairness Indicators for Systematic Assessments of Visual Feature Extractors
Does everyone equally benefit from computer vision systems? Answers to this question become more and more important as computer vision systems are deployed at large scale, and can spark major concerns when they exhibit vast performance discrepancies ...
FAccT-Check on AI regulation: Systematic Evaluation of AI Regulation on the Example of the Legislation on the Use of AI in the Public Sector in the German Federal State of Schleswig-Holstein
In the framework of the current discussions about regulating Artificial Intelligence (AI) and machine learning (ML), the small Federal State of Schleswig-Holstein in Northern Germany hurries ahead and adopts legislation on the Use of AI in the public ...
News from Generative Artificial Intelligence Is Believed Less
Artificial Intelligence (AI) can generate text virtually indistinguishable from text written by humans. A key question, then, is whether people believe news headlines generated by AI as much as news headlines generated by humans. AI is viewed as lacking ...
When learning becomes impossible
We formally analyze an epistemic bias we call interpretive blindness (IB), in which under certain conditions a learner will be incapable of learning. IB is now common in our society, but it is a natural consequence of Bayesian inference and what we ...
Providing Item-side Individual Fairness for Deep Recommender Systems
The recent advent of deep learning techniques has reinforced the development of new recommender systems. Although these systems have been demonstrated as efficient and effective, the issue of item popularity bias in these recommender systems has raised ...
What People Think AI Should Infer From Faces
Faces play an indispensable role in human social life. At present, computer vision artificial intelligence (AI) captures and interprets human faces for a variety of digital applications and services. The ambiguity of facial information has recently led ...
Minimax Demographic Group Fairness in Federated Learning
Federated learning is an increasingly popular paradigm that enables a large number of entities to collaboratively learn better models. In this work, we study minimax group fairness in federated learning scenarios where different participating entities ...
Automating Care: Online Food Delivery Work During the CoVID-19 Crisis in India
On March 23, 2020, the Government of India (GoI) announced one of the strictest nationwide lockdowns in the world to curb the spread of novel SARS-CoV-2, otherwise known as CoVID-19. The country came to a standstill overnight and the service industry, ...
The Values Encoded in Machine Learning Research
Machine learning currently exerts an outsized influence on the world, increasingly affecting institutional practices and impacted communities. It is therefore critical that we question vague conceptions of the field as value-neutral or universally ...
AI Opacity and Explainability in Tort Litigation
A spate of recent accidents and a lawsuit involving Tesla's ‘self-driving’ cars highlight the growing need for meaningful accountability when harms are caused by AI systems. Tort (or civil liability) lawsuits are one important way for victims to redress ...
A Framework for Deprecating Datasets: Standardizing Documentation, Identification, and Communication
Datasets are central to training machine learning (ML) models. The ML community has recently made significant improvements to data stewardship and documentation practices across the model development life cycle. However, the act of deprecating, or ...
Treatment Effect Risk: Bounds and Inference
Since the average treatment effect (ATE) measures the change in social welfare, even if positive, there is a risk of negative effect on, say, some 10% of the population. Assessing such risk is difficult, however, because any one individual treatment ...
Taxonomy of Risks posed by Language Models
Laura Weidinger, Jonathan Uesato, Maribeth Rauh, Conor Griffin, Po-Sen Huang, John Mellor, Amelia Glaese, Myra Cheng, Borja Balle, Atoosa Kasirzadeh, Courtney Biles, Sasha Brown, Zac Kenton, Will Hawkins, Tom Stepleton, Abeba Birhane, Lisa Anne Hendricks, Laura Rimell, William Isaac, Julia Haas, Sean Legassick, Geoffrey Irving, Iason Gabriel
Responsible innovation on large-scale Language Models (LMs) requires foresight into and in-depth understanding of the risks these models may pose. This paper develops a comprehensive taxonomy of ethical and social risks associated with LMs. We identify ...
Bias in Automated Speaker Recognition
Automated speaker recognition uses data processing to identify speakers by their voice. Today, automated speaker recognition is deployed on billions of smart devices and in services such as call centres. Despite their wide-scale deployment and known ...
It’s Just Not That Simple: An Empirical Study of the Accuracy-Explainability Trade-off in Machine Learning for Public Policy
To achieve high accuracy in machine learning (ML) systems, practitioners often use complex “black-box” models that are not easily understood by humans. The opacity of such models has resulted in public concerns about their use in high-stakes contexts ...
South Korean Public Value Coproduction Towards ‘AI for Humanity’: A Synergy of Sociocultural Norms and Multistakeholder Deliberation in Bridging the Design and Implementation of National AI Ethics Guidelines
As emerging technologies such as Big Data, Artificial Intelligence (AI), robotics, and the Internet of Things (IoT) pose fundamental challenges for global and domestic technological governance, the ‘Fourth Industrial Revolution’ (4IR) comes to the fore ...
Equitable Public Bus Network Optimization for Social Good: A Case Study of Singapore
Public bus transport is a major backbone of many cities’ socioeconomic activities. As such, the topic of public bus network optimization has received substantial attention in Geographic Information System (GIS) research. Unfortunately, most of the ...
GetFair: Generalized Fairness Tuning of Classification Models
We present GetFair, a novel framework for tuning fairness of classification models. The fair classification problem deals with training models for a given classification task where data points have sensitive attributes. The goal of fair classification ...
Social Inclusion in Curated Contexts: Insights from Museum Practices
Artificial intelligence literature suggests that minority and fragile communities in society can be negatively impacted by machine learning algorithms due to inherent biases in the design process, which lead to socially exclusive decisions and policies. ...
How Different Groups Prioritize Ethical Values for Responsible AI
Private companies, public sector organizations, and academic groups have outlined ethical values they consider important for responsible artificial intelligence technologies. While their recommendations converge on a set of central values, little is ...
Measuring Representational Harms in Image Captioning
Previous work has largely considered the fairness of image captioning systems through the underspecified lens of “bias.” In contrast, we present a set of techniques for measuring five types of representational harms, as well as the resulting ...
Towards Intersectionality in Machine Learning: Including More Identities, Handling Underrepresentation, and Performing Evaluation
Research in machine learning fairness has historically considered a single binary demographic attribute; however, the reality is of course far more complicated. In this work, we grapple with questions that arise along three stages of the machine ...
An Outcome Test of Discrimination for Ranked Lists
This paper extends Becker [3]’s outcome test of discrimination to settings where a (human or algorithmic) decision-maker produces a ranked list of candidates. Ranked lists are particularly relevant in the context of online platforms that produce search ...
Causal Inference Struggles with Agency on Online Platforms
Online platforms regularly conduct randomized experiments to understand how changes to the platform causally affect various outcomes of interest. However, experimentation on online platforms has been criticized for having, among other issues, a lack of ...
Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency