1 Introduction
- We introduce a novel style-transfer GAN that transfers multiple demographic attributes simultaneously. By conditioning on different sets of target images, our framework generates diverse images for each attribute class.
- We present a multi-attribute extension of AdaIN (Sect. 3.2). The proposed fusion framework models the multiplicative interactions between attribute representations through a tensor-based mixing structure, resulting in a single conditioning variable.
- Through a series of qualitative and quantitative experiments (Sect. 4), we benchmark our model's ability to enhance dataset diversity against state-of-the-art baselines (both multi-attribute transfer and age-progression methods). To quantify the diversity-enhancing capabilities of the models, we adopt the established diversity metrics introduced in Merler et al. (2019).
- We provide a thorough investigation of how dataset bias affects classification performance and show that more diverse datasets can be used to train less biased classifiers (Sect. 4.6). We also study bias in gender recognition (within the binary paradigm in which the task is currently commonly framed in practice) on two datasets: MORPH and KANFace. The experimental analysis indicates that augmenting the training sets with our model mitigates classifier bias more effectively than competing state-of-the-art methods.
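The multi-attribute conditioning described above (Sect. 3.2) can be illustrated with a short sketch. This is not the paper's implementation: `adain`, `tensor_fuse`, and the projection matrix `W` are hypothetical names, the fusion here is a plain outer-product-plus-projection, and the actual tensor mixing structure may differ.

```python
import numpy as np

def adain(content, style_mean, style_std, eps=1e-5):
    """Adaptive instance normalization: renormalize content feature
    maps of shape (C, H, W) to the given per-channel statistics."""
    mu = content.mean(axis=(1, 2), keepdims=True)
    sigma = content.std(axis=(1, 2), keepdims=True)
    normalized = (content - mu) / (sigma + eps)
    return style_std[:, None, None] * normalized + style_mean[:, None, None]

def tensor_fuse(attr_a, attr_b, W):
    """Sketch of a tensor-based mixing of two attribute embeddings:
    the outer product captures their multiplicative interactions,
    and W projects it down to a single conditioning vector."""
    interactions = np.outer(attr_a, attr_b).ravel()
    return W @ interactions
```

In a full model, the fused conditioning vector would typically be split into per-channel scale and shift parameters that feed `adain`; here the two functions are shown separately for clarity.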
2 Related Work
2.1 Generative Adversarial Networks and Style Transfer
2.2 Transfer of Demographic Attributes
2.3 Fairness-aware Learning and Face Analysis
3 Methodology
3.1 Notation
3.2 Proposed Framework
3.3 Training Objective
4 Experiments
4.1 Implementation Details
4.2 Datasets
4.3 Baselines
4.4 Qualitative Results
4.4.1 Attribute Transfer
4.4.2 Intra-class Diversity
4.5 Diversity Enhancement
4.6 Mitigating Classifier Bias
| Method | Age ShH | Age ShE | Age SiD | Age SiE | Gender ShH | Gender ShE | Gender SiD | Gender SiE | Skin Tone ShH | Skin Tone ShE | Skin Tone SiD | Skin Tone SiE |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| GT | – | – | – | – | 0.36 | 0.52 | 1.26 | 0.63 | 0.48 | 0.70 | 1.44 | 0.72 |
| IPCGAN | 1.07 | 0.66 | 2.27 | 0.45 | – | – | – | – | – | – | – | – |
| CAAE | 0.91 | 0.57 | 2.11 | 0.42 | – | – | – | – | – | – | – | – |
| StarGAN | 1.38 | 0.86 | 3.67 | 0.73 | 0.690 | 0.996 | 1.980 | 0.994 | 0.69 | 0.99 | 1.99 | 0.99 |
| AttGAN | 1.29 | 0.80 | 3.23 | 0.65 | 0.55 | 0.79 | 1.57 | 0.78 | 0.69 | 0.99 | 1.99 | 0.99 |
| Ours | 1.42 | 0.88 | 3.70 | 0.74 | 0.692 | 0.999 | 1.998 | 0.999 | 0.69 | 0.99 | 1.99 | 0.99 |
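The ShH/ShE/SiD/SiE columns report the diversity metrics of Merler et al. (2019). The following is a minimal sketch under one plausible reading of those metrics (Shannon entropy and evenness, inverse-Simpson diversity and evenness, natural logarithms); the paper's exact definitions may differ.

```python
import math
from collections import Counter

def diversity_metrics(labels):
    """Diversity of a set of attribute labels (assumes >= 2 classes).
    ShH: Shannon entropy       H   = -sum p ln p
    ShE: Shannon evenness      H / ln k
    SiD: inverse Simpson index 1 / sum p^2
    SiE: Simpson evenness      SiD / k
    """
    counts = Counter(labels)
    n = sum(counts.values())
    k = len(counts)
    ps = [c / n for c in counts.values()]
    shh = -sum(p * math.log(p) for p in ps)
    sid = 1.0 / sum(p * p for p in ps)
    return {"ShH": shh, "ShE": shh / math.log(k),
            "SiD": sid, "SiE": sid / k}
```

Under these definitions, a perfectly balanced binary attribute attains the ceiling ShH = ln 2 ≈ 0.693, ShE = 1, SiD = 2, SiE = 1, which is consistent with the near-ceiling binary-gender scores in the tables.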
4.6.1 Comparison to Debiasing Methods
| Method | Age ShH | Age ShE | Age SiD | Age SiE | Gender ShH | Gender ShE | Gender SiD | Gender SiE |
|---|---|---|---|---|---|---|---|---|
| GT | – | – | – | – | 0.6928 | 0.9995 | 1.9986 | 0.9993 |
| IPCGAN | 1.16 | 0.72 | 2.87 | 0.57 | – | – | – | – |
| CAAE | 0.90 | 0.56 | 2.16 | 0.43 | – | – | – | – |
| StarGAN | 1.39 | 0.86 | 3.80 | 0.76 | 0.6906 | 0.9963 | 1.9899 | 0.9950 |
| AttGAN | 1.12 | 0.69 | 2.68 | 0.54 | 0.6922 | 0.9987 | 1.9963 | 0.9982 |
| Ours | 1.39 | 0.86 | 3.89 | 0.78 | 0.6931 | 0.9999 | 1.9997 | 0.9999 |
| Method | Age ShH | Age ShE | Age SiD | Age SiE | Gender ShH | Gender ShE | Gender SiD | Gender SiE | Skin Tone ShH | Skin Tone ShE | Skin Tone SiD | Skin Tone SiE |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| GT | – | – | – | – | 0.69 | 0.99 | 1.99 | 0.99 | 0.57 | 0.82 | 1.61 | 0.81 |
| IPCGAN | 1.40 | 0.87 | 3.69 | 0.74 | – | – | – | – | – | – | – | – |
| CAAE | 1.48 | 0.92 | 4.06 | 0.81 | – | – | – | – | – | – | – | – |
| StarGAN | 1.46 | 0.91 | 3.82 | 0.76 | 0.69 | 0.99 | 1.99 | 0.98 | 0.68 | 0.99 | 1.96 | 0.98 |
| AttGAN | 1.32 | 0.82 | 3.30 | 0.66 | 0.69 | 0.99 | 1.99 | 0.99 | 0.58 | 0.83 | 1.63 | 0.82 |
| Ours | 1.53 | 0.95 | 4.35 | 0.87 | 0.69 | 0.99 | 1.99 | 0.99 | 0.69 | 0.999 | 1.997 | 0.999 |
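The classifier bias studied in Sect. 4.6 can be probed through per-group performance. A minimal sketch, assuming bias is summarized as the largest accuracy gap across demographic groups; the paper's actual fairness criterion may differ, and `group_accuracy_gap` is a hypothetical helper, not part of the proposed method.

```python
import numpy as np

def group_accuracy_gap(y_true, y_pred, groups):
    """Per-group accuracy and the maximum pairwise accuracy gap,
    a simple proxy for classifier bias across demographic groups."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    accs = {g: float((y_pred[groups == g] == y_true[groups == g]).mean())
            for g in np.unique(groups)}
    gap = max(accs.values()) - min(accs.values())
    return accs, gap
```

A debiasing augmentation of the kind evaluated above would aim to shrink `gap` while preserving overall accuracy.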