24.02.2020 | Original Article | Issue 9/2020

Unsupervised image-to-image translation using intra-domain reconstruction loss
Abstract
Generative adversarial networks (GANs) have been successfully applied to many computer vision tasks, especially image-to-image translation. However, in image-to-image translation GANs often suffer from training instability and mode collapse, which leads to the generation of low-quality images. To address this problem, we propose an unsupervised image-to-image translation network named "Cycle-IDRL", which combines CycleGAN with an intra-domain reconstruction loss (IDRL). Specifically, the generator adopts a U-Net architecture with skip connections, which merges coarse-grained and fine-grained features, and the least-squares loss from LSGAN is used to stabilize the training process. In particular, the target-domain features extracted by the discriminator are fed to the generator to produce reconstructed samples, and the IDRL is then constructed as the L1 norm between the target-domain samples and the reconstructed samples. Experimental results on multiple datasets show that the proposed method outperforms existing methods.
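To make the two loss terms named in the abstract concrete, the following is a minimal NumPy sketch, not the authors' implementation: the LSGAN least-squares objectives and an L1 intra-domain reconstruction loss between target-domain samples and samples reconstructed from the discriminator's target-domain features. All function names and the absence of loss weights are assumptions; the paper's exact weighting is not given here.

```python
import numpy as np

def lsgan_d_loss(d_real, d_fake):
    # LSGAN discriminator objective: push scores on real samples toward 1
    # and scores on generated samples toward 0 (least-squares form).
    return 0.5 * np.mean((d_real - 1.0) ** 2) + 0.5 * np.mean(d_fake ** 2)

def lsgan_g_loss(d_fake):
    # LSGAN generator objective: push discriminator scores on generated
    # samples toward 1.
    return 0.5 * np.mean((d_fake - 1.0) ** 2)

def intra_domain_reconstruction_loss(y_target, y_reconstructed):
    # IDRL as described in the abstract: L1 norm (here, mean absolute
    # error) between target-domain samples and the samples the generator
    # reconstructs from the discriminator's target-domain features.
    return np.mean(np.abs(y_target - y_reconstructed))
```

In a full Cycle-IDRL objective these terms would presumably be added to the CycleGAN cycle-consistency loss with scalar weights; a perfect reconstruction drives the IDRL term to zero.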