
2022 | OriginalPaper | Chapter

A Deep Multi-task Generative Adversarial Network for Face Completion

Authors : Qiang Wang, Huijie Fan, Yandong Tang

Published in: Intelligent Robotics and Applications

Publisher: Springer International Publishing


Abstract

Face completion is a challenging task that typically requires a known mask as prior information to restore the missing content of a corrupted image. In contrast to these well-studied face completion methods, we present a Deep Multi-task Generative Adversarial Network (DMGAN) that simultaneously detects and completes missing regions in face images. Specifically, our model first automatically learns rich hierarchical representations that are critical for both missing region detection and completion. On top of these hierarchical representations, we design two complementary sub-networks: (1) DetectionNet, which is built upon a fully convolutional network and detects the location and geometry of the missing region in a coarse-to-fine manner, and (2) CompletionNet, which uses a skip-connection architecture and predicts the missing region from multi-scale, multi-level features. Additionally, we train two context discriminators to ensure the consistency of the generated image. Unlike existing models, ours generates realistic face completion results without any prior information about the missing region, which allows it to handle missing regions with arbitrary shapes and locations. Extensive quantitative and qualitative experiments on benchmark datasets demonstrate that the proposed model generates higher-quality results than state-of-the-art methods.
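The interplay of the two sub-networks implies a composition step common to mask-based inpainting: the predicted mask selects where CompletionNet's output replaces the corrupted input. The sketch below is an illustrative assumption about that step, not code from the paper; `compose_completion` and the toy arrays are hypothetical stand-ins for the networks' outputs.

```python
import numpy as np

def compose_completion(corrupted, completed, mask):
    """Blend the completion output into the corrupted image using the
    mask predicted by the detection branch (1 = missing pixel, 0 = known).

    corrupted, completed: (H, W, C) arrays; mask: (H, W) or (H, W, 1).
    """
    if mask.ndim == 2:
        mask = mask[..., None]  # broadcast the mask over channels
    return mask * completed + (1.0 - mask) * corrupted

# Toy example: a 4x4 gray image with a 2x2 hole filled by the generator.
corrupted = np.full((4, 4, 3), 0.5)
completed = np.full((4, 4, 3), 0.9)  # stand-in for CompletionNet output
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0                 # stand-in for DetectionNet output

result = compose_completion(corrupted, completed, mask)
```

Known pixels keep their original values while the hole takes the generated content, so the context discriminators only need to judge the blended result for consistency.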


Metadata
Title
A Deep Multi-task Generative Adversarial Network for Face Completion
Authors
Qiang Wang
Huijie Fan
Yandong Tang
Copyright Year
2022
DOI
https://doi.org/10.1007/978-3-031-13822-5_36
