Published in: Neural Processing Letters 2/2020

06.12.2019

Image Inpainting: A Review

Authors: Omar Elharrouss, Noor Almaadeed, Somaya Al-Maadeed, Younes Akbari


Abstract

Although image inpainting, the art of repairing old and deteriorated images, has been practiced for many years, it has recently gained renewed popularity owing to advances in image processing techniques. With the improvement of image processing tools and the flexibility of digital image editing, automatic image inpainting has found important applications in computer vision and has become an important and challenging research topic in image processing. This paper reviews existing image inpainting approaches, which we classify into three categories: sequential-based, CNN-based, and GAN-based methods. For each category, we list the methods that address different types of image distortion. We also describe the available datasets, report evaluations of the three categories of methods on these datasets for different types of distortion, and discuss their performance in terms of the evaluation metrics used. This overview can serve as a reference for image inpainting researchers and facilitates the comparison of both the methods and the datasets used. The main contribution of this paper is the presentation of the three categories of image inpainting methods together with a list of available datasets that researchers can use to evaluate their proposed methods against.
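The abstract refers to evaluation metrics computed against ground-truth images when comparing inpainting methods. As an illustrative sketch only (the specific metrics reported in the paper's experiments are not reproduced in this excerpt), the Python snippet below computes two measures commonly used for inpainting evaluation: PSNR over the whole image and the mean absolute error restricted to the masked hole. The image, mask, and constant fill are toy placeholders, not outputs of any method discussed in the paper.

import numpy as np


def psnr(reference: np.ndarray, restored: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio between a ground-truth image and an inpainted result."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)


def hole_l1(reference: np.ndarray, restored: np.ndarray, mask: np.ndarray) -> float:
    """Mean absolute error restricted to the missing region (mask == 1 marks the hole)."""
    diff = np.abs(reference.astype(np.float64) - restored.astype(np.float64))
    return float(diff[mask.astype(bool)].mean())


if __name__ == "__main__":
    # Toy example: a random "ground truth", a square hole, and a constant fill
    # standing in for the output of a hypothetical inpainting method.
    rng = np.random.default_rng(0)
    gt = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
    mask = np.zeros_like(gt)
    mask[20:40, 20:40, :] = 1
    filled = gt.copy()
    filled[mask == 1] = 127
    print(f"PSNR over the full image: {psnr(gt, filled):.2f} dB")
    print(f"L1 error inside the hole: {hole_l1(gt, filled, mask):.2f}")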


Metadata
Title
Image Inpainting: A Review
Authors
Omar Elharrouss
Noor Almaadeed
Somaya Al-Maadeed
Younes Akbari
Publication date
06.12.2019
Publisher
Springer US
Published in
Neural Processing Letters / Issue 2/2020
Print ISSN: 1370-4621
Electronic ISSN: 1573-773X
DOI
https://doi.org/10.1007/s11063-019-10163-0
