Communicated by Tae-Kyun Kim, Stefanos Zafeiriou, Ben Glocker and Stefan Leutenegger.
The online version of this article (https://doi.org/10.1007/s11263-018-1132-0) contains supplementary material, which is available to authorized users.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This paper proposes a learning-based quality evaluation framework for inpainted results that requires no subjectively annotated training data. Image inpainting, which removes unwanted regions from an image and fills them in, is widely acknowledged as a task whose results are difficult to evaluate objectively. Existing learning-based image quality assessment (IQA) methods for inpainting therefore require subjectively annotated training data. However, subjective annotation is costly, and judgments can vary from person to person depending on the criteria used. To overcome these difficulties, the proposed framework generates simulated failure results of inpainted images whose subjective quality is controlled, and uses them as training data. We also propose a masking method for generating training data, a step towards fully automated training data generation. Together, these approaches make it possible to estimate which of several inpainted images is better, even though the task is highly subjective. To demonstrate the effectiveness of our approach, we test our algorithm on various datasets and show that it outperforms existing IQA methods for inpainting.
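The core idea of the abstract, generating degraded copies of an inpainted result whose relative quality is known by construction, then training a ranker on those ordered pairs, can be sketched as follows. This is a minimal illustrative sketch only: the noise-based failure simulation, the toy hole/surround features, and the hinge-style pairwise ranker are assumptions for demonstration, not the paper's actual model or masking method.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_failures(image, mask, noise_levels=(0.1, 0.3, 0.6)):
    """Create degraded copies inside the masked region; a larger noise
    level is assumed to mean lower subjective quality (illustrative)."""
    fakes = []
    for s in noise_levels:
        fake = image.copy()
        fake[mask] = np.clip(fake[mask] + rng.normal(0.0, s, mask.sum()), 0.0, 1.0)
        fakes.append(fake)
    return fakes

def features(image, mask):
    """Toy features: hole-vs-surround statistics and overall gradient."""
    inside, outside = image[mask], image[~mask]
    return np.array([inside.mean() - outside.mean(),
                     inside.std() - outside.std(),
                     np.abs(np.diff(image, axis=0)).mean()])

def train_ranker(pairs, lr=0.1, epochs=200, margin=1.0):
    """Pairwise hinge ranking: push score(better) above score(worse)."""
    w = np.zeros(pairs[0][0].shape)
    for _ in range(epochs):
        for better, worse in pairs:
            if w @ better - w @ worse < margin:   # margin violated
                w += lr * (better - worse)        # push scores apart
    return w

# Self-supervised training set: the original result always outranks its
# simulated failures, and milder failures outrank stronger ones.
img = rng.random((32, 32))
mask = np.zeros((32, 32), bool)
mask[8:24, 8:24] = True
fakes = simulate_failures(img, mask)
ranked = [img] + fakes                      # best -> worst by construction
feats = [features(x, mask) for x in ranked]
pairs = [(feats[i], feats[j])
         for i in range(len(feats)) for j in range(i + 1, len(feats))]
w = train_ranker(pairs)
scores = [w @ f for f in feats]
```

Because the ordering of the training pairs comes entirely from how the failures were generated, no human annotation enters the loop, which is the property the abstract emphasizes.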
Supplementary material 1 (mp4 2903 KB)
- Which is the Better Inpainted Image? Training Data Generation Without Any Manual Operations
- Springer US
International Journal of Computer Vision
Print ISSN: 0920-5691
Electronic ISSN: 1573-1405