2016 | Original Paper | Book Chapter

Deep Specialized Network for Illuminant Estimation

Authors: Wu Shi, Chen Change Loy, Xiaoou Tang

Published in: Computer Vision – ECCV 2016

Publisher: Springer International Publishing

Abstract

Illuminant estimation to achieve color constancy is an ill-posed problem. Searching the large hypothesis space for an accurate illuminant estimate is hard due to the ambiguities of unknown reflectances and local patch appearances. In this work, we propose a novel Deep Specialized Network (DS-Net) that is adaptive to diverse local regions for estimating robust local illuminants. This is achieved through a new convolutional network architecture with two interacting sub-networks, i.e. a hypotheses network (HypNet) and a selection network (SelNet). In particular, HypNet generates multiple illuminant hypotheses that inherently capture different modes of illuminants with its unique two-branch structure. SelNet then adaptively picks confident estimates from these plausible hypotheses. Extensive experiments on the two largest color constancy benchmark datasets show that the proposed ‘hypothesis selection’ approach is effective in overcoming erroneous estimation. Through the synergy of HypNet and SelNet, our approach outperforms state-of-the-art methods such as [13].
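The ‘hypothesis selection’ idea described in the abstract can be sketched compactly. The following is a minimal, illustrative numpy sketch of the scheme, not the authors' Caffe implementation: the layer shapes, parameter names, and the winner-take-all selection rule shown here are assumptions made only to show the control flow.

```python
# Minimal sketch of hypothesis generation + selection (assumed shapes/names).
import numpy as np

def hypnet(patch_features, w_shared, w_branch_a, w_branch_b):
    """Toy stand-in for HypNet: a shared layer feeding two branches, each
    regressing one illuminant hypothesis (here a 2-D chrominance vector)."""
    shared = np.maximum(0.0, patch_features @ w_shared)  # ReLU on shared features
    hyp_a = shared @ w_branch_a                          # hypothesis from branch A
    hyp_b = shared @ w_branch_b                          # hypothesis from branch B
    return hyp_a, hyp_b

def selnet(patch_features, w_sel):
    """Toy stand-in for SelNet: scores the two branches for this patch and
    returns a softmax over them (how much to trust each hypothesis)."""
    logits = patch_features @ w_sel                      # shape (2,)
    e = np.exp(logits - logits.max())
    return e / e.sum()

def estimate_patch_illuminant(patch_features, params):
    hyp_a, hyp_b = hypnet(patch_features, *params["hyp"])
    probs = selnet(patch_features, params["sel"])
    # Winner-take-all: keep the hypothesis SelNet is most confident in.
    return hyp_a if probs[0] >= probs[1] else hyp_b

# Usage with random toy weights (for shape illustration only):
rng = np.random.default_rng(0)
params = {
    "hyp": (rng.normal(size=(64, 32)), rng.normal(size=(32, 2)), rng.normal(size=(32, 2))),
    "sel": rng.normal(size=(64, 2)),
}
print(estimate_patch_illuminant(rng.normal(size=64), params))
```

In the paper, both sub-networks are convolutional and operate on local image regions; the toy dense layers above merely stand in for those components to make the two-network interaction explicit.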

Footnotes
1
As suggested by [3, 41], the log-chrominance formulation is advantageous over the RGB formulation in that there are 2 unknowns instead of 3, and R and I are related by a simple linear constraint instead of a multiplicative one (a worked illustration follows these footnotes).
 
2
We have tried more branches, but for this problem using more branches does not bring significant improvement. For efficiency and clarity, we present the two-branch version here.
 
3
Implemented using Caffe [45].
 
4
Although the loss we use does not directly optimize the angular error typically employed in color constancy evaluation, satisfactory results are still observed.
 
5
This scheme is also related to Multiple Choice Learning [54].
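As a hedged, worked illustration of footnote 1 (notation assumed here: R is the surface reflectance, I the illuminant as in the footnote, and p the observed pixel value; these are not necessarily the paper's exact symbols), the log-chrominance view turns the per-channel multiplicative relation into an additive one with only two unknowns:

```latex
% Per pixel and channel c in {r, g, b}, the observation is multiplicative:
%   p_c = R_c \cdot I_c .
% Log-chrominance coordinates (two numbers instead of three channels):
\[
  u \;=\; \log\frac{p_g}{p_r} \;=\; \log\frac{R_g}{R_r} + \log\frac{I_g}{I_r},
  \qquad
  v \;=\; \log\frac{p_g}{p_b} \;=\; \log\frac{R_g}{R_b} + \log\frac{I_g}{I_b}.
\]
% The illuminant now enters as an additive (linear) offset in (u, v), with
% only the two unknowns \log(I_g/I_r) and \log(I_g/I_b), rather than a
% per-channel multiplicative factor in RGB.
```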
 
References
1.
Cheng, D., Price, B., Cohen, S., Brown, M.S.: Effective learning-based illuminant estimation using simple features. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 1000–1008 (2015)
2.
Bianco, S., Cusano, C., Schettini, R.: Single and multiple illuminant estimation using convolutional neural networks. arXiv preprint (2015). arXiv:1508.00998
3.
Barron, J.T.: Convolutional color constancy. In: IEEE International Conference on Computer Vision, pp. 379–387 (2015)
4.
Bianco, S., Cusano, C., Schettini, R.: Color constancy using CNNs. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 81–89 (2015)
5.
Lou, Z., Gevers, T., Hu, N., Lucassen, M.: Color constancy by deep learning. In: British Machine Vision Conference (2015)
6.
Buchsbaum, G.: A spatial processor model for object colour perception. J. Franklin Inst. 310(1), 1–26 (1980)
7.
Land, E.H., McCann, J.J.: Lightness and retinex theory. J. Opt. Soc. Am. A 61(1), 1–11 (1971)
8.
Gao, S., Han, W., Yang, K., Li, C., Li, Y.: Efficient color constancy with local surface reflectance statistics. In: European Conference on Computer Vision, pp. 158–173 (2014)
9.
Gao, S., Yang, K., Li, C., Li, Y.: A color constancy model with double-opponency mechanisms. In: IEEE International Conference on Computer Vision, pp. 929–936 (2013)
10.
Gijsenij, A., Gevers, T.: Color constancy using natural image statistics and scene semantics. IEEE Trans. Pattern Anal. Mach. Intell. 33(4), 687–698 (2011)
11.
Gijsenij, A., Gevers, T., Van De Weijer, J.: Improving color constancy by photometric edge weighting. IEEE Trans. Pattern Anal. Mach. Intell. 34(5), 918–929 (2012)
12.
Van De Weijer, J., Gevers, T., Gijsenij, A.: Edge-based color constancy. IEEE Trans. Image Process. 16(9), 2207–2214 (2007)
13.
Bianco, S., Ciocca, G., Cusano, C., Schettini, R.: Improving color constancy using indoor-outdoor image classification. IEEE Trans. Image Process. 17(12), 2381–2392 (2008)
14.
Bianco, S., Ciocca, G., Cusano, C., Schettini, R.: Automatic color constancy algorithm selection and combination. Pattern Recogn. 43(3), 695–705 (2010)
15.
Drew, M.S., Funt, B.V.: Variational approach to interreflection in color images. J. Opt. Soc. Am. A 9(8), 1255–1265 (1992)
16.
Drew, M.S., Joze, H.R.V., Finlayson, G.D.: Specularity, the zeta-image, and information-theoretic illuminant estimation. In: European Conference on Computer Vision Workshop, pp. 411–420 (2012)
17.
Lee, H.C.: Method for computing the scene-illuminant chromaticity from specular highlights. J. Opt. Soc. Am. A 3(10), 1694–1699 (1986)
18.
Tan, R.T., Nishino, K., Ikeuchi, K.: Color constancy through inverse-intensity chromaticity space. J. Opt. Soc. Am. A 21(3), 321–334 (2004)
19.
Hordley, S.D.: Scene illuminant estimation: past, present, and future. Color Res. Appl. 31(4), 303–314 (2006)
20.
Gijsenij, A., Gevers, T., Van De Weijer, J.: Computational color constancy: survey and experiments. IEEE Trans. Image Process. 20(9), 2475–2489 (2011)
21.
Cardei, V.C., Funt, B., Barnard, K.: Estimating the scene illumination chromaticity by using a neural network. J. Opt. Soc. Am. A 19(12), 2374–2386 (2002)
22.
Finlayson, G.D., Hordley, S.D., Hubel, P.M.: Color by correlation: a simple, unifying framework for color constancy. IEEE Trans. Pattern Anal. Mach. Intell. 23(11), 1209–1221 (2001)
23.
Funt, B., Xiong, W.: Estimating illumination chromaticity via support vector regression. In: Color and Imaging Conference, vol. 2004, pp. 47–52 (2004)
24.
Rosenberg, C., Hebert, M., Thrun, S.: Color constancy using KL-divergence. In: IEEE International Conference on Computer Vision, vol. 1, pp. 239–246 (2001)
25.
Gehler, P.V., Rother, C., Blake, A., Minka, T., Sharp, T.: Bayesian color constancy revisited. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–8 (2008)
26.
Finlayson, G.: Corrected-moment illuminant estimation. In: IEEE International Conference on Computer Vision, pp. 1904–1911 (2013)
27.
Sun, Y., Wang, X., Tang, X.: Deeply learned face representations are sparse, selective, and robust. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 2892–2900 (2015)
28.
Szegedy, C., Ioffe, S., Vanhoucke, V.: Inception-v4, Inception-ResNet and the impact of residual connections on learning. arXiv preprint (2016). arXiv:1602.07261
29.
Zhu, S., Liu, S., Loy, C.C., Tang, X.: Deep cascaded bi-network for face hallucination. In: European Conference on Computer Vision (2016)
30.
Dong, C., Loy, C.C., He, K., Tang, X.: Image super-resolution using deep convolutional networks. IEEE Trans. Pattern Anal. Mach. Intell. 38(2), 295–307 (2015)
31.
Cui, Z., Chang, H., Shan, S., Zhong, B., Chen, X.: Deep network cascade for image super-resolution. In: Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T. (eds.) ECCV 2014. LNCS, vol. 8693, pp. 49–64. Springer, Heidelberg (2014). doi:10.1007/978-3-319-10602-1_4
32.
Xu, L., Ren, J.S., Liu, C., Jia, J.: Deep convolutional neural network for image deconvolution. In: Advances in Neural Information Processing Systems, pp. 1790–1798 (2014)
33.
Dong, C., Loy, C.C., Tang, X.: Accelerating the super-resolution convolutional neural network. In: European Conference on Computer Vision (2016)
34.
Hui, T.W., Loy, C.C., Tang, X.: Depth map super resolution by deep multi-scale guidance. In: European Conference on Computer Vision (2016)
35.
Xie, J., Xu, L., Chen, E.: Image denoising and inpainting with deep neural networks. In: Advances in Neural Information Processing Systems, pp. 341–349 (2012)
36.
Joze, H.R.V., Drew, M.S.: Exemplar-based color constancy and multiple illumination. IEEE Trans. Pattern Anal. Mach. Intell. 36(5), 860–873 (2014)
37.
Bianco, S., Schettini, R.: Adaptive color constancy using faces. IEEE Trans. Pattern Anal. Mach. Intell. 36(8), 1505–1518 (2014)
38.
Hsu, E., Mertens, T., Paris, S., Avidan, S., Durand, F.: Light mixture estimation for spatially varying white balance. ACM Trans. Graph. 27, 70 (2008)
39.
Boyadzhiev, I., Bala, K., Paris, S., Durand, F.: User-guided white balance for mixed lighting conditions. ACM Trans. Graph. 31(6), 200 (2012)
40.
Brainard, D.H., Wandell, B.A.: Analysis of the retinex theory of color vision. J. Opt. Soc. Am. A 3(10), 1651–1661 (1986)
41.
42.
Chen, W., Er, M.J., Wu, S.: Illumination compensation and normalization for robust face recognition using discrete cosine transform in logarithm domain. IEEE Trans. Syst. Man Cybern. Part B: Cybern. 36(2), 458–466 (2006)
43.
Nair, V., Hinton, G.E.: Rectified linear units improve restricted Boltzmann machines. In: International Conference on Machine Learning, pp. 807–814 (2010)
45.
Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R., Guadarrama, S., Darrell, T.: Caffe: convolutional architecture for fast feature embedding. In: ACM Multimedia, pp. 675–678 (2014)
47.
Cheng, D., Prasad, D.K., Brown, M.S.: Illuminant estimation for color constancy: why spatial-domain methods work and the role of the color distribution. J. Opt. Soc. Am. A 31(5), 1049–1058 (2014)
49.
Finlayson, G.D., Trezzi, E.: Shades of gray and colour constancy. In: Color and Imaging Conference, vol. 2004, pp. 37–41 (2004)
50.
Barnard, K., Martin, L., Coath, A., Funt, B.: A comparison of computational color constancy algorithms. II. Experiments with image data. IEEE Trans. Image Process. 11(9), 985–996 (2002)
51.
Joze, H.R.V., Drew, M.S., Finlayson, G.D., Rey, P.A.T.: The role of bright pixels in illumination estimation. In: Color and Imaging Conference, vol. 2012, pp. 41–46 (2012)
52.
Chakrabarti, A., Hirakawa, K., Zickler, T.: Color constancy with spatio-spectral statistics. IEEE Trans. Pattern Anal. Mach. Intell. 34(8), 1509–1519 (2012)
53.
Gijsenij, A., Lu, R., Gevers, T.: Color constancy for multiple light sources. IEEE Trans. Image Process. 21(2), 697–707 (2012)
Metadata
Title
Deep Specialized Network for Illuminant Estimation
Authors
Wu Shi
Chen Change Loy
Xiaoou Tang
Copyright Year
2016
DOI
https://doi.org/10.1007/978-3-319-46493-0_23