
2017 | Original Paper | Book Chapter

Depth Map Enhancement with Interaction in 2D-to-3D Video Conversion

Authors: Tao Yang, Xun Wang, Huiyan Wang, Xiaolan Li

Published in: Transactions on Edutainment XIII

Publisher: Springer Berlin Heidelberg


Abstract

The demand for 3D video content is growing. Conventional 3D video creation approaches require either dedicated capture devices to shoot 3D footage or large teams of people to perform labor-intensive depth labeling. To reduce manpower and time consumption, many automatic approaches have been developed to convert legacy 2D videos into 3D. However, due to the strict quality requirements of the video production industry, most automatic conversion methods suffer from quality issues and cannot be used in actual production. As a result, manual or semi-automatic approaches remain the mainstream 3D video generation technologies. In our project, we took an automatic video generation method and applied human-computer interaction within its processing procedure [1], aiming to strike a balance between time efficiency and depth map quality. The novelty of this paper lies in a successful attempt to improve an automatic 3D video generation method from the perspective of the video and film industry.
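The conversion pipeline the abstract describes ultimately renders a stereoscopic pair from a single view plus a depth map (cf. [4]). As a rough illustration of that final step, the sketch below forward-warps pixels by a disparity derived from depth and fills disocclusion holes from the left neighbor. All function names, the depth-to-disparity mapping, and the hole-filling strategy are illustrative assumptions, not the paper's method.

```python
# Minimal sketch of depth-image-based rendering (DIBR): synthesize a
# right-eye view by shifting each pixel horizontally according to its
# depth. Assumes grayscale images as 2D lists and the common convention
# that brighter depth values mean "nearer" (these are assumptions, not
# details from the paper).

def depth_to_disparity(depth, max_disparity=8, max_depth=255):
    """Map a depth value to a pixel disparity; nearer pixels shift more."""
    return round(depth / max_depth * max_disparity)

def render_right_view(image, depth_map, max_disparity=8):
    """Forward-warp `image` using `depth_map` to produce a right view."""
    h, w = len(image), len(image[0])
    right = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            d = depth_to_disparity(depth_map[y][x], max_disparity)
            xr = x - d  # the right eye sees foreground shifted leftward
            if 0 <= xr < w:
                right[y][xr] = image[y][x]
        # Fill disocclusion holes with the nearest known value to the left,
        # a crude stand-in for real inpainting.
        prev = 0
        for x in range(w):
            if right[y][x] is None:
                right[y][x] = prev
            else:
                prev = right[y][x]
    return right

# Tiny 1-row example: the two "near" pixels (depth 255) shift left by 1.
img = [[10, 20, 30, 40]]
dep = [[0, 0, 255, 255]]
print(render_right_view(img, dep, max_disparity=1))  # [[10, 30, 40, 40]]
```

Artifacts of this naive warp (overwritten background pixels, streaky hole fills) are exactly the kinds of quality issues that motivate the interactive depth refinement the paper proposes.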


References
1.
Karsch, K., Liu, C., Kang, S.B.: Depth transfer: depth extraction from video using non-parametric sampling. IEEE Trans. Pattern Anal. Mach. Intell. 36(11), 2144–2158 (2014)
4.
Zhang, L., Tam, W.J.: Stereoscopic image generation based on depth images for 3D TV. IEEE Trans. Broadcast. 51(2), 191–199 (2005)
5.
Liu, B., Gould, S., Koller, D.: Single image depth estimation from predicted semantic labels. In: 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1253–1260. IEEE (2010)
6.
Konrad, J., Wang, M., Ishwar, P.: 2D-to-3D image conversion by learning depth from examples (2012)
7.
Saxena, A., Sun, M., Ng, A.Y.: Make3D: learning 3D scene structure from a single still image. IEEE Trans. Pattern Anal. Mach. Intell. 31(5), 824–840 (2009)
8.
Eigen, D., Puhrsch, C., Fergus, R.: Depth map prediction from a single image using a multi-scale deep network. In: Advances in Neural Information Processing Systems, pp. 2366–2374 (2014)
9.
Oliva, A., Torralba, A.: Modeling the shape of the scene: a holistic representation of the spatial envelope. Int. J. Comput. Vis. 42(3), 145–175 (2001)
10.
Liu, C.: Beyond pixels: exploring new representations and applications for motion analysis. Ph.D. dissertation. Citeseer (2009)
11.
Liu, C., Yuen, J., Torralba, A., Sivic, J., Freeman, W.T.: SIFT flow: dense correspondence across different scenes. In: Forsyth, D., Torr, P., Zisserman, A. (eds.) ECCV 2008. LNCS, vol. 5304, pp. 28–42. Springer, Heidelberg (2008). doi:10.1007/978-3-540-88690-7_3
12.
Karsch, K., Liu, C., Kang, S.B.: Depth extraction from video using non-parametric sampling (2012)
13.
Pietikainen, M., Heikkila, M.: A texture-based method for modeling the background and detecting moving objects. IEEE Trans. Pattern Anal. Mach. Intell. 28(4), 657–662 (2006)
14.
Deschamps, A., Howe, N.R.: Better foreground segmentation through graph cuts (2004)
15.
Behnke, S., Stuckler, J.: Efficient dense rigid-body motion segmentation and estimation in RGBD video. Int. J. Comput. Vis. 113(3), 233–245 (2015)
Metadata
Title
Depth Map Enhancement with Interaction in 2D-to-3D Video Conversion
Authors
Tao Yang
Xun Wang
Huiyan Wang
Xiaolan Li
Copyright Year
2017
Publisher
Springer Berlin Heidelberg
DOI
https://doi.org/10.1007/978-3-662-54395-5_16