
2016 | Original Paper | Book Chapter

Focal Flow: Measuring Distance and Velocity with Defocus and Differential Motion

Authors: Emma Alexander, Qi Guo, Sanjeev Koppal, Steven Gortler, Todd Zickler

Published in: Computer Vision – ECCV 2016

Publisher: Springer International Publishing


Abstract

We present the focal flow sensor. It is an unactuated, monocular camera that simultaneously exploits defocus and differential motion to measure a depth map and a 3D scene velocity field. It does so using an optical-flow-like, per-pixel linear constraint that relates image derivatives to depth and velocity. We derive this constraint, prove its invariance to scene texture, and prove that it is exactly satisfied only when the sensor’s blur kernels are Gaussian. We analyze the inherent sensitivity of the ideal focal flow sensor, and we build and test a prototype. Experiments produce useful depth and velocity information for a broader set of aperture configurations, including a simple lens with a pillbox aperture.


Footnotes
1
Proof. From optical flow, \(k * P_t = k * (-u_1 P_x - u_2 P_y - u_3(xP_x + yP_y))\). Because \(k * (xP_x) = x(k * P_x) - ((xk) * P_x) = x(k * P_x) - ((k + xk_x) * P)\), it follows that \(k * P_t = -u_1 I_x - u_2 I_y - u_3(xI_x + yI_y) + u_3(2k + xk_x + yk_y) * P\). Likewise, \(k_t * P = \dot{\sigma}\,(k_\sigma * P) \propto (2k + rk_r) * P\).    \(\square\)
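When the blur kernel is Gaussian, the two identities above combine into a fully measurable per-pixel constraint, since a Gaussian \(k\) with standard deviation \(\sigma\) satisfies the heat equation \(k_\sigma = \sigma\nabla^2 k\). A sketch of that combination, with notation as in the footnote (the paper's exact grouping of coefficients may differ):

```latex
% A Gaussian kernel satisfies the heat equation k_\sigma = \sigma \nabla^2 k, hence
%   2k + xk_x + yk_y = (2 - r^2/\sigma^2)\,k = -\sigma k_\sigma = -\sigma^2 \nabla^2 k.
% Substituting into the two identities of footnote 1:
\begin{align*}
u_3(2k + xk_x + yk_y) * P &= -u_3\,\sigma^2 \nabla^2 I,\\
k_t * P = \dot{\sigma}\,(k_\sigma * P) &= \sigma\dot{\sigma}\,\nabla^2 I,\\
I_t = k * P_t + k_t * P
  &= -u_1 I_x - u_2 I_y - u_3\bigl(xI_x + yI_y + \sigma^2 \nabla^2 I\bigr)
     + \sigma\dot{\sigma}\,\nabla^2 I.
\end{align*}
```

Every term on the right-hand side is an observable image derivative; the scene texture \(P\) has dropped out, which is the texture invariance claimed in the abstract.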
 
2
Proof. Because \(w\) is a function of time (and not spatial frequency), \(\hat{m}\) a function of spatial frequency (and not time), and \(\hat{\kappa}\) a function of the time–frequency product \(\sigma \hat{r}\) alone, this equation takes the form \(h_0(xy) = e^{f(x)g(y)}\), i.e., \(h(xy) := \ln h_0(xy) = f(x)g(y)\). Setting \(x=1\) and \(y=1\) in turn shows that \(g \propto h \propto f\), so that \(f(x)f(y) \propto f(xy)\). Differentiating with respect to \(x\) and setting \(x=1\) yields the differential equation \(f(y) \propto yf'(y)\), with general solution \(f(y) \propto y^{n}\) (equivalently \(y^{2n}\)), \(n \in \mathbb{C}\). Realness of \(v\) implies \(n \in \mathbb{R}\). Differentiating \(\int_0^{\hat{r}} \frac{\hat{m}(s')}{s'}\,ds' \propto \hat{r}^{2n}\) yields \(\hat{m} \propto \hat{r}^{2n}\).    \(\square\)
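The step from \(f(y) \propto y f'(y)\) to \(f(y) \propto y^{n}\) is a one-line separation of variables; spelled out (standard calculus, not from the paper):

```latex
y f'(y) = n f(y)
\;\Longrightarrow\; \frac{f'(y)}{f(y)} = \frac{n}{y}
\;\Longrightarrow\; \ln f(y) = n \ln y + c
\;\Longrightarrow\; f(y) = e^{c}\, y^{n}.
```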
 
3
Proof. From the Fourier Slice Theorem [32, 33], denoting by \(\mathcal {F}_1\) the 1D Fourier transform, we have \(\mathcal {F}_1\left[ \int \kappa (x,y) dy \right] = \hat{\kappa }(\omega _x, 0 ) = e^{-|\omega _x|^{2n}}\). This function is not positive definite for \(n \ge 2\) (which can be seen by taking \(C(n) = \sum \sum z_i z_j e^{-|x_i-x_j|^{2n}}\) for \(z=[1,-2,1]\) and \(x = [-.1^{2n},0,.1^{2n}]\), and noting that both C(2) and \(\frac{dC}{dn}\) are negative), so by Bochner’s theorem it cannot be the (1D) Fourier transform of a finite positive Borel measure. The only property of such a measure that \(\int \kappa \) could lack is non-negativity, so the existence of negative values of \(\kappa \) follows immediately.   \(\square \)
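The sign claim for \(C(n)\) is easy to check numerically, but the cancellation (six O(1) terms summing to roughly \(-10^{-15}\)) makes double precision unreliable, so this sketch uses Python's standard-library decimal module at 60 digits (the precision is my choice); the function C and the points z, x come directly from the footnote:

```python
from decimal import Decimal, getcontext

# 60 significant digits: C(n) is a tiny negative number produced by
# cancellation of O(1) terms, so float64 rounding could flip its sign.
getcontext().prec = 60

def C(n):
    """C(n) = sum_i sum_j z_i z_j exp(-|x_i - x_j|^(2n)),
    with z = [1, -2, 1] and x = [-0.1^(2n), 0, 0.1^(2n)] as in footnote 3."""
    two_n = 2 * Decimal(n)
    a = Decimal("0.1") ** two_n
    z = [Decimal(1), Decimal(-2), Decimal(1)]
    x = [-a, Decimal(0), a]
    return sum(zi * zj * (-(abs(xi - xj) ** two_n)).exp()
               for zi, xi in zip(z, x) for zj, xj in zip(z, x))

print(C(2))  # negative, on the order of -1e-15
print(C(3))  # also negative
```

Both values come out negative, consistent with the footnote's argument that \(e^{-|\omega_x|^{2n}}\) fails positive definiteness for \(n \ge 2\).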
 
References
1.
Raghavendra, C.S., Sivalingam, K.M., Znati, T.: Wireless Sensor Networks. Springer, Heidelberg (2006)
2.
Humber, J.S., Hyslop, A., Chinn, M.: Experimental validation of wide-field integration methods for autonomous navigation. In: Intelligent Robots and Systems (IROS) (2007)
3.
Duhamel, P.E.J., Perez-Arancibia, C.O., Barrows, G.L., Wood, R.J.: Biologically inspired optical-flow sensing for altitude control of flapping-wing microrobots. IEEE/ASME Trans. Mechatron. 18(2), 556–568 (2013)
4.
Floreano, D., Zufferey, J.C., Srinivasan, M.V., Ellington, C.: Flying Insects and Robots. Springer, Heidelberg (2009)
5.
Koppal, S.J., Gkioulekas, I., Zickler, T., Barrows, G.L.: Wide-angle micro sensors for vision on a tight budget. In: Computer Vision and Pattern Recognition (CVPR) (2011)
6.
Horn, B.K., Fang, Y., Masaki, I.: Time to contact relative to a planar surface. In: Intelligent Vehicles Symposium (IV) (2007)
7.
Horn, B.K., Schunck, B.G.: Determining optical flow. In: 1981 Technical Symposium East, International Society for Optics and Photonics (1981)
8.
Lee, D.N.: A theory of visual control of braking based on information about time-to-collision. Perception 5, 437–459 (1976)
9.
Horn, B.K., Fang, Y., Masaki, I.: Hierarchical framework for direct gradient-based time-to-contact estimation. In: Intelligent Vehicles Symposium (IV) (2009)
10.
Grossmann, P.: Depth from focus. Pattern Recogn. Lett. 5(1), 63–69 (1987)
11.
Pentland, A.P.: A new sense for depth of field. Pattern Anal. Mach. Intell. 9(4), 523–531 (1987)
12.
Subbarao, M., Surya, G.: Depth from defocus: a spatial domain approach. Int. J. Comput. Vis. 13(3), 271–294 (1994)
13.
Rajagopalan, A., Chaudhuri, S.: Optimal selection of camera parameters for recovery of depth from defocused images. In: Computer Vision and Pattern Recognition (CVPR) (1997)
14.
Watanabe, M., Nayar, S.K.: Rational filters for passive depth from defocus. Int. J. Comput. Vis. 27(3), 203–225 (1998)
15.
Zhou, C., Lin, S., Nayar, S.: Coded aperture pairs for depth from defocus. In: International Conference on Computer Vision (ICCV) (2009)
16.
Levin, A.: Analyzing depth from coded aperture sets. In: Daniilidis, K., Maragos, P., Paragios, N. (eds.) ECCV 2010, Part I. LNCS, vol. 6311, pp. 214–227. Springer, Heidelberg (2010)
17.
Zhou, C., Lin, S., Nayar, S.K.: Coded aperture pairs for depth from defocus and defocus deblurring. Int. J. Comput. Vis. 93(1), 53–72 (2011)
18.
Levin, A., Fergus, R., Durand, F., Freeman, W.T.: Image and depth from a conventional camera with a coded aperture. In: ACM Transactions on Graphics (TOG) (2007)
19.
Veeraraghavan, A., Raskar, R., Agrawal, A., Mohan, A., Tumblin, J.: Dappled photography: mask enhanced cameras for heterodyned light fields and coded aperture refocusing. In: ACM Transactions on Graphics (TOG) (2007)
20.
Chakrabarti, A., Zickler, T.: Depth and deblurring from a spectrally-varying depth-of-field. In: Fitzgibbon, A., Lazebnik, S., Perona, P., Sato, Y., Schmid, C. (eds.) ECCV 2012, Part V. LNCS, vol. 7576, pp. 648–661. Springer, Heidelberg (2012)
21.
Farid, H., Simoncelli, E.P.: Range estimation by optical differentiation. J. Opt. Soc. Am. A 15(7), 1777–1786 (1998)
22.
Myles, Z., da Vitoria Lobo, N.: Recovering affine motion and defocus blur simultaneously. Pattern Anal. Mach. Intell. 20(6), 652–658 (1998)
23.
Favaro, P., Burger, M., Soatto, S.: Scene and motion reconstruction from defocused and motion-blurred images via anisotropic diffusion. In: Pajdla, T., Matas, J.G. (eds.) ECCV 2004. LNCS, vol. 3021, pp. 257–269. Springer, Heidelberg (2004)
24.
Lin, H.Y., Chang, C.H.: Depth from motion and defocus blur. Opt. Eng. 45(12), 127201 (2006)
25.
Seitz, S.M., Baker, S.: Filter flow. In: International Conference on Computer Vision (ICCV) (2009)
26.
Paramanand, C., Rajagopalan, A.N.: Depth from motion and optical blur with an unscented Kalman filter. IEEE Trans. Image Process. 21(5), 2798–2811 (2012)
27.
Sellent, A., Favaro, P.: Coded aperture flow. In: Jiang, X., Hornegger, J., Koch, R. (eds.) GCPR 2014. LNCS, vol. 8753, pp. 582–592. Springer, Heidelberg (2014)
28.
Rajagopalan, A., Chaudhuri, S., Mudenagudi, U.: Depth estimation and image restoration using defocused stereo pairs. Pattern Anal. Mach. Intell. 26(11), 1521–1525 (2004)
29.
Tao, M., Hadap, S., Malik, J., Ramamoorthi, R.: Depth from combining defocus and correspondence using light-field cameras. In: International Conference on Computer Vision (ICCV) (2013)
30.
Rudin, W.: Functional Analysis. McGraw-Hill, New York (1991)
31.
Alexander, E., Guo, Q., Koppal, S., Gortler, S., Zickler, T.: Focal flow: supporting material. Technical report TR-01-16, School of Engineering and Applied Science, Harvard University (2016)
33.
Ng, R.: Fourier slice photography. In: ACM Transactions on Graphics (TOG) (2005)
34.
Schechner, Y.Y., Kiryati, N.: Depth from defocus vs. stereo: how different really are they? Int. J. Comput. Vis. 39(2), 141–162 (2000)
35.
Tai, Y.W., Brown, M.S.: Single image defocus map estimation using local contrast prior. In: International Conference on Image Processing (ICIP) (2009)
Metadata
Title
Focal Flow: Measuring Distance and Velocity with Defocus and Differential Motion
Authors
Emma Alexander
Qi Guo
Sanjeev Koppal
Steven Gortler
Todd Zickler
Copyright Year
2016
DOI
https://doi.org/10.1007/978-3-319-46487-9_41