Published in: International Journal of Computer Vision 3/2020

22.08.2019

EKLT: Asynchronous Photometric Feature Tracking Using Events and Frames

Authors: Daniel Gehrig, Henri Rebecq, Guillermo Gallego, Davide Scaramuzza



Abstract

We present EKLT, a feature tracking method that leverages the complementarity of event cameras and standard cameras to track visual features with high temporal resolution. Event cameras are novel sensors that output pixel-level brightness changes, called “events”. They offer significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and a latency on the order of microseconds. However, because the same scene pattern can produce different events depending on the motion direction, establishing event correspondences across time is challenging. By contrast, standard cameras provide intensity measurements (frames) that do not depend on motion direction. Our method extracts features on frames and subsequently tracks them asynchronously using events, thereby exploiting the best of both types of data: the frames provide a photometric representation that does not depend on motion direction and the events provide updates with high temporal resolution. In contrast to previous works, which are based on heuristics, this is the first principled method that uses intensity measurements directly, based on a generative event model within a maximum-likelihood framework. As a result, our method produces feature tracks that are more accurate than the state of the art, across a wide variety of scenes.
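To make the idea concrete, the following is a minimal Python sketch of this frame-to-event comparison. It is illustrative only, not the authors' implementation: the event format (x, y, polarity) in patch coordinates, the 25x25 patch size, and the use of scipy.optimize.minimize are assumptions, and only the flow vector v is optimized, whereas the full method jointly estimates a warp of the feature patch as well.

```python
import numpy as np
from scipy.optimize import minimize

def predicted_increment(grad_x, grad_y, v):
    """Brightness increment predicted from the frame by the linearized
    generative event model: Delta L ~= -(grad L . v) * dtau, with dtau
    folded into the overall scale (removed by the normalization below)."""
    return -(grad_x * v[0] + grad_y * v[1])

def observed_increment(events, patch_shape):
    """Brightness increment measured by the events: accumulate signed
    polarities (+1/-1) over the patch; each event is one contrast step."""
    dL = np.zeros(patch_shape)
    for x, y, p in events:  # hypothetical (x, y, polarity) patch coords
        dL[y, x] += p
    return dL

def alignment_cost(v, grad_x, grad_y, events, patch_shape):
    """Squared distance between the two increments after zero-mean,
    unit-norm normalization, so the unknown contrast threshold C (and,
    in this simplified form, the magnitude of v) cancels out."""
    pred = predicted_increment(grad_x, grad_y, v)
    obs = observed_increment(events, patch_shape)
    pred = pred - pred.mean()
    pred = pred / (np.linalg.norm(pred) + 1e-12)
    obs = obs - obs.mean()
    obs = obs / (np.linalg.norm(obs) + 1e-12)
    return float(np.sum((pred - obs) ** 2))

# Example: estimate the flow direction of one feature patch.
# grad_x, grad_y: image gradients of the frame patch (25x25 arrays);
# events: events that fell inside the patch since the last update.
# res = minimize(alignment_cost, x0=[1.0, 0.0],
#                args=(grad_x, grad_y, events, (25, 25)))
```

The normalization step is the key design point in this sketch: comparing zero-mean, unit-norm patches removes the comparison's dependence on the event camera's unknown contrast threshold.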

Appendix
Available only for authorized users
Footnotes
1
Event cameras such as the DVS (Lichtsteiner et al. 2008) respond to logarithmic brightness changes, i.e., \(L\doteq \log I\), with brightness signal I, so that (1) represents logarithmic changes.
 
2
Eq. (3) can be shown (Gallego et al. 2015) by substituting the brightness constancy assumption (i.e., the optical flow constraint) \(\frac{\partial L}{\partial t}(\mathbf{u}(t),t) + \nabla L(\mathbf{u}(t),t) \cdot \dot{\mathbf{u}}(t) = 0\), with image-point velocity \(\mathbf{v} \equiv \dot{\mathbf{u}}\), into the Taylor approximation \(\Delta L(\mathbf{u},t) \doteq L(\mathbf{u},t) - L(\mathbf{u},t - \Delta\tau) \approx \frac{\partial L}{\partial t}(\mathbf{u},t)\,\Delta\tau\).
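Carrying out the substitution makes the conclusion explicit; the two expressions combine into the linearized generative event model

\[
\Delta L(\mathbf{u},t) \;\approx\; \frac{\partial L}{\partial t}(\mathbf{u},t)\,\Delta\tau \;=\; -\nabla L(\mathbf{u},t) \cdot \mathbf{v}\,\Delta\tau,
\]

which is the form of Eq. (3): the brightness increment is predicted by the dot product of the image gradient with the feature's image-plane velocity.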
 
References
Alzugaray, I., & Chli, M. (2018). Asynchronous corner detection and tracking for event cameras in real time. IEEE Robotics and Automation Letters, 3(4), 3177–3184.
Baker, S., & Matthews, I. (2004). Lucas–Kanade 20 years on: A unifying framework. International Journal of Computer Vision, 56(3), 221–255.
Bardow, P., Davison, A. J., & Leutenegger, S. (2016). Simultaneous optical flow and intensity estimation from an event camera. In IEEE conference on computer vision and pattern recognition (CVPR) (pp. 884–892).
Barranco, F., Teo, C. L., Fermuller, C., & Aloimonos, Y. (2015). Contour detection and characterization for asynchronous event sensors. In International conference on computer vision (ICCV).
Benosman, R., Ieng, S.-H., Clercq, C., Bartolozzi, C., & Srinivasan, M. (2012). Asynchronous frameless event-based optical flow. Neural Networks, 27, 32–37.
Besl, P. J., & McKay, N. D. (1992). A method for registration of 3-D shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(2), 239–256.
Brandli, C., Berner, R., Yang, M., Liu, S.-C., & Delbruck, T. (2014). A 240 \(\times\) 180 130 dB 3 \(\mu\)s latency global shutter spatiotemporal vision sensor. IEEE Journal of Solid-State Circuits, 49(10), 2333–2341.
Bryner, S., Gallego, G., Rebecq, H., & Scaramuzza, D. (2019). Event-based, direct camera tracking from a photometric 3D map using nonlinear optimization. In IEEE international conference on robotics and automation (ICRA).
Chaudhry, R., Ravichandran, A., Hager, G., & Vidal, R. (2009). Histograms of oriented optical flow and Binet–Cauchy kernels on nonlinear dynamical systems for the recognition of human actions. In IEEE conference on computer vision and pattern recognition (CVPR) (pp. 1932–1939).
Clady, X., Ieng, S.-H., & Benosman, R. (2015). Asynchronous event-based corner detection and matching. Neural Networks, 66, 91–106.
Clady, X., Maro, J.-M., Barré, S., & Benosman, R. B. (2017). A motion-based feature for event-based pattern recognition. Frontiers in Neuroscience, 10, 594.
Delmerico, J., Cieslewski, T., Rebecq, H., Faessler, M., & Scaramuzza, D. (2019). Are we ready for autonomous drone racing? The UZH-FPV drone racing dataset. In IEEE international conference on robotics and automation (ICRA).
Evangelidis, G. D., & Psarakis, E. Z. (2008). Parametric image alignment using enhanced correlation coefficient maximization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(10), 1858–1865.
Forster, C., Zhang, Z., Gassner, M., Werlberger, M., & Scaramuzza, D. (2017). SVO: Semidirect visual odometry for monocular and multicamera systems. IEEE Transactions on Robotics, 33(2), 249–265.
Gallego, G., Delbruck, T., Orchard, G., Bartolozzi, C., Taba, B., Censi, A., et al. (2019). Event-based vision: A survey. arXiv:1904.08405.
Gallego, G., Forster, C., Mueggler, E., & Scaramuzza, D. (2015). Event-based camera pose tracking using a generative event model. arXiv:1510.01972.
Gallego, G., Lund, J. E. A., Mueggler, E., Rebecq, H., Delbruck, T., & Scaramuzza, D. (2018). Event-based, 6-DOF camera tracking from photometric depth maps. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(10), 2402–2412.
Gallego, G., Rebecq, H., & Scaramuzza, D. (2018). A unifying contrast maximization framework for event cameras, with applications to motion, depth, and optical flow estimation. In IEEE conference on computer vision and pattern recognition (CVPR) (pp. 3867–3876).
Gallego, G., & Scaramuzza, D. (2017). Accurate angular velocity estimation with an event camera. IEEE Robotics and Automation Letters, 2(2), 632–639.
Gehrig, D., Rebecq, H., Gallego, G., & Scaramuzza, D. (2018). Asynchronous, photometric feature tracking using events and frames. In European conference on computer vision (ECCV) (pp. 766–781).
Harris, C., & Stephens, M. (1988). A combined corner and edge detector. In Proceedings of the fourth Alvey vision conference (Vol. 15, pp. 147–151).
Kim, H., Handa, A., Benosman, R., Ieng, S.-H., & Davison, A. J. (2014). Simultaneous mosaicing and tracking with an event camera. In British machine vision conference (BMVC).
Klein, G., & Murray, D. (2009). Parallel tracking and mapping on a camera phone. In IEEE/ACM international symposium on mixed and augmented reality (ISMAR).
Kogler, J., Sulzbachner, C., Humenberger, M., & Eibensteiner, F. Address-event based stereo vision with bio-inspired silicon retina imagers. In Advances in theory and applications of stereo vision (pp. 165–188). InTech.
Kueng, B., Mueggler, E., Gallego, G., & Scaramuzza, D. (2016). Low-latency visual odometry using event-based feature tracks. In IEEE international conference on intelligent robots and systems (IROS) (pp. 16–23).
Lagorce, X., Meyer, C., Ieng, S.-H., Filliat, D., & Benosman, R. (2015). Asynchronous event-based multikernel algorithm for high-speed visual features tracking. IEEE Transactions on Neural Networks and Learning Systems, 26(8), 1710–1720.
Lichtsteiner, P., Posch, C., & Delbruck, T. (2008). A 128 \(\times\) 128 120 dB 15 \(\mu\)s latency asynchronous temporal contrast vision sensor. IEEE Journal of Solid-State Circuits, 43(2), 566–576.
Lucas, B. D., & Kanade, T. (1981). An iterative image registration technique with an application to stereo vision. In International joint conference on artificial intelligence (IJCAI) (pp. 674–679).
Maqueda, A. I., Loquercio, A., Gallego, G., García, N., & Scaramuzza, D. (2018). Event-based vision meets deep learning on steering prediction for self-driving cars. In IEEE conference on computer vision and pattern recognition (CVPR) (pp. 5419–5427).
Mueggler, E., Bartolozzi, C., & Scaramuzza, D. (2017). Fast event-based corner detection. In British machine vision conference (BMVC).
Mueggler, E., Huber, B., & Scaramuzza, D. (2014). Event-based, 6-DOF pose tracking for high-speed maneuvers. In IEEE international conference on intelligent robots and systems (IROS) (pp. 2761–2768). Event camera animation: https://youtu.be/LauQ6LWTkxM?t=25.
Mueggler, E., Rebecq, H., Gallego, G., Delbruck, T., & Scaramuzza, D. (2017). The event-camera dataset and simulator: Event-based data for pose estimation, visual odometry, and SLAM. The International Journal of Robotics Research, 36(2), 142–149.
Munda, G., Reinbacher, C., & Pock, T. (2018). Real-time intensity-image reconstruction for event cameras using manifold regularisation. International Journal of Computer Vision, 126(12), 1381–1393.
Mur-Artal, R., Montiel, J. M. M., & Tardós, J. D. (2015). ORB-SLAM: A versatile and accurate monocular SLAM system. IEEE Transactions on Robotics, 31(5), 1147–1163.
Ni, Z., Bolopion, A., Agnus, J., Benosman, R., & Régnier, S. (2012). Asynchronous event-based visual shape tracking for stable haptic feedback in microrobotics. IEEE Transactions on Robotics, 28(5), 1081–1089.
Ni, Z., Ieng, S.-H., Posch, C., Régnier, S., & Benosman, R. (2015). Visual tracking using neuromorphic asynchronous event-based cameras. Neural Computation, 27(4), 925–953.
Rebecq, H., Gallego, G., Mueggler, E., & Scaramuzza, D. (2018). EMVS: Event-based multi-view stereo—3D reconstruction with an event camera in real-time. International Journal of Computer Vision, 126(12), 1394–1414.
Rebecq, H., Horstschaefer, T., & Scaramuzza, D. (2017). Real-time visual-inertial odometry for event cameras using keyframe-based nonlinear optimization. In British machine vision conference (BMVC).
Rebecq, H., Horstschäfer, T., Gallego, G., & Scaramuzza, D. (2017). EVO: A geometric approach to event-based 6-DOF parallel tracking and mapping in real-time. IEEE Robotics and Automation Letters, 2(2), 593–600.
Rebecq, H., Ranftl, R., Koltun, V., & Scaramuzza, D. (2019). Events-to-video: Bringing modern computer vision to event cameras. In IEEE conference on computer vision and pattern recognition (CVPR) (pp. 3857–3866).
Reinbacher, C., Graber, G., & Pock, T. (2016). Real-time intensity-image reconstruction for event cameras using manifold regularisation. In British machine vision conference (BMVC).
Rosten, E., & Drummond, T. (2006). Machine learning for high-speed corner detection. In European conference on computer vision (ECCV) (pp. 430–443).
Scheerlinck, C., Barnes, N., & Mahony, R. (2018). Continuous-time intensity estimation using event cameras. In Asian conference on computer vision (ACCV).
Tedaldi, D., Gallego, G., Mueggler, E., & Scaramuzza, D. (2016). Feature detection and tracking with the dynamic and active-pixel vision sensor (DAVIS). In International conference on event-based control, communication and signal processing (EBCCSP).
Vasco, V., Glover, A., & Bartolozzi, C. (2016). Fast event-based Harris corner detection exploiting the advantages of event-driven cameras. In IEEE international conference on intelligent robots and systems (IROS).
Vidal, A. R., Rebecq, H., Horstschaefer, T., & Scaramuzza, D. (2018). Ultimate SLAM? Combining events, images, and IMU for robust visual SLAM in HDR and high-speed scenarios. IEEE Robotics and Automation Letters, 3(2), 994–1001.
Zhou, H., Yuan, Y., & Shi, C. (2009). Object tracking using SIFT features and mean shift. Computer Vision and Image Understanding, 113(3), 345–352.
Zhu, A. Z., Atanasov, N., & Daniilidis, K. (2017). Event-based feature tracking with probabilistic data association. In IEEE international conference on robotics and automation (ICRA) (pp. 4465–4470).
Zhu, A. Z., Thakur, D., Ozaslan, T., Pfrommer, B., Kumar, V., & Daniilidis, K. (2018). The multivehicle stereo event camera dataset: An event camera dataset for 3D perception. IEEE Robotics and Automation Letters, 3(3), 2032–2039.
Metadata
Title
EKLT: Asynchronous Photometric Feature Tracking Using Events and Frames
Authors
Daniel Gehrig
Henri Rebecq
Guillermo Gallego
Davide Scaramuzza
Publication date
22.08.2019
Publisher
Springer US
Published in
International Journal of Computer Vision / Issue 3/2020
Print ISSN: 0920-5691
Electronic ISSN: 1573-1405
DOI
https://doi.org/10.1007/s11263-019-01209-w
