
InvisibleEye: Mobile Eye Tracking Using Multiple Low-Resolution Cameras and Learning-Based Gaze Estimation


Abstract

Analysis of everyday human gaze behaviour has significant potential for ubiquitous computing, as evidenced by a large body of work in gaze-based human-computer interaction, attentive user interfaces, and eye-based user modelling. However, current mobile eye trackers are still obtrusive, which not only makes them uncomfortable to wear and socially unacceptable in daily life, but also prevents them from being widely adopted in the social and behavioural sciences. To address these challenges, we present InvisibleEye, a novel approach for mobile eye tracking that uses millimetre-size RGB cameras that can be fully embedded into normal glasses frames. To compensate for the cameras’ low image resolution of only a few pixels, our approach uses multiple cameras to capture different views of the eye, as well as learning-based gaze estimation to directly regress from eye images to gaze directions. We implement a prototype of our system and characterise its performance on three large-scale, increasingly realistic, and thus challenging datasets: 1) eye images synthesised using a recent computer graphics eye region model, 2) real eye images of 17 participants recorded under controlled lighting, and 3) eye images of four participants recorded over the course of four recording sessions in a mobile setting. We show that InvisibleEye achieves a top person-specific gaze estimation accuracy of 1.79° using four cameras with a resolution of only 5 × 5 pixels. Our evaluations not only demonstrate the feasibility of this novel approach but, more importantly, underline its significant potential for finally realising the vision of invisible mobile eye tracking and pervasive attentive user interfaces.
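To make the learning-based estimation step concrete, the sketch below shows one plausible way to regress gaze from several low-resolution views. It is a minimal illustration assuming a Keras/TensorFlow toolchain, four 5 × 5 pixel grayscale eye images as input, and a 2D (yaw, pitch) gaze output; the layer sizes, the fusion scheme, and the helper names (build_gaze_regressor, angular_error_deg) are illustrative assumptions, not the authors' published architecture.

```python
# Hypothetical sketch, not the authors' released code: a person-specific
# gaze regressor that fuses several low-resolution eye-camera views.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

NUM_CAMERAS = 4   # four views, matching the paper's best configuration
RESOLUTION = 5    # 5 x 5 pixels per camera

def build_gaze_regressor() -> keras.Model:
    """Encode one flattened input per camera, fuse, and regress gaze angles."""
    # At 5 x 5 pixels, convolutions buy little; dense layers suffice.
    inputs = [keras.Input(shape=(RESOLUTION * RESOLUTION,), name=f"cam_{i}")
              for i in range(NUM_CAMERAS)]
    encoded = [layers.Dense(32, activation="relu")(x) for x in inputs]
    fused = layers.Concatenate()(encoded)
    hidden = layers.Dense(64, activation="relu")(fused)
    gaze = layers.Dense(2, name="gaze_yaw_pitch")(hidden)  # regression head
    model = keras.Model(inputs=inputs, outputs=gaze)
    model.compile(optimizer="adam", loss="mse")
    return model

def angular_error_deg(pred: np.ndarray, true: np.ndarray) -> np.ndarray:
    """Angle in degrees between 3D gaze vectors built from (yaw, pitch) radians."""
    def to_vec(a):
        yaw, pitch = a[:, 0], a[:, 1]
        return np.stack([np.cos(pitch) * np.sin(yaw),
                         np.sin(pitch),
                         np.cos(pitch) * np.cos(yaw)], axis=1)
    v1, v2 = to_vec(pred), to_vec(true)
    cos = np.clip(np.sum(v1 * v2, axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cos))

if __name__ == "__main__":
    model = build_gaze_regressor()
    # Dummy person-specific calibration data; real use would record
    # synchronised eye images and ground-truth gaze targets.
    x = [np.random.rand(256, RESOLUTION * RESOLUTION) for _ in range(NUM_CAMERAS)]
    y = np.random.rand(256, 2) - 0.5  # gaze angles in radians
    model.fit(x, y, epochs=2, batch_size=32, verbose=0)
    print(angular_error_deg(model.predict(x, verbose=0), y).mean())
```

Under these assumptions, a reported accuracy such as 1.79° would correspond to the mean of angular_error_deg over held-out test samples.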



          Published in

          Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Volume 1, Issue 3 (September 2017), 2,023 pages
          EISSN: 2474-9567
          DOI: 10.1145/3139486

          Copyright © 2017 ACM

          Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

          Publisher

          Association for Computing Machinery

          New York, NY, United States

          Publication History

          • Received: 1 May 2017
          • Accepted: 1 July 2017
          • Published: 11 September 2017
