
Latency Requirements for Foveated Rendering in Virtual Reality

Published: 14 September 2017

Abstract

Foveated rendering is a performance optimization based on the well-known degradation of peripheral visual acuity. It reduces computational costs by showing a high-quality image in the user’s central (foveal) vision and a lower-quality image in the periphery. Foveated rendering is a promising optimization for Virtual Reality (VR) graphics, but it generally requires accurate, low-latency eye tracking to ensure correctness even when a user makes large, fast eye movements such as saccades. However, due to the phenomenon of saccadic omission, these requirements may be relaxed.

In this article, we explore the effect of latency on foveated rendering in VR applications. We evaluated the detectability of visual artifacts for three techniques capable of generating foveated images and for three different radii of the high-quality foveal region. Our results show that larger foveal regions allow for more aggressive foveation, but this effect is more pronounced for temporally stable foveation techniques. Added eye-tracking latency of 80--150 ms causes a significant reduction in the acceptable amount of foveation, but no similar decrease was found for shorter added latencies of 20--40 ms, suggesting that a total system latency of 50--70 ms could be tolerated.




Published in

ACM Transactions on Applied Perception, Volume 14, Issue 4 (Special Issue SAP 2017), October 2017, 63 pages.
ISSN: 1544-3558
EISSN: 1544-3965
DOI: 10.1145/3140462

        Copyright © 2017 ACM


Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

• Received: 1 July 2017
• Accepted: 1 July 2017
• Published: 14 September 2017
