Abstract
Foveated rendering is a performance optimization that exploits the well-known degradation of visual acuity in the periphery. It reduces computational cost by showing a high-quality image in the user's central (foveal) vision and a lower-quality image in the periphery. Foveated rendering is a promising optimization for Virtual Reality (VR) graphics, but it generally requires accurate, low-latency eye tracking to ensure correctness even when a user makes large, fast eye movements such as saccades. However, due to the phenomenon of saccadic omission, these requirements may be relaxed.
In this article, we explore the effect of eye-tracking latency on foveated rendering in VR applications. We evaluated the detectability of visual artifacts for three techniques capable of generating foveated images and for three different radii of the high-quality foveal region. Our results show that larger foveal regions allow more aggressive foveation, and this effect is more pronounced for temporally stable foveation techniques. Added eye-tracking latency of 80--150 ms causes a significant reduction in the acceptable amount of foveation, but no similar decrease was found for shorter added latencies of 20--40 ms, suggesting that a total system latency of 50--70 ms could be tolerated.
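To make the core idea concrete, the following is a minimal sketch (not the authors' implementation) of gaze-contingent foveation: a full-quality render is blended with an upsampled low-quality render, with full weight inside a foveal radius around the gaze point and a smooth falloff into the periphery. All function names, the radius, and the falloff width are illustrative assumptions.

```python
import numpy as np

def foveation_mask(height, width, gaze_xy, fovea_radius, falloff):
    """Weight map in [0, 1]: 1.0 inside the foveal region around gaze_xy,
    ramping linearly down to 0.0 over `falloff` pixels in the periphery."""
    ys, xs = np.mgrid[0:height, 0:width]
    dist = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1])
    return np.clip(1.0 - (dist - fovea_radius) / falloff, 0.0, 1.0)

def foveate(full_res, low_res_upsampled, gaze_xy, fovea_radius=120, falloff=80):
    """Blend a full-quality image with an upsampled low-quality image
    according to distance from the tracked gaze point (illustrative only)."""
    h, w = full_res.shape[:2]
    weight = foveation_mask(h, w, gaze_xy, fovea_radius, falloff)[..., None]
    return weight * full_res + (1.0 - weight) * low_res_upsampled
```

Eye-tracking latency enters this sketch through `gaze_xy`: if the reported gaze lags the true gaze by more than the system can tolerate, the low-quality periphery lands on the new fixation point after a saccade, which is exactly the artifact whose detectability the article measures.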
Supplemental Material
Supplemental movie, appendix, image, and software files for "Latency Requirements for Foveated Rendering in Virtual Reality" are available for download.