- 1.D.H. Ballard, M.M. Hayhoe, P.K. Pook, and R.P.N. Rao. Deictic codes for the embodiment of cognition. Behavioral and Brain Sciences, 20: 723-767, 1997.
- 2.G.T. Buswell. How People Look At Pictures. Chicago: University of Chicago Press. 1935.
- 3.J.M. Findlay and R. Walker. A model of saccade generation based on parallel processing and competitive inhibition. Behavioral and Brain Sciences, 22: 348-362, 1999.
- 4.B. Fischer and E. Ramsperger. Human express saccades: extremely short reaction times of goal directed eye movements. Experimental Brain Research, 57: 191-195, 1984.
- 5.S. Gezeck and J. Timmer. Detecting multimodality in saccadic reaction time distributions in gap and overlap tasks. Biological Cybernetics, 87: 293-305, 1998.
- 6.J.E. Hoffman. Stages of processing in visual search and attention. In B.H. Challis and B.M. Velichkovsky (Eds.), Stratification in Cognition and Consciousness, pp. 43-71, Amsterdam/Philadelphia: John Benjamins. 1999.
- 7.E. Kowler. Eye movements. In S. Kosslyn and D.N. Osherson (Eds.), Visual Cognition, pp. 214-265, Cambridge, MA: MIT Press. 1995.
- 8.D.P. Munoz and R.H. Wurtz. Fixation cells in monkey superior colliculus I: Characteristics of cell discharge. Journal of Neurophysiology, 70: 559-575. 1993.
- 9.M. Pomplun. Analysis and Models of Comparative Visual Search. Aachen: Cuvillier. 1998.
- 10.R. Ulrich and J. Miller. Information processing models generating lognormally distributed reaction times. Journal of Mathematical Psychology, 37: 513-525. 1993.
- 11.P. Unema. Eye Movements and Mental Effort. Aachen: Shaker. 1995.
- 12.P. Unema and B.M. Velichkovsky. Processing Stages as Revealed by Dynamics of Visual Fixations: Distractor versus Relevance Effects. Paper presented to the 41st Annual Meeting of the Psychonomic Society. 2000.
- 13.T. Van Zandt and R. Ratcliff. Statistical mimicking of reaction time data: Single-process models, parameter variability, and mixtures. Psychonomic Bulletin and Review, 2: 20-54, 1995.
- 14.B.M. Velichkovsky. The vertical dimension of mental functioning. Psychological Research, 52: 282-289. 1990.
- 15.B.M. Velichkovsky. Communicating attention: Gaze position transfer in cooperative problem solving. Pragmatics and Cognition, 3(2): 199-222. 1995.
- 16.B.M. Velichkovsky. From levels of processing to stratification of cognition. In B.H. Challis and B.M. Velichkovsky (Eds.), Stratification in Cognition and Consciousness. Amsterdam/Philadelphia: John Benjamins. 1999.
- 17.B.M. Velichkovsky and J.P. Hansen. New technological windows into mind: There is more in eyes and brains for human-computer interaction. (Keynote address) In CHI-96: Human Factors in Computing Systems. New York: ACM Press. 1996.
- 18.B.M. Velichkovsky, B.H. Challis and M. Pomplun. Arbeitsgedächtnis und Arbeit mit dem Gedächtnis: Visuell-räumliche und weitere Komponenten der Verarbeitung [Working memory and work with memory: Visual-spatial and further components of processing]. Zeitschrift für experimentelle Psychologie, 42: 672-701. 1995.
- 19.B.M. Velichkovsky, A. Sprenger and P. Unema. Towards gaze-mediated interaction: Collecting solutions of the "Midas Touch Problem". In S. Howard, J. Hammond and G. Lindgaard (Eds.), Human-Computer Interaction: INTERACT'97 (Sydney, July 14-19th). London: Chapman & Hall. 1997.