Review
Imagery and spatial processes in blindness and visual impairment

https://doi.org/10.1016/j.neubiorev.2008.05.002

Abstract

The objective of this review is to examine and evaluate recent findings on cognitive functioning (in particular imagery processes) in individuals with congenital visual impairments, including total blindness, low-vision and monocular vision. As one might expect, the performance of blind individuals in many behaviours and tasks requiring imagery can be inferior to that of sighted subjects; however, surprisingly often this is not the case. Interestingly, there is evidence that the blind often employ different cognitive mechanisms than sighted subjects, suggesting that compensatory mechanisms can overcome the limitations of sight loss. Taken together, these studies suggest that the nature of perceptual input on which we commonly rely strongly affects the organization of our mental processes. We also review recent neuroimaging studies on the neural correlates of sensory perception and mental imagery in visually impaired individuals that have cast light on the plastic functional reorganization mechanisms associated with visual deprivation.

Introduction

Mental imagery is a critical function of human cognition and is often regarded as a “quasi-perceptual” experience of objects in their physical absence. Although mental imagery may bear the characteristics of any sensory modality – e.g., auditory, haptic, kinesthetic, olfactory (e.g., Bensafi et al., 2003, Eardley and Pring, 2006a, Reisberg, 1992, Segal and Fusella, 1971) – vision is likely to represent its basic reference (Kosslyn, 1994); ‘seeing in the mind's eye’ or ‘having a picture in one's head’ are expressions commonly used to define this kind of imagery. Visual imagery and visual perception share common functional mechanisms, as demonstrated at both the behavioural (e.g., Brockmole et al., 2003, Craver-Lemley and Reeves, 1992, Kosslyn et al., 1999a) and the functional level (e.g., Kosslyn et al., 1995, O’Craven and Kanwisher, 2000, Roland and Gulyás, 1995). This overlap raises questions regarding the nature of mental imagery in individuals who have never seen or have always had reduced visual capacity.

In this review we summarize the findings of studies that have assessed the cognitive and functional features of individuals who have been affected since birth by visual impairments of varying severity; that is, by total blindness, low-vision or monocular vision (i.e., normal sight but through one eye only). Specifically, we review studies dealing with visual deficits due to peripheral (ocular) causes; discussion of cortical blindness is beyond the scope of the present review. Furthermore, we specifically focus on congenital and early visual impairment.

The first section of the review examines the performance of blind individuals in various mental tasks. This is followed by a discussion of the perceptual and cognitive capacities of individuals with low-vision or monocular vision. Finally, we present evidence from recent brain imaging studies that support the view of a supramodal brain.

Several neuroimaging studies have shown that the cortical areas supporting visual perception and imagery are analogous (e.g., Chen et al., 1998, Ishai et al., 2000, Kaski, 2002, Kosslyn et al., 1997, Kosslyn et al., 1999b). Since a congenitally blind person has never received external visual stimulation, these findings might lead us to conclude that such a person cannot mentally generate a visual representation. However, there is evidence that blind individuals do have mental imagery and, furthermore, that the mechanisms underlying imagery in the sighted and the congenitally blind do not appear to be markedly dissimilar (e.g., Aleman et al., 2001, Kerr, 1983, Vecchi, 1998, Zimler and Keenan, 1983).

Moreover, the relationship between perception and imagery is much less straightforward than one might expect. Firstly, the visual cortical areas serving visual imagery and those serving visual perception do not completely overlap (e.g., D’Esposito et al., 1997). For instance, using functional magnetic resonance imaging (fMRI), Knauff et al. (2000) demonstrated that the primary visual cortex, which is critical for visual perception, is not directly involved in visual mental imagery in sighted individuals. Instead, visual imagery seems to be mediated by the activation of a network of spatial subsystems and higher-order visual areas (Knauff et al., 2000). Secondly, visual cortical areas can be activated by objects presented in non-visual sensory modalities, for instance touch (e.g., Pietrini et al., 2004). Thirdly, in the presence of severe visual deprivation, robust reorganization phenomena usually take place in the brain, so that cortical areas originally devoted to processing visual information are largely recruited by other sensory modalities (e.g., D’Angiulli and Waraich, 2002). These findings are consistent with the view that mental images are likely to be the end product of a series of constructive processes using different sources of information rather than mere copies of a perceptual input (e.g., Cornoldi et al., 1998).

A number of studies have investigated mental imagery in blind individuals. One approach has focused on the use of basic mental imagery paradigms where visual imagery is considered to be critically involved; a second approach has focused on the development of spatial knowledge in blind humans.

There is ample evidence to indicate that the performance of blind individuals is remarkably similar to that of sighted individuals in tasks presumed to involve visual imagery. Like normally sighted individuals, the blind tend to remember concrete, imageable words better than abstract words (e.g., Cornoldi et al., 1979). Furthermore, “imagery” instructions facilitate word retrieval in both the sighted and the blind (e.g., Cornoldi et al., 1989, Jonides et al., 1975). Interestingly, the representation of colours might be present to some extent in blind individuals (Marmor, 1978) and, accordingly, colour names serve as effective clustering categories for both blind and sighted subjects (Zimler and Keenan, 1983).

However, similar levels of performance can be achieved in imagery tasks by relying on different cognitive strategies: hence, in the absence of vision, congenitally blind individuals may rely more on verbal/semantic, haptic or purely spatial (i.e., without a visual content) representations. For example, colour representation in the blind may also be categorical and associated with abstract knowledge, as suggested by the fact that blind individuals are more prototypical in defining the colour of objects (Zimler and Keenan, 1983). Similarly, the effect of the imageability of an item may reflect not only specific imagery processes but also a series of different semantic processes (Marschark and Cornoldi, 1991, for a review). Blind individuals are not affected by the “visual impedance effect” (i.e., the interfering effect of the activation of irrelevant visual images on reasoning processes) (Knauff and Johnson-Laird, 2002), suggesting that they may rely predominantly on non-visual types of mental representations (Knauff and May, 2006). Similarly, verbally induced visual illusions in imagery are not found in congenitally blind people (Renier et al., 2006).

It seems that a strategy based on haptic imagery (i.e., mental representations generated on the basis of previous haptic experience) can be almost as accurate as a strategy based on visual imagery in many different cognitive tasks. For instance, Noordzij et al. (2007) compared the performance of early blind and sighted subjects in a form comparison task in which subjects had to imagine the outlines of three named objects and to indicate the odd-one-out (i.e., the most deviant in terms of form characteristics). Interestingly, they did not find differences between early blind and sighted controls. It seems that the early blind individuals were relying on haptic imagery in performing this task, and that the haptic strategy was as accurate as the visual imagery strategy adopted by the sighted controls (see also Aleman et al., 2001, Postma et al., 2007).

These findings support a growing body of evidence suggesting that blind individuals can compensate for their visual deficit by relying on experience in other sensory domains. In fact, it is likely that blind individuals compensate for the lack of vision both at a perceptual level, by enhancing their auditory capacities (Röder et al., 1999, Röder et al., 2000), and at a higher cognitive level, by developing conceptual networks with more acoustic and tactile nodes (Röder and Rösler, 2003), thus contradicting the view that semantic networks are less elaborate in congenitally blind individuals (Pring, 1988). In line with this, a recent experiment by Röder and Rösler (2003) found that in an auditory recognition task designed to assess the effect of physical and semantic encoding on memory for sounds, blind participants outperformed sighted ones under both encoding conditions. Moreover, several studies have shown superior performance by blind compared to sighted subjects in auditory digit and word span tests (e.g., Hull and Mason, 1995, Röder and Neville, 2003) and in long-term memory for voices and verbal material (e.g., Bull et al., 1983; Röder and Neville, 2003, Röder et al., 2001; but see Miller, 1992). Furthermore, it has been shown that the congenitally blind have superior tactile acuity (Goldreich and Kanics, 2006) and are better at auditory localization (Lessard et al., 1998, Röder et al., 1999) than sighted individuals. Interestingly, these compensations can be specific to the “where” and “what” pathways (e.g., Ungerleider and Haxby, 1994): Chen et al. (2006) have recently reported that while early visual deprivation may enhance blind people's auditory processing along the “where” pathway, this does not seem to be the case for the “what” pathway. Indeed, blind individuals are significantly faster than sighted controls at detecting and localizing an auditory stimulus, but are significantly slower at discriminating the frequency (“what”) of the target (Chen et al., 2006).

An interesting line of research has recently investigated the representation of numbers in congenitally/early blind individuals. Castronovo and Seron (2007a, 2007b) observed that blind and sighted individuals behaved in a similar way on a numerical comparison task in the auditory modality. This finding is interesting in the context of behavioural and neuroimaging data indicating a close connection between representations of numbers and space (for a review, Hubbard et al., 2005). The fact that blind individuals demonstrate the classic properties of normal number representation – such as the “distance” effect (i.e., longer reaction times when discriminating numerically close quantities compared to more distant quantities) (Castronovo and Seron, 2007a, Szucs and Csépe, 2005), the “SNARC” effect (i.e., responses to small numbers are faster on the left side, while responses to large numbers are faster on the right side) (Castronovo and Seron, 2007a), and obedience to Weber's law (i.e., with increasing target magnitudes, numerical processing becomes more imprecise) (Castronovo and Seron, 2007b) – suggests that semantic numerical representation can develop in a spatial format even in the absence of vision. That is, the lack of vision since birth or early childhood does not preclude the development of a mental left-to-right oriented continuum representing small and large numbers. Accordingly, electroencephalography (EEG) data have demonstrated that the numerical distance effect has similar parietal correlates in sighted and blind individuals (Szucs and Csépe, 2005).
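As a point of reference, these signatures are commonly summarised in ratio terms; the notation below is ours and not taken from the studies cited. Weber's law states that the just-noticeable difference between two numerosities grows in proportion to their magnitude,

\Delta n / n = w \quad (\text{constant Weber fraction}),

so that discriminating n_1 from n_2 depends on their ratio n_1/n_2 (or, under a logarithmic encoding, on \log(n_1/n_2)) rather than on their absolute difference. Numerically close pairs therefore yield longer reaction times (the distance effect), and larger target magnitudes yield noisier estimates, as observed in both blind and sighted participants.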

Spatial cognition allows us to orient in and interact with our environment. Vision is usually the primary sensory modality for spatial cognition and object recognition (e.g., Eimer, 2004, Pick et al., 1969). In fact, the visual system allows the acquisition of highly detailed spatial information that cannot be obtained through other sensory modalities such as touch or hearing. It is likely that vision plays an important role in representing spatial information in other modalities (Pick, 1974, Pick et al., 1969). For instance, the generation of spatial mental representations of the environment on the basis of auditory experience is likely to require calibration by the visual system in order to work efficiently (Zwiers et al., 2001). Tactile stimuli are usually remapped by sighted people into externally defined coordinates mainly modulated by visual inputs (Röder et al., 2004; see also Hötting et al., 2004), and normal vision provides “by default” an external coordinate frame for multisensory (e.g., auditory-manual) action control (Röder et al., 2007).

The loss of vision is therefore expected to have a major impact on spatial cognition. Research on spatial cognition in congenitally totally blind individuals has a longstanding tradition. The results of many behavioural studies indicate that vision is necessary for the acquisition of spatial knowledge (e.g., Byrne and Salter, 1983, Gaunet et al., 1997, Rieser et al., 1986, Rieser et al., 1992; see Kitchin and Jacobson, 1997, for a review). However, there is also evidence that spatial knowledge can still be acquired in the absence of vision (e.g., Hill et al., 1993, Juurmaa and Lehtinen-Railo, 1994, Millar, 1994, Passini and Proulx, 1988, Thinus-Blanc and Gaunet, 1997). Indeed, although blind people have to rely exclusively on auditory and haptic cues to activate spatial concepts, they can perform similarly to (or sometimes even better than) blindfolded sighted individuals in many spatial tasks (e.g., Morrongiello et al., 1995). As discussed above, this may depend on specific compensatory mechanisms that intervene in the case of a congenital sensory deficit. The magnitude of the compensation may depend on the type and extent of the visual deficit (Despres et al., 2005, Dufour and Gérard, 2000).

However, learned sensory substitution may not always be sufficient to achieve a normal level of performance in spatial tasks. For example, visual experience seems to be necessary for efficient representation and updating of haptic spatial information (Pasqualotto and Newell, 2007). In particular, Pasqualotto and Newell (2007) found that congenitally blind participants had difficulty spatially updating a haptically learned scene as they moved around it, supporting the important role of visual experience in spatial understanding.

Some recent studies have specifically investigated navigation abilities in congenitally blind individuals, revealing that lack of vision induces the generation of a “route-like” (rather than “survey-like”) representation of the environment. For instance, Noordzij et al. (2006) tested the ability to form a spatial mental model on the basis of survey-like and route-like descriptions. Blind people performed better after listening to a route-like than a survey-like description, while the opposite pattern was found in the sighted. Furthermore, Rieser et al. (1980) reported that blind individuals had difficulty estimating Euclidean distances between locations, whereas they encountered less difficulty in estimating functional distance (in units such as paces). This finding might depend on a specific difficulty in generating a survey-like mental representation of the environment: accordingly, blind participants were found to be less accurate than blindfolded sighted controls in pointing to locations in a room (a task that required generating a perspective structure of the layout) (Rieser et al., 1986).

However, blind individuals may also be able to generate survey-like representations, for example when they have to create a spatial representation of the environment on the basis of a tactile map. Tactile maps are useful tools because they include only the essential spatial information (reducing irrelevant noise), offering an almost simultaneous view of the to-be-represented space (e.g., Golledge, 1991, Ungar et al., 1993, Ungar et al., 1994, Ungar et al., 1995, Ungar et al., 1996a, Ungar et al., 1996b). In a study conducted to assess the effect of different instructions (direct experience vs. tactile map exploration vs. verbal description) on acquiring knowledge about a real unfamiliar space in a group of blind participants, it was found that even a brief exploration of a tactile map (but not a verbal description of the route) facilitated blind individuals’ orientation and mobility in that environment (Espinosa et al., 1998). Further, Tinti et al. (2006) demonstrated that the blind can be even more efficient than blindfolded sighted individuals in a series of survey-representation-based tasks. These findings demonstrate that blind people can take advantage of the survey-like or aerial perspective of a map, apply a scale transformation, and translate a series of two-dimensional relationships into a real three-dimensional space (Espinosa et al., 1998).

It has been suggested that blind individuals lack good exploratory strategies (Thinus-Blanc and Gaunet, 1997), but also that – due to their typical exploratory constraints – they tend to rely on more egocentric and experience-based representations. A recent study (Postma et al., 2006) addressed the role of visual processing mechanisms and visual experience in haptic perception of orientation in peripersonal space. Sighted, early blind, and late blind participants were tested in a parallel-setting task requiring them to rotate a test bar so that it was parallel to the orientation of a previously touched reference bar, either with or without a delay of a few seconds. It emerged that the performance of the sighted and late blind participants improved in the delayed condition, while the early blind were not significantly affected by the time manipulation. These findings support the hypothesis that a small delay allows for a shift from an egocentric to a more reliable allocentric spatial reference system. Support for this view can also be found in an earlier study by Rossetti et al. (1996), indicating that delaying a pointing action by a few seconds induced the use of an allocentric system of reference in blindfolded sighted participants, while early blind subjects tended to rely on a more egocentric system of reference. Hence, early visual experience may allow the development of a specific mechanism responsible for the shift from a more immediate egocentric reference frame to an allocentric one, thus enhancing haptic orientation (Postma et al., 2006; see also Newport et al., 2002). Nonetheless, it is worth noting that congenitally blind children may also rely on an allocentric frame of reference in a delayed pointing task (Gaunet et al., 2007) and that they may also be more accurate in pointing than sighted blindfolded controls (Ittyerah et al., 2007).

Different factors might be responsible for these heterogeneous results, such as differences in familiarity with the task and in the capacity to use visual imagery strategies to cope with task demands (e.g., Ernest, 1987), the difficulty of the required inferences, the type of environment and training (Millar, 1994, Worchel, 1951), the type of response required for distance and direction estimates (Haber et al., 1993), as well as the different mobility skills of the participants (Loomis et al., 1993).

As one might expect, the lack of vision also leads to some limitations in imagery abilities. Haptic input (or other non-visual strategies) will under many circumstances lead to an inferior, or at least a different, spatial representation than one supported also by visual input. Even when the level of performance in the blind is comparable to that of sighted individuals, different strategies may have been employed by the two groups. In fact, there is evidence that, when performing spatial imagery tasks, blind individuals rely on verbal/propositional descriptions. For instance, Vanlierde and Wanet-Defalque (2004) found similar performance in blind and sighted subjects in a spatial task requiring the memorization of target locations presented on a series of grids: nonetheless, sighted and late blind subjects reported having taken advantage of a visual process, whereas early blind participants reported encoding the relevant locations in an XY-coordinate system without visual representation (Vanlierde and Wanet-Defalque, 2004). However, there is also evidence inconsistent with this view. In a task requiring the recall of mentally constructed pathways within a matrix, Vecchi (1998) observed that blind and sighted individuals were affected to a comparable extent by a concurrent articulatory suppression task, suggesting that the two groups did not differ in the extent to which they relied on verbal strategies. Accordingly, Fleming et al. (2006) have recently reported that blind people tend to generate and remember analogue (i.e., non-propositional) representations of object layouts as much as sighted participants do.

A number of studies have found inferior performance by blind individuals compared to sighted controls in imagery tasks. For instance, Noordzij et al. (2007) found that early blind participants were outperformed by sighted controls in a task requiring participants to imagine two clock times and decide which one showed the greater angle between the hour and the minute hands. In one of the first studies on imagery abilities in blindness, Marmor and Zaback (1976) found that response time in deciding whether a rotated shape matched the referent one was similarly affected by the rotation angle in blind and sighted subjects, but the blind performed the task more slowly than controls (see also Carpenter and Eisenberg, 1978). These and other findings (see Cornoldi et al., 1993, Herman et al., 1983) indicate a critical role for visual experience (especially in the first years of life, see Thinus-Blanc and Gaunet, 1997) in the development of adequate spatial cognition.
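Schematically, and in our own notation rather than that of Marmor and Zaback (1976), the chronometric signature of mental rotation is a linear increase of response time with angular disparity,

RT(\theta) = a + b\,\theta,

where the slope b indexes the rate of mental rotation and the intercept a captures encoding, comparison and response stages. A comparable dependence on \theta in both groups, together with overall slower responses in the blind (appearing as a larger intercept and/or slope), is consistent with a shared rotation process operating on a differently derived (e.g., haptic) representation.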

It has been argued that some of these limitations might depend on the degree of familiarity with the experimental material used. For instance, Heller et al. (1999) showed that blind individuals were quite quick and accurate in mentally rotating a series of stimuli when these were familiar to them (e.g., Braille characters). Similarly, when required to haptically recognize a series of alphabetical letters, blind participants performed worse than sighted ones who were visually presented with the same stimuli, whereas performance was comparable when Braille characters (and not alphabetical letters) were presented to the blind group (Bliss et al., 2004). Thus, when the familiarity of the test stimuli is controlled for, the haptic skills developed by blind individuals can match the efficiency of the visual skills available to sighted participants (Bliss et al., 2004).

Another critical factor accounting for at least some of the limitations shown by blind individuals in imagery tasks is the sequential nature of the tactile perceptual experience (Cattaneo and Vecchi, 2008). In a series of studies by De Beni and co-authors (e.g., Cornoldi et al., 1989, De Beni and Cornoldi, 1985a, De Beni and Cornoldi, 1985b), interactive images involving pairs, triplets or quartets of items had to be generated and memorized. When only two items had to be imagined together, both sighted and blind participants took advantage of the researcher's suggestion to imagine the objects together; as a consequence, their memory for the items was higher than when they relied on a verbal strategy. However, when three or four items had to be mentally combined, blind subjects experienced severe difficulties, whereas the performance of sighted participants remained high. The simultaneous perception of many objects is typical of vision, while haptic and auditory perception depends on sequential processing. For this reason, it is likely that congenitally blind individuals do not develop efficient processes for simultaneously treating a large amount of information (a processing limitation).

The importance of the simultaneous vs. sequential nature of the dominant perceptual experience in shaping cognitive functioning is supported by various studies. For instance, although it is usually more difficult to recognize line drawings by touch than by sight (e.g., Heller, 1989, Lederman et al., 1990), when visual exploration is made sequential to resemble haptic exploration, the advantage of vision may drastically decrease (Loomis et al., 1991). In the study by Loomis et al. (1991), a series of pictures had to be recognized either haptically or visually. In the haptic task, raised-line drawings had to be explored (either with one or two fingers); in the visual task, the same images were presented on a computer screen but could be viewed only through a stationary aperture at the centre of the display (with the size of the aperture simulating the use of either one or two fingertips). In this way, vision was “serialised” to resemble the haptic perceptual experience. Under these conditions, recognition became as difficult for vision as it was for touch (Loomis et al., 1991).
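As a purely illustrative sketch (not the original stimulus software used by Loomis et al., 1991), the “serialised” viewing condition can be thought of as masking a picture so that only a small aperture around the current exploration point is visible at any moment:

import numpy as np

def aperture_view(image, centre, radius):
    """Return a copy of `image` (2-D array) in which everything outside a
    circular aperture of `radius` pixels around `centre` = (row, col) is
    blanked, mimicking viewing a drawing through a small window."""
    rows, cols = np.ogrid[:image.shape[0], :image.shape[1]]
    mask = (rows - centre[0]) ** 2 + (cols - centre[1]) ** 2 <= radius ** 2
    out = np.zeros_like(image)
    out[mask] = image[mask]
    return out

# In the paradigm described above the aperture stays at the centre of the
# display while the drawing is moved behind it, so the picture (like a
# raised-line drawing explored with one or two fingertips) must be
# integrated from a sequence of such restricted views over time.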

Hence, the sequential nature of the haptic experience might be a critical factor in determining differences between touch and vision in recognition tasks, and probably also at a higher cognitive level. Support for this hypothesis can be found in a recent study by Vecchi et al. (2004), who presented blindfolded sighted and blind participants with a series of wooden matrices on which some target locations (i.e., cubes covered with sandpaper) had to be memorized. Although blind individuals were able to perform the task efficiently when target locations appeared on a single grid, their performance significantly decreased when target locations were presented and had to be retrieved across two different grids (Vecchi et al., 2004). These results show that the difficulty in simultaneously treating multiple stimuli does not stem from poor knowledge, since it can be observed with material that was unfamiliar (not related to long-term knowledge) to both sighted and blind individuals.

In this regard, one might expect serial memory (i.e., memory for the order in which different items are encountered) to be particularly critical for blind individuals in generating a mental picture of the world. In fact, a recent study has demonstrated that blind individuals outperformed sighted controls in a serial memory task in which subjects had to remember both the learned items and their ordinal position in the list (Raz et al., 2007).

Other selective limitations have been observed in the imagery abilities of blind individuals. For instance, when required to imagine a pathway along 3D matrices, blind individuals encounter difficulties (e.g., Cornoldi et al., 1991, Cornoldi et al., 1993, Vecchi, 1998, Vecchi et al., 1995). This might be due to a reduced ability to update the pathway's movements, which is more critical in 3D patterns; the perceptual features of 3D configurations (for example the number of corners, see Heller and Kennedy, 1990); the specific characteristics of the vertical dimension (Cornoldi and Vecchi, 2003); and – more generally – the lack of visual experience with 3D objects. Accordingly, the blind find it difficult to mentally represent objects in perspective (Arditi et al., 1988, Vanlierde and Wanet-Defalque, 2005). However, these results have been questioned by other studies indicating that, under specific circumstances, 3D material can actually facilitate performance in blind individuals, owing to their greater familiarity with haptically experienced 3D objects (Ballesteros and Reales, 2004, Eardley and Pring, 2006b, Revesz, 1950). In this regard, it has been demonstrated that blind individuals are more accurate than sighted ones in representing the size of familiar objects (Smith et al., 2005; see also Bailes and Lambert, 1986). Indeed, sighted but not blind participants tend to overestimate the size of familiar objects when they are required to first haptically explore the objects and then estimate their size by indicating a hand span (Smith et al., 2005). Hence, these data suggest that a visual memory of the objects (only familiar objects were used) can result in a disadvantage when the size of these objects has to be judged by hand without visual feedback (Smith et al., 2005). Moreover, the blind may use the laws of perspective in drawings (Kennedy and Juricevic, 2006) and they can understand haptic pictures including linear perspective after minimal training (Heller et al., 2005), suggesting that the laws of perspective might be part of a blind person's mental imagery.

Section snippets

Reduced vision and cognitive abilities

This section aims to summarize the current knowledge on the cognitive abilities of individuals congenitally affected by severely reduced vision or monocular vision.

A common functional definition of visual impairment is any visual deficit leading to an inability to read a newspaper at a normal distance even with the best refractive correction. More appropriate parameters for evaluating human visual capacity are visual acuity and the extent of the visual field.
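For reference (standard clinical conventions, not specific to the studies reviewed here), visual acuity is usually expressed as a Snellen fraction or its decimal equivalent,

\text{decimal acuity} = 1 / \text{MAR},

where MAR is the minimum angle of resolution in minutes of arc: 20/20 (6/6) vision corresponds to MAR = 1 arcmin (decimal acuity 1.0), whereas a best-corrected acuity of 20/200 or worse in the better eye (MAR of 10 arcmin or more) is the threshold adopted by many legal definitions of blindness.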

Visual acuity measures the

Functional brain organization in blind individuals

Research on congenitally blind individuals shows that an early loss of visually driven activity may result in massive functional reorganization, with the deprived cortex being activated by inputs from other sensory modalities (e.g., Amedi et al., 2003, Collignon et al., 2007, Gilbert and Walsh, 2004, Kupers et al., 2006, Raz et al., 2005).

Different forms of tactile recognition, as well as linguistic and other non-visual sensory and cognitive processes, such as Braille reading or verbal

Supramodal organization of the brain

Recent functional studies demonstrate that tactile and auditory information about objects also activates ventral visual association areas in sighted individuals (for a review, Amedi et al., 2005).

These findings regarding a task-specific recruitment of temporal–occipital visual areas, in particular of the lateral occipital cortical region named LOtv (i.e., lateral occipital tactile-visual region), during both visual and tactile recognition of meaningful objects, lend support to the hypothesis

Conclusions

The perceptual limitations of the congenitally blind are reflected at a higher cognitive level, probably because their cognitive mechanisms have developed through touch and hearing, which allow only sequential processing of information. As a consequence, the cognitive functioning of blind individuals seems to be essentially organized in a “sequential” fashion. By contrast, vision – allowing the simultaneous perception of distinct images – facilitates simultaneous processing at a higher cognitive

Acknowledgment

We wish to thank Juha Silvanto for his comments on the manuscript.

References (232)

  • O. Despres et al., The extent of visual deficit and auditory spatial compensation: evidence from self-positioning from auditory cues, Brain Res. Cogn. Brain Res. (2005)
  • A. Dufour et al., Improved auditory spatial sensitivity in near-sighted subjects, Cogn. Brain Res. (2000)
  • D. Ellemberg et al., Better perception of global motion after monocular than after binocular deprivation, Vis. Res. (2002)
  • A. Garg et al., Orienting auditory spatial attention engages frontal eye fields and medial occipital cortex in congenitally blind humans, Neuropsychologia (2007)
  • E.R. Gizewski et al., Cross-modal plasticity for sensory and motor activation patterns in blind subjects, Neuroimage (2003)
  • E.G. Gonzalez et al., Foveal and eccentric acuity in one-eyed observers, Behav. Brain Res. (2002)
  • D.L. Adams et al., Complete pattern of ocular dominance columns in human primary visual cortex, J. Neurosci. (2007)
  • A. Aleman et al., Visual imagery without visual experience: evidence from congenitally blind people, Neuroreport (2001)
  • A. Amedi et al., Transcranial magnetic stimulation of the occipital pole interferes with verbal processing in blind subjects, Nat. Neurosci. (2004)
  • A. Amedi et al., Early ‘visual’ cortex activation correlates with superior verbal memory performance in the blind, Nat. Neurosci. (2003)
  • A. Amedi et al., Shape conveyed by visual-to-auditory sensory substitution activates the lateral occipital complex, Nat. Neurosci. (2007)
  • A. Amedi et al., Functional imaging of human crossmodal identification and object recognition, Exp. Brain Res. (2005)
  • P. Arno et al., Auditory substitution of vision: pattern recognition by the blind, Appl. Cognit. Psychol. (2001)
  • S.M. Bailes et al., Cognitive aspects of haptic form recognition by blind and sighted subjects, Br. J. Psychol. (1986)
  • S. Ballesteros et al., Visual and haptic discrimination of symmetry in unfamiliar displays extended in the z-axis, Perception (2004)
  • C.I. Baker et al., Reorganization of visual processing in macular degeneration, J. Neurosci. (2005)
  • R. Barbeito, Sighting from the cyclopean eye: the cyclops effect in preschool children, Percept. Psychophys. (1983)
  • M. Bensafi et al., Olfactomotor activity during imagery mimics that during perception, Nat. Neurosci. (2003)
  • A.E. Bigelow, Blind and sighted children's spatial knowledge of their home environments, Int. J. Behav. Dev. (1996)
  • R. Blake et al., Neural synergy between kinetic vision and touch, Psychol. Sci. (2004)
  • F. Blanco et al., Haptic exploration and mental estimation of distances in a fictitious island: from mind's eye to mind's hand, J. Vis. Impair. Blind. (2003)
  • F.B. Brady, A singular view: the art of seeing with one eye, Medical Economics Books (1972)
  • J.R. Brockmole et al., The locus of spatial attention during the temporal integration of visual memories and visual percepts, Psychon. Bull. Rev. (2003)
  • C. Büchel et al., Different activation patterns in the visual cortex of late and congenitally blind subjects, Brain (1998)
  • C. Büchel et al., A multimodal language region in ventral visual pathway, Nature (1998)
  • R. Bull et al., The voice-recognition accuracy of blind listeners, Perception (1983)
  • H. Burton et al., Dissociating cortical regions activated by semantic and phonological tasks: a fMRI study of blind and sighted people, J. Neurophysiol. (2003)
  • H. Burton et al., Reading embossed capital letters: an fMRI study in blind and sighted individuals, Hum. Brain Mapp. (2006)
  • H. Burton et al., Tactile-spatial and cross-modal attention effects in the primary somatosensory cortical areas 3b and 1–2 of rhesus monkeys, Somatosens. Mot. Res. (2000)
  • H. Burton et al., Cortical activity to vibrotactile stimulation: an fMRI study in blind and sighted individuals, Hum. Brain Mapp. (2004)
  • H. Burton et al., Adaptive changes in early and late blind: a fMRI study of Braille reading, J. Neurophysiol. (2002)
  • H. Burton et al., Adaptive changes in early and late blind: a fMRI study of verb generation to heard nouns, J. Neurophysiol. (2002)
  • R.W. Byrne et al., Distance and directions in the cognitive maps of the blind, Can. J. Psychol. (1983)
  • P.A. Carpenter et al., Mental rotation and the frame of reference in blind and sighted individuals, Percept. Psychophys. (1978)
  • J. Castronovo et al., Numerical estimation in blind subjects: evidence of the impact of blindness and its following experience, J. Exp. Psychol. Hum. Percept. Perform. (2007)
  • J. Castronovo et al., Semantic numerical representation in blind subjects: the role of vision in the spatial format of the mental number line, Q. J. Exp. Psychol. (2007)
  • Z. Cattaneo et al., Supramodality effects in visual and haptic spatial processes, J. Exp. Psychol. Learn. Mem. Cogn. (2008)
  • W. Chen et al., Human primary visual cortex and lateral geniculate nucleus activation during visual imagery, Neuroreport (1998)
  • Q. Chen et al., Spatial and nonspatial peripheral auditory processing in congenitally blind people, Neuroreport (2006)
  • L.G. Cohen et al., Functional relevance of cross-modal plasticity in blind humans, Nature (1997)