Is the Richness of Our Visual World an Illusion? Transsaccadic Memory for Complex Scenes

Perception ◽  
1995 ◽  
Vol 24 (9) ◽  
pp. 1075-1081 ◽  
Author(s):  
Susan J Blackmore ◽  
Gavin Brelstaff ◽  
Kay Nelson ◽  
Tom Trościanko

Our construction of a stable visual world, despite the presence of saccades, is discussed. A computer-graphics method was used to explore transsaccadic memory for complex images. Images of real-life scenes were presented under four conditions: they stayed still or moved in an unpredictable direction (forcing an eye movement), while simultaneously changing or staying the same. Changes were the appearance, disappearance, or rotation of an object in the scene. Subjects detected the changes easily when the image did not move, but when it moved their performance fell to chance. A grey-out period was introduced to mimic that which occurs during a saccade. This also reduced performance, but not to chance levels. These results reveal the poverty of transsaccadic memory for real-life complex scenes. They are discussed with respect to Dennett's view that much less information is available in vision than our subjective impression leads us to believe. Our stable visual world may be constructed out of a brief retinal image and a very sketchy, higher-level representation along with a pop-out mechanism to redirect attention. The richness of our visual world is, to this extent, an illusion.

Physiology ◽  
2001 ◽  
Vol 16 (5) ◽  
pp. 234-238 ◽  
Author(s):  
Bernhard J. M. Hess

The central vestibular system receives afferent information about head position as well as rotation and translation. This information is used to prevent blurring of the retinal image but also to control self-orientation and motion in space. Vestibular signal processing in the brain stem appears to be linked to an internal model of head motion in space.


2014 ◽  
Vol 998-999 ◽  
pp. 806-813
Author(s):  
Jian Wang ◽  
Qing Xu

Realistic image synthesis is an important part of computer graphics. Monte Carlo light-simulation methods, such as Monte Carlo path tracing, can handle complex lighting computations for complex scenes in the field of realistic image synthesis. Unfortunately, if too few samples are taken for each pixel, the generated images contain considerable random noise. Adaptive sampling is an attractive way to reduce image noise. This paper proposes a new GH-distance-based adaptive sampling algorithm. Experimental results show that the method performs better than other, similar ones.
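The general idea behind adaptive sampling can be illustrated with a minimal sketch: take a small base number of samples per pixel, then keep sampling only where the noise estimate is still high. The sketch below uses a simple standard-error criterion as a generic stand-in for the paper's GH-distance metric (whose definition is not given in the abstract); the scene function, thresholds, and sample counts are all hypothetical.

```python
import random
import statistics

def radiance_sample(pixel, rng):
    """Stand-in for one Monte Carlo path-tracing sample at a pixel.
    A real renderer would trace a light path; here we return a noisy
    value around a pretend per-pixel scene brightness."""
    base = 0.5 + 0.1 * pixel
    return base + rng.gauss(0.0, 0.2)

def adaptive_render(pixels, min_samples=8, max_samples=64,
                    noise_tol=0.02, seed=0):
    """Take min_samples everywhere, then keep sampling only those pixels
    whose standard error of the mean still exceeds noise_tol."""
    rng = random.Random(seed)
    image = {}
    for p in pixels:
        samples = [radiance_sample(p, rng) for _ in range(min_samples)]
        while len(samples) < max_samples:
            sem = statistics.stdev(samples) / len(samples) ** 0.5
            if sem <= noise_tol:  # pixel estimate is clean enough
                break
            samples.append(radiance_sample(p, rng))
        image[p] = statistics.fmean(samples)
    return image
```

The design point is that the sample budget concentrates on noisy pixels instead of being spread uniformly, which is what any adaptive criterion, GH-distance included, is meant to achieve.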


Biosensors ◽  
2021 ◽  
Vol 11 (9) ◽  
pp. 343
Author(s):  
Chin-Teng Lin ◽  
Wei-Ling Jiang ◽  
Sheng-Fu Chen ◽  
Kuan-Chih Huang ◽  
Lun-De Liao

In the assistive research area, human–computer interface (HCI) technology is used to help people with disabilities by conveying their intentions and thoughts to the outside world. Many HCI systems based on eye movement have been proposed to assist people with disabilities. However, due to the complexity of the necessary algorithms and the difficulty of hardware implementation, there are few general-purpose designs that consider practicality and stability in real life. Therefore, to solve these limitations and problems, an HCI system based on electrooculography (EOG) is proposed in this study. The proposed classification algorithm provides eye-state detection, including the fixation, saccade, and blinking states. Moreover, this algorithm can distinguish among ten kinds of saccade movements (i.e., up, down, left, right, farther left, farther right, up-left, down-left, up-right, and down-right). In addition, we developed an HCI system based on an eye-movement classification algorithm. This system provides an eye-dialing interface that can be used to improve the lives of people with disabilities. The results illustrate the good performance of the proposed classification algorithm. Moreover, the EOG-based system, which can detect ten different eye-movement features, can be utilized in real-life applications.
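The ten saccade classes listed in the abstract can be separated, in principle, from the horizontal and vertical EOG deflection amplitudes alone. The sketch below is a hypothetical illustration of that decision logic, not the authors' algorithm: the thresholds, units, and sign conventions (positive = right/up) are all assumptions.

```python
def classify_saccade(h, v, small=1.0, large=2.0):
    """Map horizontal (h) and vertical (v) EOG deflection amplitudes
    (arbitrary units) to one of the ten saccade classes, or 'fixation'
    when neither channel exceeds the small-deflection threshold."""
    horiz = "right" if h > 0 else "left"
    vert = "up" if v > 0 else "down"
    if abs(h) >= small and abs(v) >= small:   # oblique saccade
        return f"{vert}-{horiz}"
    if abs(h) >= small:                       # purely horizontal
        return f"farther {horiz}" if abs(h) >= large else horiz
    if abs(v) >= small:                       # purely vertical
        return vert
    return "fixation"
```

A real system would first detect saccade onsets against blinks and drift; this sketch covers only the final direction-labeling step.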


1998 ◽  
Vol 10 (4) ◽  
pp. 464-471 ◽  
Author(s):  
Thomas Haarmeier ◽  
Peter Thier

It is usually held that perceptual spatial stability, despite smooth pursuit eye movements, is accomplished by comparing a signal reflecting retinal image slip with an internal reference signal encoding the eye movement. The important consequence of this concept is that our subjective percept of visual motion reflects the outcome of this comparison rather than retinal image slip itself. In an attempt to localize the cortical networks underlying this comparison, and therefore our subjective percept of visual motion, we exploited an imperfection inherent in it, which results in a movement illusion. If smooth pursuit is carried out across a stationary background, we perceive a tiny degree of illusory background motion (the Filehne illusion, or FI), rather than experiencing the ecologically optimal percept of stationarity. We have recently shown that this illusion can be modified substantially and predictably under laboratory conditions by visual motion unrelated to the eye movement. By making use of this finding, we were able to compare cortical potentials evoked by pursuit-induced retinal image slip under two conditions that differed perceptually while being physically identical. This approach allowed us to discern a pair of potentials, a parieto-occipital negativity (N300) followed by a frontal positivity (P300), whose amplitudes were determined solely by the subjective perception of visual motion, irrespective of the physical attributes of the situation. This finding strongly suggests that subjective awareness of visual motion depends on neuronal activity in a parieto-occipito-frontal network that excludes the early stages of visual processing.
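The comparison of retinal slip with an internal reference signal is often summarized by a simple gain model, under which the Filehne illusion follows from a reference-signal gain slightly below 1. The sketch below is the textbook formulation of that model, not necessarily the authors' own; the gain value is illustrative.

```python
def perceived_background_velocity(eye_velocity, reference_gain):
    """Gain model of pursuit-based motion perception. Pursuing a target
    at eye_velocity over a stationary background yields retinal slip of
    -eye_velocity; the brain adds reference_gain * eye_velocity (its
    internal estimate of the eye movement) to recover world motion.
    A gain of exactly 1 yields perceived stationarity; a gain below 1
    leaves residual illusory motion opposite to the pursuit direction."""
    retinal_slip = -eye_velocity
    return retinal_slip + reference_gain * eye_velocity
```

With a gain of 0.9 and 10 deg/s pursuit, the model predicts 1 deg/s of illusory background motion against the pursuit direction, which is the qualitative signature of the FI.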


Author(s):  
Fiona Mulvey

This chapter introduces the basics of eye anatomy, eye movements and vision. It will explain the concepts behind human vision sufficiently for the reader to understand later chapters in the book on human perception and attention, and their relationship to (and potential measurement with) eye movements. We will first describe the path of light from the environment through the structures of the eye and on to the brain, as an introduction to the physiology of vision. We will then describe the image registered by the eye, and the types of movements the eye makes in order to perceive the environment as a coherent whole. This chapter explains how eye movements can be thought of as the interface between the visual world and the brain, and why eye movement data can be analysed not only in terms of the environment, or what is looked at, but also in terms of the brain, or subjective cognitive and emotional states. These two aspects broadly define the scope and applicability of eye movement technology in research and in human–computer interaction in later sections of the book.


2017 ◽  
Vol 117 (2) ◽  
pp. 808-817 ◽  
Author(s):  
Kyriaki Mikellidou ◽  
Marco Turi ◽  
David C. Burr

Humans maintain a stable representation of the visual world effortlessly, despite constant movements of the eyes, head, and body, across multiple planes. Whereas visual stability in the face of saccadic eye movements has been intensely researched, fewer studies have investigated retinal image transformations induced by head movements, especially in the frontal plane. Unlike head rotations in the horizontal and sagittal planes, tilting the head in the frontal plane is only partially counteracted by torsional eye movements and consequently induces a distortion of the retinal image to which we seem to be completely oblivious. One possible mechanism aiding perceptual stability is an active reconstruction of a spatiotopic map of the visual world, anchored in allocentric coordinates. To explore this possibility, we measured the positional motion aftereffect (PMAE; the apparent change in position after adaptation to motion) with head tilts of ∼42° between adaptation and test (to dissociate retinal from allocentric coordinates). The aftereffect was shown to have both a retinotopic and spatiotopic component. When tested with unpatterned Gaussian blobs rather than sinusoidal grating stimuli, the retinotopic component was greatly reduced, whereas the spatiotopic component remained. The results suggest that perceptual stability may be maintained at least partially through mechanisms involving spatiotopic coding.

NEW & NOTEWORTHY: Given that spatiotopic coding could play a key role in maintaining visual stability, we look for evidence of spatiotopic coding after retinal image transformations caused by head tilt. To this end, we measure the strength of the positional motion aftereffect (PMAE; previously shown to be largely spatiotopic after saccades) after large head tilts. We find that, as with eye movements, the spatial selectivity of the PMAE has a large spatiotopic component after head rotation.

