Looming motion primes the visuomotor system.

2014 ◽  
Vol 40 (2) ◽  
pp. 566-579 ◽  
Author(s):  
Paul A. Skarratt ◽  
Angus R. H. Gellatly ◽  
Geoff G. Cole ◽  
Michael Pilling ◽  
Johan Hulleman
1992 ◽  
Vol 337 (1281) ◽  
pp. 283-294 ◽  

Airborne insects are miniature wing-flapping aircraft whose visually guided manoeuvres depend on analogue, ‘fly-by-wire’ controls. The front end of their visuomotor system consists of a pair of compound eyes, which are masterpieces of integrated optics and neural design. They rely on an array of passive sensors driving an orderly analogue neural network. We explored in concrete terms how motion-detecting neurons might be used to solve navigational tasks involving obstacle avoidance in a creature whose wings are exquisitely guided by eyes with poor spatial resolution. We designed, simulated, and built a complete terrestrial creature which moves about and avoids obstacles solely by evaluating the relative motion between itself and the environment. The compound eye uses an array of elementary motion detectors (EMDs) as smart, passive ranging sensors. Like its physiological counterpart, the visuomotor system is based on analogue, continuous-time processing and does not make use of conventional computers. It uses hardly any memory to adjust the robot’s heading in real time via a local and intermittent visuomotor feedback loop. This paper shows that the understanding of some invertebrate sensory-motor systems has now reached a level at which it can provide valuable design hints. Our approach brings into prominence the mutual constraints in the designs of a sensory and a motor system, in both living and non-living ambulatory creatures.
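
Elementary motion detectors of the kind mentioned above are commonly modelled as Hassenstein-Reichardt delay-and-correlate units. The following is a rough, discrete-time Python sketch of that textbook model only, not of the robot's analogue circuitry; the sampling rate, delay, and test stimulus are assumptions chosen for illustration.

```python
import numpy as np

def reichardt_emd(left, right, delay=50):
    """Discrete-time Hassenstein-Reichardt correlator (illustrative sketch).

    left, right : 1-D arrays of photoreceptor signals sampled over time
    delay       : delay in samples applied to the crossed channel, standing in
                  for the low-pass filter of the biological detector

    Returns a signed motion signal: on average positive for motion from the
    'left' receptor towards the 'right' one, negative for the reverse.
    """
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    left_delayed = np.concatenate([np.zeros(delay), left[:-delay]])
    right_delayed = np.concatenate([np.zeros(delay), right[:-delay]])
    # Each half-detector correlates a delayed channel with the opposite
    # undelayed one; their difference is direction selective.
    return left_delayed * right - right_delayed * left

# Hypothetical stimulus: a 5 Hz luminance modulation reaching the right
# receptor a quarter-period after the left one (i.e. left-to-right motion).
t = np.arange(0.0, 2.0, 0.001)                  # 2 s sampled at 1 kHz (assumed)
left_signal = 1 + np.sin(2 * np.pi * 5 * t)
right_signal = 1 + np.sin(2 * np.pi * 5 * (t - 0.05))
print(np.mean(reichardt_emd(left_signal, right_signal)))   # positive output
```

For a given forward speed, nearer obstacles sweep across the eye faster and drive such detectors harder, which is what allows an array of EMDs to serve as the passive ranging front end described in the abstract.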


Author(s):  
Matthew Heath ◽  
Kristina A. Neely ◽  
Olav Krigolson ◽  
Gordon Binsted

2010 ◽  
Vol 103 (4) ◽  
pp. 2114-2123 ◽  
Author(s):  
Stephen A. Coombes ◽  
Daniel M. Corcos ◽  
Lisa Sprute ◽  
David E. Vaillancourt

When humans perform movements and receive on-line visual feedback about their performance, the spatial qualities of the visual information alter performance. The spatial qualities of visual information can be altered by manipulating visual gain, and changes in visual gain lead to changes in force error. The current study used functional magnetic resonance imaging during a steady-state precision grip force task to examine how cortical and subcortical brain activity change with visual gain-induced changes in force error. Small increases in visual gain (<1°) were associated with a substantial reduction in force error and a small increase in the spatial amplitude of visual feedback. These behavioral effects corresponded with an increase in activation bilaterally in V3 and V5 and in left primary motor cortex and left ventral premotor cortex. Large increases in visual gain (>1°) were associated with a small change in force error and a large change in the spatial amplitude of visual feedback. These behavioral effects corresponded with increased activity bilaterally in dorsal and ventral premotor areas and in right inferior parietal lobule. Finally, activity in left and right lobule VI of the cerebellum and in left and right putamen did not change with increases in visual gain. Together, these findings demonstrate that the visuomotor system does not respond uniformly to changes in the gain of visual feedback. Instead, specific regions of the visuomotor system selectively change in activity in relation to large changes in force error and large changes in the spatial amplitude of visual feedback.
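
In this paradigm, visual gain determines how much on-screen displacement, in degrees of visual angle, a given force deviation produces. The sketch below is a hypothetical illustration of that display-side scaling only; it does not model the behavioral effect of gain on force error, and the target force, gain values, and units are assumed rather than taken from the study.

```python
import numpy as np

def displayed_feedback(force, target_force, gain_deg_per_unit):
    """Map a force trace onto on-screen cursor displacement (illustrative).

    force             : array of produced force samples (arbitrary units)
    target_force      : constant force level the participant must hold
    gain_deg_per_unit : visual gain, i.e. degrees of visual angle displayed
                        per unit of force deviation (assumed parameterisation)

    Returns the cursor's displacement from the target line in degrees.
    """
    return gain_deg_per_unit * (np.asarray(force, dtype=float) - target_force)

# Hypothetical steady-state force trace with small fluctuations around a
# target of 10 units; the same physical trace is shown at a low and a high
# visual gain, so only the displayed spatial amplitude differs.
rng = np.random.default_rng(0)
force = 10.0 + 0.2 * rng.standard_normal(1000)

for gain in (0.5, 4.0):               # assumed "low" (<1 deg) and "high" gain
    cursor = displayed_feedback(force, target_force=10.0, gain_deg_per_unit=gain)
    print(f"gain {gain:.1f} deg/unit -> displayed amplitude (SD) = {cursor.std():.2f} deg")
```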


2001 ◽  
Vol 7 (3) ◽  
pp. 334-343 ◽  
Author(s):  
Louise A. Corben ◽  
Jason B. Mattingley ◽  
John L. Bradshaw

Patients with left spatial neglect following right hemisphere damage may show anomalies in ipsilesional-limb movements directed to targets on their affected side, in addition to their characteristic perceptual deficits. In this study we examined the extent to which visually guided movements made by neglect patients are susceptible to interference from concurrent visual distractors on the contralesional or ipsilesional side of a designated target. Eleven right hemisphere patients with visual neglect, plus 11 matched healthy controls, performed a double-step movement task on a digitizing tablet, using their ipsilesional hand to respond. On each double-step trial the first component of the movement was cued to a common central target, whereas the second component was cued unpredictably to a target on either the contralesional or ipsilesional side. On separate trials lateral targets either appeared alone or together with a concurrent distractor in a homologous location in the opposite hemispace. In addition to being significantly slower and more error prone than controls, neglect patients also exhibited a number of interference effects from ipsilesional distractors. They often failed to move to left targets in the presence of a right-sided distractor, or else they moved to the distractor itself rather than to a contralesional target. The initial accelerative phase of their movements to contralesional targets tended to be interrupted prematurely, and they spent significantly more time in the terminal guidance phase of movements to contralesional targets in the presence of an ipsilesional distractor. In contrast, contralesional distractors had little effect on patients' movements to ipsilesional targets. We conclude that right hemisphere damage induces a competitive bias that favors actions to ipsilesional targets. This bias affects multiple stages of processing within the visuomotor system, from initial programming through to the final stages of terminal guidance. (JINS, 2001, 7, 334–343)
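
The kinematic measures reported here (the initial accelerative phase and the time spent in terminal guidance) are typically derived from each reach's velocity profile. The sketch below shows one common, but assumed, way of segmenting a movement into such phases; the phase definitions, onset threshold, and simulated trajectory are illustrative and not taken from the paper.

```python
import numpy as np

def movement_phases(position, dt):
    """Segment a 1-D reach into kinematic phases (illustrative sketch).

    The paper's exact phase definitions are not given in the abstract; here the
    accelerative phase runs from movement onset to peak velocity, and the
    terminal guidance phase from peak deceleration to movement end, a common
    but assumed convention. Onset uses a simple 5%-of-peak-speed threshold.
    """
    velocity = np.gradient(position, dt)
    acceleration = np.gradient(velocity, dt)

    speed = np.abs(velocity)
    onset = int(np.argmax(speed > 0.05 * speed.max()))
    t_peak_vel = int(np.argmax(speed))
    t_peak_dec = t_peak_vel + int(np.argmin(acceleration[t_peak_vel:]))

    return {
        "accelerative_phase_ms": (t_peak_vel - onset) * dt * 1000,
        "terminal_guidance_ms": (len(position) - 1 - t_peak_dec) * dt * 1000,
    }

# Hypothetical 300 mm minimum-jerk-like reach sampled every 10 ms (600 ms long).
tau = np.linspace(0.0, 1.0, 61)                 # normalised movement time
position = 300 * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)
print(movement_phases(position, dt=0.01))
```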


2015 ◽  
Vol 114 (4) ◽  
pp. 2242-2248 ◽  
Author(s):  
Chiara Bozzacchi ◽  
Fulvio Domini

Recent studies on visuomotor processes using virtual setups have suggested that actions are affected by biases similar to those found in perceptual tasks. In particular, a strong lack of depth constancy is revealed, resembling biases in perceptual estimates of relative depth. With this study we aimed to understand whether these findings are mostly caused by a lack of metric accuracy of the visuomotor system or by the limited cues provided by the use of virtual reality. We addressed this issue by comparing grasping movements towards a spherical object located at four distances (420, 450, 480, and 510 mm) performed in three conditions: 1) virtual, in which the target was a virtual object defined by binocular cues; 2) glow-in-the-dark, in which the object was painted with luminous paint but no other cue was provided; and 3) full-cue, in which the movement was performed with the lights on and all environmental information was available. Results revealed a striking effect of object distance on grip aperture that was equivalent across all three conditions. Specifically, grip aperture gradually decreased with increasing object distance, demonstrating a consistent lack of depth constancy. These findings clearly show that systematic biases in grasping actions are not induced by the use of virtual environments and that action and perception may involve the same visual information, which does not engage a metric reconstruction of the scene.
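
Depth constancy would predict a maximum grip aperture matched to the constant object size at all four viewing distances, whereas the reported bias is a systematic shrinkage of aperture with distance. A minimal sketch of how such a bias might be quantified, by fitting the slope of grip aperture against object distance in each condition, is shown below; the aperture values are invented purely for illustration and are not the study's data.

```python
import numpy as np

# Object distances used in the study (mm); the grip apertures below are
# hypothetical values that merely mimic the reported pattern of a gradual
# decrease with distance.
distances = np.array([420.0, 450.0, 480.0, 510.0])
grip_aperture = {
    "virtual":      np.array([78.0, 75.5, 73.2, 70.9]),
    "glow_in_dark": np.array([77.4, 75.0, 72.8, 70.6]),
    "full_cue":     np.array([78.2, 75.8, 73.1, 71.0]),
}

for condition, aperture in grip_aperture.items():
    # Depth constancy predicts a slope near zero; a reliably negative slope
    # reflects the shrinking grip aperture described in the abstract.
    slope, intercept = np.polyfit(distances, aperture, deg=1)
    print(f"{condition:>12}: slope = {slope:.3f} mm of aperture per mm of distance")
```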


1979 ◽  
Vol 17 (3-4) ◽  
pp. 281-294 ◽  
Author(s):  
B.A. Karpov ◽  
Ya.A. Meerson ◽  
I.M. Tonkonogii

2008 ◽  
Vol 39 (01) ◽  
Author(s):  
V Wenkeler ◽  
T Hassa ◽  
O Tüscher ◽  
C Weiller ◽  
C Dettmers
