Depth Illusion by Delayed 3-D Perception (‘Delayed Stereopsis Illusion’): A Novel Way to Determine Computation Times in Human Vision by Depth Reversal in Partially Occluded Moving Objects

Perception ◽  
1996 ◽  
Vol 25 (1_suppl) ◽  
pp. 181-181 ◽  
Author(s):  
R Wolf ◽  
M Schuchardt ◽  
R Rosenzweig

Viewed through depth-reversing spectacles, nontransparent objects appear to cut ‘gaps’ into a patterned background. In moving objects this gap is seen to extend beyond the occluded area (‘delayed stereopsis illusion’, DSI): its trailing border appears to lag behind by a precisely measurable distance, indicating a processing time of approximately 0.13 s to accomplish stereopsis [cf Morgan and Castet, 1995 Nature (London) 378 380–383]. Unlike in thigmaesthesia, there is no correction by antedating. Why is this delay not perceived in normal stereopsis? If an object is moving in front of some background, the background usually maintains its position; it may be occluded, or not. Depth information thus might be extrapolated to the continuously uncovered regions of the patterned background. Depth reversal demands that the occluded region of the background jump behind the moving, occluding object. As this object is perceived to retain its distance, the background, as it is uncovered, must jump back into the foreground, where it can be perceived only after renewed calculation of binocular depth. The dependence of DSI on eye movements, disparity, velocity, motion direction, surface texture, illuminance, spatial frequency, and fractal dimension of the objects involved is currently being investigated in model systems which allow us to determine processing times of human stereopsis under well-defined conditions.
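
The ~0.13 s figure follows from dividing the measured spatial lag of the trailing border by the velocity of the occluding object. A back-of-envelope sketch of that conversion, using illustrative numbers rather than the values actually measured in the study:

```python
# Minimal sketch: converting the DSI lag into a processing-time estimate.
# lag_deg and velocity_deg_s are illustrative assumptions, not measured data.
lag_deg = 1.3          # apparent lag of the trailing border, degrees of visual angle
velocity_deg_s = 10.0  # velocity of the occluding object, degrees per second

processing_time_s = lag_deg / velocity_deg_s
print(f"estimated stereopsis processing time: {processing_time_s:.2f} s")  # -> 0.13 s
```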

2020 ◽  
Author(s):  
MB Maina ◽  
U Ahmad ◽  
HA Ibrahim ◽  
SK Hamidu ◽  
FE Nasr ◽  
...  

Understanding the function and dysfunction of the brain remains one of the key challenges of our time. However, an overwhelming majority of brain research is carried out in the Global North, by a minority of well-funded and intimately interconnected labs. In contrast, with an estimated one neuroscientist per million people in Africa, news about neuroscience research from the Global South remains sparse. Clearly, devising new policies to boost Africa’s neuroscience landscape is imperative. However, such policy must be based on accurate data, which is largely lacking. That data must reflect the extreme heterogeneity of research outputs across the continent’s 54 countries, distributed over an area larger than the USA, Europe, and China combined. Here, we analysed all of Africa’s neuroscience output over the past 21 years. Uniquely, we individually verified for each of 12,326 publications that the work was indeed performed in Africa and led by African-based researchers. This step is critical: previous estimates grossly inflated the figures, because many of Africa’s high-visibility publications are in fact the result of internationally led collaborations, with most of the work done outside of Africa. The remaining number of African-led neuroscience publications was 5,219, on average only ~5 per country per year. From these, we extracted metrics such as journal and citation counts, as well as detailed information on funding, international collaborations, and the techniques and model systems used. We link these metrics to demographic data and indicators of mobility and economy. For reference, we also extracted the same metrics from 220 randomly selected publications each from the UK, USA, Australia, Japan and Brazil. Our unique dataset allows us to gain accurate, in-depth information on the current state of African neuroscience research and to put it into a global context. This in turn allows us to make actionable recommendations on how African research might best be supported in the future.
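
A quick arithmetic check of the “only ~5 per country per year” figure, using the numbers quoted in the abstract:

```python
# Check using the figures stated above: 5,219 African-led papers,
# 54 countries, 21 years of output.
african_led_publications = 5219
countries, years = 54, 21
print(african_led_publications / (countries * years))  # ~4.6, i.e. roughly 5
```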


2001 ◽  
Vol 13 (6) ◽  
pp. 1243-1253 ◽  
Author(s):  
Rajesh P. N. Rao ◽  
David M. Eagleman ◽  
Terrence J. Sejnowski

When a flash is aligned with a moving object, subjects perceive the flash to lag behind the moving object. Two different models have been proposed to explain this “flash-lag” effect. In the motion extrapolation model, the visual system extrapolates the location of the moving object to counteract neural propagation delays, whereas in the latency difference model, it is hypothesized that moving objects are processed and perceived more quickly than flashed objects. However, recent psychophysical experiments suggest that neither of these interpretations is feasible (Eagleman & Sejnowski, 2000a, 2000b, 2000c); they instead support the hypothesis that the visual system uses data from the future of an event before committing to an interpretation. We formalize this idea in terms of the statistical framework of optimal smoothing and show that a model based on smoothing accounts for the shape of psychometric curves from a flash-lag experiment involving random reversals of motion direction. The smoothing model demonstrates how the visual system may enhance perceptual accuracy by relying not only on data from the past but also on data collected from the immediate future of an event.
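
To make the smoothing idea concrete, the sketch below runs a 1-D constant-velocity Kalman filter forward over noisy position samples whose motion reverses direction, then applies a backward Rauch-Tung-Striebel pass so that each estimate also draws on later samples. This is a generic fixed-interval smoother with illustrative parameters, not the authors’ model; around the reversal the forward (extrapolating) estimate overshoots, while the smoothed estimate, informed by “future” samples, does not.

```python
# Minimal sketch of filtering vs. smoothing for a trajectory that reverses direction.
# All noise parameters and the trajectory are illustrative assumptions.
import numpy as np

def kalman_filter(z, dt=0.01, q=50.0, r=0.05):
    """Forward Kalman filter for the state [position, velocity]."""
    F = np.array([[1.0, dt], [0.0, 1.0]])            # constant-velocity dynamics
    H = np.array([[1.0, 0.0]])                        # position-only measurements
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])               # process-noise covariance
    R = np.array([[r]])                                # measurement-noise covariance
    n = len(z)
    x = np.zeros((n, 2)); P = np.zeros((n, 2, 2))      # filtered estimates
    xp = np.zeros((n, 2)); Pp = np.zeros((n, 2, 2))    # one-step predictions
    x_prev, P_prev = np.zeros(2), np.eye(2)
    for k in range(n):
        xp[k] = F @ x_prev; Pp[k] = F @ P_prev @ F.T + Q          # predict
        S = H @ Pp[k] @ H.T + R
        K = Pp[k] @ H.T @ np.linalg.inv(S)                         # Kalman gain
        x[k] = xp[k] + (K @ (z[k] - H @ xp[k])).ravel()            # update
        P[k] = (np.eye(2) - K @ H) @ Pp[k]
        x_prev, P_prev = x[k], P[k]
    return x, P, xp, Pp, F

def rts_smoother(x, P, xp, Pp, F):
    """Backward (Rauch-Tung-Striebel) pass: each estimate now uses future data too."""
    n = len(x)
    xs, Ps = x.copy(), P.copy()
    for k in range(n - 2, -1, -1):
        C = P[k] @ F.T @ np.linalg.inv(Pp[k + 1])
        xs[k] = x[k] + C @ (xs[k + 1] - xp[k + 1])
        Ps[k] = P[k] + C @ (Ps[k + 1] - Pp[k + 1]) @ C.T
    return xs, Ps

# Trajectory with a reversal of motion direction, as in the flash-lag experiment.
dt, n = 0.01, 200
t = np.arange(n) * dt
true_pos = np.where(t < 1.0, t, 2.0 - t)              # reverses at t = 1 s
z = (true_pos + 0.02 * np.random.randn(n)).reshape(-1, 1)

x_f, P_f, xp, Pp, F = kalman_filter(z, dt)
x_s, _ = rts_smoother(x_f, P_f, xp, Pp, F)
# Near the reversal, x_f (filtered/extrapolated) overshoots; x_s (smoothed) does not.
```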


2014 ◽  
Vol 21 (1) ◽  
pp. 27-34
Author(s):  
Bin Sun ◽  
Chaobo Min ◽  
Junju Zhang ◽  
Bengkang Chang ◽  
Yingjie Li ◽  
...  

2013 ◽  
Vol 385-386 ◽  
pp. 1509-1512
Author(s):  
Lian Li ◽  
Yong Peng Liu

Today’s image processing systems widely use standard-definition resolution, which is not sufficiently distinct; high definition (HD) and intelligence are gradually becoming the development trend for image acquisition and processing systems. Motion detection plays an important role in video surveillance systems, but the sign distribution features are covered up when the absolute differential image is used. In this article, a method is proposed to determine the motion direction of moving objects by using the sign distribution features in the differential image of two consecutive frames, and to extract the moving-object regions; the remaining parts, which stay still, form the background image. Transmission should be stopped if there is no moving object, which saves storage space and reduces the demand for network speed. Experimental results show that the method is suitable for computer processing.
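
A minimal sketch of the underlying idea, that the signed frame difference preserves direction information which the absolute difference discards, assuming OpenCV/NumPy and placeholder file names (this is not the authors’ implementation):

```python
# Minimal sketch: signed difference of two consecutive frames retains a crude
# motion-direction cue. File names and the threshold are illustrative assumptions.
import cv2
import numpy as np

prev = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE).astype(np.int16)
curr = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE).astype(np.int16)

diff = curr - prev                      # signed differential image
moving = np.abs(diff) > 25              # threshold picks out moving-object regions

# For a bright object on a darker background, newly covered pixels go positive
# and newly uncovered pixels go negative, so comparing the centroids of the two
# sign classes indicates the direction of motion.
ys_pos, xs_pos = np.nonzero(moving & (diff > 0))
ys_neg, xs_neg = np.nonzero(moving & (diff < 0))
if xs_pos.size and xs_neg.size:
    direction = np.array([xs_pos.mean() - xs_neg.mean(),
                          ys_pos.mean() - ys_neg.mean()])
    print("approximate motion direction (x, y):", direction)
else:
    print("no moving object detected; transmission could be paused")
```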


2021 ◽  
Author(s):  
Gvarami Labartkava

Human vision is a complex system which processes frames and retrieves information in real time while optimizing the use of memory, energy, and computational resources. It can be widely utilized in many real-world applications, from security systems to space missions. The research investigates fundamental principles of human vision and accordingly develops an FPGA-based video processing system with binocular vision, capable of high-performance, real-time tracking of moving objects in 3D space. The undertaken research and implementation consist of: 1. analysis of the concepts and methods of the human vision system; 2. development of a stereo and peripheral vision prototype on a system-on-programmable chip (SoPC) for multi-object motion detection and tracking; 3. verification, test runs, and analysis of the experimental results obtained on the prototype with respect to its performance constraints. The implemented system offers a platform for real-time applications that are constrained under current approaches.
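
As a small illustration of the binocular-depth relation such a stereo tracking pipeline ultimately relies on, depth scales as focal length times baseline divided by disparity; the camera parameters below are illustrative assumptions, not those of the implemented SoPC:

```python
# Minimal sketch of depth from binocular disparity: Z = f * B / d.
# focal_px and baseline_m are hypothetical camera parameters.
def depth_from_disparity(disparity_px: float,
                         focal_px: float = 700.0,
                         baseline_m: float = 0.12) -> float:
    """Return metric depth for a matched point pair; disparity must be positive."""
    if disparity_px <= 0:
        raise ValueError("point at infinity or bad stereo match")
    return focal_px * baseline_m / disparity_px

print(depth_from_disparity(21.0))   # ~4 m for the assumed camera geometry
```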


2019 ◽  
Author(s):  
Tatjana Seizova-Cajic ◽  
Sandra Ludvigsson ◽  
Birger Sourander ◽  
Melinda Popov ◽  
Janet L Taylor

An age-old hypothesis proposes that object motion across the receptor surface organizes sensory maps (Lotze, 19th century): skin patches learn their relative positions from the order in which they are stimulated during motion events. We test this idea by reversing the local motion within a 6-point apparent motion sequence along the forearm. In the ‘Scrambled’ sequence, the two middle locations were touched in reversed order (1-2-4-3-5-6, followed by 6-5-3-4-2-1, in a continuous loop). This created a local acceleration, a double U-turn, within otherwise constant-velocity motion, as if the physical locations of skin patches 3 and 4 had been surgically swapped. The control condition, ‘Orderly’, proceeded at constant velocity with an inter-stimulus onset interval (ISOI) of 120 ms. In the test, our twenty participants reported motion direction between the two middle tactors, presented on their own at 75, 120, or 190 ms ISOI. Results show degraded motion discrimination following exposure to the Scrambled pattern: for the 120-ms test stimulus, discrimination was 0.31 d' weaker than following Orderly conditioning (p = .007). This is the aftereffect we expected; its maximal expression would be a complete reversal of perceived motion direction between locations 3 and 4 for either motion direction. We propose that the somatosensory system was beginning to ‘correct’ the reversed local motion, uncurling and removing the U-turns that always occurred on the same part of the receptor surface. Such de-correlation between accelerations and their location on the sensory surface is one possible mechanism for the organization of sensory maps.
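
For concreteness, a short sketch of the two conditioning sequences and their onset schedules at the 120-ms ISOI (illustrative, not the authors’ stimulus code):

```python
# Minimal sketch of the Orderly and Scrambled conditioning sequences.
# The onset schedule simply spaces tactor onsets by one ISOI, forward then back.
ISOI = 0.120                                  # inter-stimulus onset interval, seconds

orderly   = [1, 2, 3, 4, 5, 6]                # constant-velocity sweep
scrambled = [1, 2, 4, 3, 5, 6]                # middle two locations swapped

def onset_schedule(sequence, isoi=ISOI):
    """Pair each tactor with its onset time, forward then reversed (one loop cycle)."""
    forward = [(tactor, k * isoi) for k, tactor in enumerate(sequence)]
    backward = [(tactor, (len(sequence) + k) * isoi)
                for k, tactor in enumerate(reversed(sequence))]
    return forward + backward

for tactor, t in onset_schedule(scrambled):
    print(f"t = {t:.3f} s -> tactor {tactor}")
```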


2019 ◽  
Vol 286 (1896) ◽  
pp. 20182045 ◽  
Author(s):  
Wendy J. Adams ◽  
Erich W. Graf ◽  
Matt Anderson

Many species employ camouflage to disguise their true shape and avoid detection or recognition. Disruptive coloration is a form of camouflage in which high-contrast patterns obscure internal features or break up an animal's outline. In particular, edge enhancement creates illusory, or ‘fake’ depth edges within the animal's body. Disruptive coloration often co-occurs with background matching, and together, these strategies make it difficult for an observer to visually segment an animal from its background. However, stereoscopic vision could provide a critical advantage in the arms race between perception and camouflage: the depth information provided by binocular disparities reveals the true three-dimensional layout of a scene, and might, therefore, help an observer to overcome the effects of disruptive coloration. Human observers located snake targets embedded in leafy backgrounds. We analysed performance (response time) as a function of edge enhancement, illumination conditions and the availability of binocular depth cues. We confirm that edge enhancement contributes to effective camouflage: observers were slower to find snakes whose patterning contains ‘fake’ depth edges. Importantly, however, this effect disappeared when binocular depth cues were available. Illumination also affected detection: under directional illumination, where both the leaves and snake produced strong cast shadows, snake targets were localized more quickly than in scenes rendered under ambient illumination. In summary, we show that illusory depth edges, created via disruptive coloration, help to conceal targets from human observers. However, cast shadows and binocular depth information improve detection by providing information about the true three-dimensional structure of a scene. Importantly, the strong interaction between disparity and edge enhancement suggests that stereoscopic vision has a critical role in breaking camouflage, enabling the observer to overcome the disruptive effects of edge enhancement.


Perception ◽  
1997 ◽  
Vol 26 (1_suppl) ◽  
pp. 36-36
Author(s):  
C von Pichler ◽  
S Fischer ◽  
K Radermacher ◽  
G Rau

Monocular video endoscopic systems are established in the clinical routine of surgical endoscopy. The introduction of 3-D video systems could improve visualisation of the intracorporal operating site because of the stereoscopic depth information. The goal of our investigations has been to quantify the influence of this visualisation technology on visual perception, on visually controlled endoscopic manipulations, and on intraoperative performance, including ergonomic and psychophysical aspects. These results are used to define guidelines for improvement and for the integration of such systems into clinical routine so as to achieve optimal support of the medical team. The comparison of 2-D and 3-D video endoscopic systems showed a general improvement in the performance of endoscopic procedures. However, 30% – 50% of the users had perceptive problems with 3-D endoscopy. To study the problems quantitatively, we compared stereoscopic visualisation with direct view of the same objects. The users with problems had insufficient binocular depth perception of stereoscopic images for visual discrimination tasks, although their depth perception of real objects was good. Analysis of their eye movements showed significant differences compared with those of users with good binocular depth perception of stereo images. In particular, there were differences in the relation between vergence movements and accommodation. When we compared visually guided manipulations under stereoscopic video viewing and under direct view, we found the overall manipulative performance of all users to be the same, although the users with problems showed lower performance in general. The experimental design and the results are discussed in detail.


1997 ◽  
Author(s):  
Ik Soo Choy ◽  
Yonggil Sin ◽  
Jong-An Park

Perception ◽  
1983 ◽  
Vol 12 (6) ◽  
pp. 707-717 ◽  
Author(s):  
Cynthia Owsley

Previous research has shown that infants as young as the first few months of life perceive several aspects of the three-dimensional environment. Yet we know relatively little about the visual depth information which serves as a basis for their spatial capacities. A study is reported in which a visual habituation procedure was used to examine what types of optical depth information four-month-old infants find useful in visually perceiving solid (three-dimensional) shape. Results imply that in the absence of binocular depth cues four-month-olds rely on kinetic depth information to perceive solid shape.

