primate vision
Recently Published Documents

TOTAL DOCUMENTS: 34 (FIVE YEARS: 10)
H-INDEX: 10 (FIVE YEARS: 1)

i-Perception, 2021, Vol 12 (6), pp. 204166952110626
Author(s): Gideon Paul Caplovitz

Retinal painting, anorthoscopic perception, and amodal completion are terms describing visual phenomena that highlight the spatiotemporal integrative mechanisms underlying primate vision. Although these phenomena are commonly studied using simplified, lab-friendly stimuli presented on a computer screen, this report describes observations made in a novel real-world context that highlight the rich contributions these mechanisms make to naturalistic vision.


2021, Vol 15
Author(s): Samuel Spiteri, David Crewther

The 21st century has seen dramatic changes in our understanding of the visual physio-perceptual anomalies of autism, and also of the structure and development of the primate visual system. This review covers the past 20 years of research into motion-perceptual/dorsal-stream anomalies in autism, as well as new understanding of the development of primate vision. The convergence of these literatures allows a novel developmental hypothesis to explain the physiological and perceptual differences of the broad autistic spectrum. Central to these observations is the development of motion area MT+, the seat of the dorsal cortical stream, a central area of pre-attentional processing, and an anchor of binocular vision for 3D action. Such development normally occurs via a transfer of thalamic drive from the inferior pulvinar → MT pathway to the anatomically stronger but later-developing LGN → V1 → MT connection. We propose that autistic variation arises from a slowing in the normal developmental attenuation of the pulvinar → MT pathway. We suggest that this slowing is caused by a hyperactive amygdala → thalamic reticular nucleus circuit that increases activity in the PIm → MT pathway via response gain modulation of the pulvinar, thereby altering synaptic competition in area MT. We explore the probable timing of the transfer in dominance of human MT from pulvinar to LGN/V1 driving circuitry and discuss the implications of the main hypothesis.
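The hypothesised slowing can be caricatured in a minimal toy model (an illustration only, not the authors' simulation; the decay constant, gain values, and functional forms are all assumptions): pulvinar drive to MT attenuates with development while geniculocortical drive strengthens, and a raised pulvinar response gain delays the crossover point.

```python
import math

def transfer_step(pulvinar_gain, tau=50.0, steps=1000):
    """First developmental step at which LGN -> V1 -> MT drive exceeds pulvinar -> MT drive."""
    for t in range(steps):
        drive_pulv = pulvinar_gain * math.exp(-t / tau)  # pulvinar route attenuates with age
        drive_lgn = 1.0 - math.exp(-t / tau)             # geniculocortical route strengthens
        if drive_lgn > drive_pulv:
            return t
    return steps

typical = transfer_step(pulvinar_gain=1.0)
# hyperactive amygdala -> TRN circuit modelled crudely as a doubled pulvinar response gain
delayed = transfer_step(pulvinar_gain=2.0)
```

Under these assumptions the crossover solves (1 + g)·e^(-t/τ) = 1, so any increase in the gain g pushes the transfer of dominance later, which is the qualitative content of the hypothesis.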


2021
Author(s): Kohitij Kar

Despite ample behavioral evidence of atypical facial emotion processing in individuals with autism (IwA), the neural underpinnings of such behavioral heterogeneities remain unclear. Here, I have used brain-tissue-mapped artificial neural network (ANN) models of primate vision to probe candidate neural and behavioral markers of atypical facial emotion recognition in IwA at an image-by-image level. Interestingly, the ANNs' image-level behavioral patterns better matched the neurotypical subjects' behavior than that measured in IwA. This behavioral mismatch was most remarkable when the ANN behavior was decoded from units corresponding to the primate inferior temporal (IT) cortex. ANN-IT responses also explained a significant fraction of the image-level behavioral predictivity associated with neural activity in the human amygdala, strongly suggesting that the previously reported facial emotion intensity encodings in the human amygdala could be primarily driven by projections from the IT cortex. Furthermore, in silico experiments revealed how learning under noisy sensory representations could lead to atypical facial emotion processing that better matches the image-level behavior observed in IwA. In sum, these results identify primate IT activity as a candidate neural marker and demonstrate how ANN models of vision can be used to generate neural circuit-level hypotheses and guide future human and non-human primate studies in autism.
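The decoding logic can be sketched with fully synthetic data (every array below is an assumption for illustration; none of it is the study's features or behavioral measurements): a linear decoder is fit from stand-in "ANN-IT" unit responses to per-image behavioral scores, and its image-level predictions are then correlated against each group's behavior.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
X = rng.normal(size=(n, d))                        # stand-in ANN-IT unit responses per image
w = rng.normal(size=d)
neurotypical = X @ w + 0.2 * rng.normal(size=n)    # behavior closely tracking the features
iwa = X @ w + 2.0 * rng.normal(size=n)             # noisier feature-behavior relationship

train, test = slice(0, 100), slice(100, 200)
# linear decoder from ANN-IT features to per-image behavioral scores
beta, *_ = np.linalg.lstsq(X[train], neurotypical[train], rcond=None)
pred = X[test] @ beta

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

match_nt = corr(pred, neurotypical[test])   # image-level match to neurotypical behavior
match_iwa = corr(pred, iwa[test])           # image-level match to IwA behavior
```

In this toy setup the decoder's image-level predictions correlate more strongly with the lower-noise group, mirroring the direction of the reported mismatch.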



Author(s): Satyabrat Malla Bujar Baruah, Deepsikha Nandi, Plabita Gogoi, Soumik Roy

2021, Vol 288 (1943), pp. 20202981
Author(s): Renske E. Onstein, Daphne N. Vink, Jorin Veen, Christopher D. Barratt, S. G. A. Flantua, ...

eLife, 2020, Vol 9
Author(s): Marin Dujmović, Gaurav Malhotra, Jeffrey S Bowers

Deep convolutional neural networks (DCNNs) are frequently described as the best current models of human and primate vision. An obvious challenge to this claim is the existence of adversarial images that fool DCNNs but are uninterpretable to humans. However, recent research has suggested that there may be similarities in how humans and DCNNs interpret these seemingly nonsense images. We reanalysed data from a high-profile paper and conducted five experiments controlling for different ways in which these images can be generated and selected. We show human-DCNN agreement is much weaker and more variable than previously reported, and that the weak agreement is contingent on the choice of adversarial images and the design of the experiment. Indeed, we find there are well-known methods of generating images for which humans show no agreement with DCNNs. We conclude that adversarial images still pose a challenge to theorists using DCNNs as models of human vision.
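A minimal sketch of how such fooling images can be produced (a generic gradient-ascent recipe applied to a toy linear classifier, not the specific method used in the reanalysed study): starting from near-blank noise, the optimiser drives one class score arbitrarily high even though the resulting "image" carries no human-interpretable structure.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_classes = 64, 5
W = rng.normal(size=(n_classes, n_pixels))   # fixed, stand-in "trained" classifier weights

def logits(x):
    return W @ x

x = 0.01 * rng.normal(size=n_pixels)         # start from near-blank noise
target = 3
before = logits(x)[target]
for _ in range(100):
    x += 0.1 * W[target]                     # gradient of the target logit w.r.t. the image
    x = np.clip(x, -1.0, 1.0)                # keep pixel values in a valid range
after = logits(x)[target]                    # target class now dominates
```

For a deep network the gradient step would come from backpropagation rather than a weight row, but the logic (ascend a class score, stay inside the valid pixel range) is the same.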


2020
Author(s): Marin Dujmović, Gaurav Malhotra, Jeffrey Bowers

Deep convolutional neural networks (DCNNs) are frequently described as promising models of human and primate vision. An obvious challenge to this claim is the existence of adversarial images that fool DCNNs but are uninterpretable to humans. However, recent research has suggested that there may be similarities in how humans and DCNNs interpret these seemingly nonsense images. In this study, we reanalysed data from a high-profile paper and conducted four experiments controlling for different ways in which these images can be generated and selected. We show that agreement between humans and DCNNs is much weaker and more variable than previously reported, and that the weak agreement is contingent on the choice of adversarial images and the design of the experiment. Indeed, it is easy to generate images with no agreement. We conclude that adversarial images still challenge the claim that DCNNs constitute promising models of human and primate vision.


eLife, 2020, Vol 9
Author(s): Todd R Appleby, Michael B Manookin

To efficiently navigate through the environment and avoid potential threats, an animal must quickly detect the motion of approaching objects. Current models of primate vision place the origins of this complex computation in the visual cortex. Here, we report that detection of approaching motion begins in the retina. Several ganglion cell types, the retinal output neurons, show selectivity to approaching motion. Synaptic current recordings from these cells further reveal that this preference for approaching motion arises in the interplay between presynaptic excitatory and inhibitory circuit elements. These findings demonstrate how excitatory and inhibitory circuits interact to mediate an ethologically relevant neural function. Moreover, the elementary computations that detect approaching motion begin early in the visual stream of primates.
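The excitatory-inhibitory interplay described above can be caricatured in a few lines (a toy looming detector, an assumption for illustration rather than the recorded retinal circuit): excitation follows the rectified frame-to-frame expansion of an object's image, inhibition carries the same signal with a short delay, and only the accelerating expansion of an approaching object lets excitation escape inhibition.

```python
import numpy as np

def net_response(sizes, delay=3):
    """Summed rectified output of a unit driven by fast excitation minus delayed inhibition."""
    growth = np.maximum(np.diff(sizes), 0.0)   # rectified edge-expansion signal per frame
    exc = growth[delay:]                       # fast excitatory drive
    inh = growth[:-delay]                      # the same drive, delayed, acting as inhibition
    return float(np.sum(np.maximum(exc - inh, 0.0)))

t = np.arange(40)
approaching = 1.0 / (40.5 - t)   # angular size grows ever faster as the object nears
receding = approaching[::-1]     # the same trajectory played backwards

approach_resp = net_response(approaching)
recede_resp = net_response(receding)
```

A receding object only shrinks, so the rectified expansion signal is silent and the unit does not fire; an approaching object expands at an accelerating rate, so current excitation always exceeds the delayed inhibitory copy.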


2019
Author(s): Kamila Maria Jozwik, Martin Schrimpf, Nancy Kanwisher, James J. DiCarlo

Specific deep artificial neural networks (ANNs) are the current best models of ventral visual processing and object recognition behavior in monkeys. Here we explore whether models of non-human primate vision generalize to visual processing in the human primate brain. Specifically, we asked whether a model's match to monkey IT predicts its match to human IT, even when those matches are scored on different images. We found that the model match to monkey IT is a positive predictor of the model match to human IT (R = 0.36), and that this approach outperforms the current standard predictor, model accuracy on ImageNet. This suggests a more powerful approach for pre-selecting models as hypotheses of human brain processing.
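The pre-selection logic amounts to correlating two per-model score vectors and ranking by the cheaper one; with synthetic scores (an illustrative assumption, not the paper's actual benchmark values):

```python
import numpy as np

rng = np.random.default_rng(2)
n_models = 30
monkey_it_match = rng.uniform(0.2, 0.8, n_models)            # stand-in neural-fit scores
# assume the human-IT match shares signal with the monkey-IT match plus model-specific noise
human_it_match = 0.5 * monkey_it_match + 0.1 * rng.normal(size=n_models)

# Pearson R between the two per-model score vectors
r = float(np.corrcoef(monkey_it_match, human_it_match)[0, 1])

# candidate models would be carried forward in order of their monkey-IT match
ranking = np.argsort(monkey_it_match)[::-1]
```

Any positive R means ranking models by their monkey-IT match enriches the shortlist for models that also fit human IT, which is the pre-selection strategy the abstract argues for.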

