Characterization of nonlinear receptive fields of visual neurons by convolutional neural network

2018
Author(s): Jumpei Ukita, Takashi Yoshida, Kenichi Ohki

Abstract: A comprehensive understanding of the stimulus-response properties of individual neurons is necessary to crack the neural code of sensory cortices. However, a barrier to achieving this goal is the difficulty of analyzing the nonlinearity of neuronal responses. In computer vision, artificial neural networks, especially convolutional neural networks (CNNs), have demonstrated state-of-the-art performance in image recognition by capturing the higher-order statistics of natural images. Here, we incorporated CNNs into encoding models of neurons in the visual cortex to develop a new method of nonlinear response characterization, in particular nonlinear estimation of receptive fields (RFs), without assumptions regarding the type of nonlinearity. Briefly, after training a CNN to predict the visual responses of neurons to natural images, we synthesized the RF image such that the image would predictively evoke a maximum response (the "maximization-of-activation" method). We first demonstrated proof of principle using a dataset of simulated cells with various types of nonlinearity, showing that the CNN could be used to estimate the nonlinear RF of simulated cells. In particular, we could visualize various types of nonlinearity underlying the responses, such as shift-invariant RFs or rotation-invariant RFs. These results suggest that the method may be applicable to neurons with complex nonlinearities, such as rotation-invariant neurons in higher visual areas. Next, we applied the method to a dataset of neurons in the mouse primary visual cortex (V1) whose responses to natural images were recorded via two-photon Ca2+ imaging. We could visualize shift-invariant RFs with Gabor-like shapes for some V1 neurons. By quantifying the degree of shift-invariance, each V1 neuron was classified as either a shift-variant (simple) cell or a shift-invariant (complex-like) cell, and these two types of neurons were not clustered in cortical space. These results suggest that the novel CNN encoding model is useful for nonlinear response analyses of visual neurons and potentially of any sensory neurons.
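The core of the maximization-of-activation idea can be sketched in a few lines. Below is a minimal, hypothetical PyTorch illustration (not the authors' code): assuming `model` is a CNN already trained to map a grayscale image to one neuron's predicted response, gradient ascent on the input image synthesizes an estimated RF, with a small L2 penalty standing in for the stronger regularization a real pipeline would need.

```python
# Hypothetical sketch of activation maximization for a trained encoding model.
import torch

def maximize_activation(model, img_size=64, steps=200, lr=0.05, l2=1e-3):
    model.eval()
    img = torch.randn(1, 1, img_size, img_size, requires_grad=True)
    opt = torch.optim.Adam([img], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        response = model(img).mean()               # predicted neural response
        loss = -response + l2 * img.pow(2).mean()  # maximize response, keep pixels bounded
        loss.backward()
        opt.step()
    return img.detach().squeeze()  # estimated nonlinear RF image
```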

2018
Author(s): J.J. Pattadkal, G. Mato, C. van Vreeswijk, N. J. Priebe, D. Hansel

Summary: We study the connectivity principles underlying the emergence of orientation selectivity in the primary visual cortex (V1) of mammals lacking an orientation map. We present a computational model in which random connectivity gives rise to orientation selectivity that matches experimental observations. The model predicts that mouse V1 neurons should exhibit intricate receptive fields in the two-dimensional frequency domain, causing shifts in orientation preference with spatial frequency. We find evidence for these features in mouse V1 using calcium imaging and intracellular whole-cell recordings.
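The predicted shift of orientation preference with spatial frequency can be probed directly in the frequency domain. The following numpy sketch (an illustration under our own assumptions, not the paper's model code) takes an estimated receptive-field map `rf` and reads out the best orientation at each spatial-frequency radius of its 2D amplitude spectrum; a systematic drift of preference across radii is the signature described above.

```python
# Assumed helper: preferred orientation as a function of spatial frequency.
import numpy as np

def orientation_vs_frequency(rf, n_radii=10):
    amp = np.abs(np.fft.fftshift(np.fft.fft2(rf)))
    h, w = amp.shape
    fy, fx = np.indices((h, w)) - np.array([[[h // 2]], [[w // 2]]])
    radius = np.hypot(fx, fy)
    theta = np.mod(np.degrees(np.arctan2(fy, fx)), 180.0)  # orientation, 0-180 deg
    prefs = []
    for r in np.linspace(2, radius.max() / 2, n_radii):
        ring = (radius > r - 1) & (radius <= r + 1)       # annulus at this frequency
        if ring.any():
            prefs.append((r, theta[ring][np.argmax(amp[ring])]))
    return prefs  # [(spatial-frequency radius, preferred orientation), ...]
```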


2019
Vol 121 (6), pp. 2202-2214
Author(s): John P. McClure, Pierre-Olivier Polack

Multimodal sensory integration facilitates the generation of a unified and coherent perception of the environment. It is now well established that unimodal sensory perceptions, such as vision, are improved in multisensory contexts. Whereas multimodal integration is primarily performed by dedicated multisensory brain regions such as the association cortices or the superior colliculus, recent studies have shown that multisensory interactions also occur in primary sensory cortices. In particular, sounds have been shown to modulate the responses of neurons located in layers 2/3 (L2/3) of the mouse primary visual cortex (V1). Yet, the net effect of sound modulation at the V1 population level remained unclear. In the present study, we performed two-photon calcium imaging in awake mice to compare the representation of the orientation and direction of drifting gratings by V1 L2/3 neurons in unimodal (visual only) and multimodal (audiovisual) conditions. We found that sound modulation depended on the tuning properties (orientation and direction selectivity) and response amplitudes of V1 L2/3 neurons. Sounds potentiated the responses of neurons that were highly tuned to the cue's orientation and direction but weakly active in the unimodal context, following the principle of inverse effectiveness of multimodal integration. Moreover, sound suppressed the responses of neurons untuned for the orientation and/or direction of the visual cue. Altogether, sound modulation improved the representation of the orientation and direction of the visual stimulus in V1 L2/3: visual stimuli presented with auditory stimuli recruited a neuronal population better tuned to the visual stimulus orientation and direction than when presented alone.

NEW & NOTEWORTHY: The primary visual cortex (V1) receives direct inputs from the primary auditory cortex. Yet, the impact of sounds on visual processing in V1 remains controversial. We show that the modulation of V1 visual responses by pure tones depends on the orientation selectivity, direction selectivity, and response amplitudes of V1 neurons. Hence, audiovisual stimuli recruit a population of V1 neurons better tuned to the orientation and direction of the visual stimulus than unimodal visual stimuli do.
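The tuning properties on which sound modulation was found to depend are conventionally quantified with vector-sum selectivity indices. A minimal sketch, assuming `responses` holds one trial-averaged response per drifting-grating direction; these particular formulas are common conventions, not necessarily the exact ones used in the study.

```python
# Standard global orientation/direction selectivity indices (assumed convention).
import numpy as np

def osi_dsi(responses, directions_deg):
    d = np.radians(directions_deg)
    r = np.asarray(responses, dtype=float)
    osi = np.abs(np.sum(r * np.exp(2j * d))) / np.sum(r)  # orientation selectivity
    dsi = np.abs(np.sum(r * np.exp(1j * d))) / np.sum(r)  # direction selectivity
    return osi, dsi

# Example: a neuron responding best to gratings drifting at 90 degrees.
print(osi_dsi([1, 2, 8, 2, 1, 2, 6, 2], np.arange(0, 360, 45)))
```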


2000
Vol 84 (4), pp. 2048-2062
Author(s): Mitesh K. Kapadia, Gerald Westheimer, Charles D. Gilbert

To examine the role of primary visual cortex in visuospatial integration, we studied the spatial arrangement of contextual interactions in the response properties of neurons in primary visual cortex of alert monkeys and in human perception. We found a spatial segregation of opposing contextual interactions. At the level of cortical neurons, excitatory interactions were located along the ends of receptive fields, while inhibitory interactions were strongest along the orthogonal axis. Parallel psychophysical studies in human observers showed opposing contextual interactions surrounding a target line with a similar spatial distribution. The results suggest that V1 neurons can participate in multiple perceptual processes via spatially segregated and functionally distinct components of their receptive fields.


1987
Vol 57 (4), pp. 977-1001
Author(s): H. A. Swadlow, T. G. Weyand

The intrinsic stability of the rabbit eye was exploited to enable receptive-field analysis of antidromically identified corticotectal (CT) neurons (n = 101) and corticogeniculate (CG) neurons (n = 124) in visual area I of awake rabbits. Eye position was monitored to within 1/5 degree. We also studied the receptive-field properties of neurons synaptically activated via electrical stimulation of the dorsal lateral geniculate nucleus (LGNd). Whereas most CT neurons had either complex (59%) or motion/uniform (15%) receptive fields, we also found CT neurons with simple (9%) and concentric (4%) receptive fields. Most complex CT cells were broadly tuned to both stimulus orientation and velocity, but only 41% of these cells were directionally selective. We could elicit no visual responses from 6% of CT cells, and these cells had significantly lower conduction velocities than visually responsive CT cells. The median spontaneous firing rates for all classes of CT neurons were 4-8 spikes/s. CG neurons had primarily simple (60%) and concentric (9%) receptive fields, and none of these cells had complex receptive fields. CG simple cells were more narrowly tuned to both stimulus orientation and velocity than were complex CT cells, and most (85%) were directionally selective. Axonal conduction velocities of CG neurons (mean = 1.2 m/s) were much lower than those of CT neurons (mean = 6.4 m/s), and CG neurons that were visually unresponsive (23%) had lower axonal conduction velocities than did visually responsive CG neurons. Some visually unresponsive CG neurons (14%) responded in association with saccadic eye movements. The median spontaneous firing rates for all classes of CG neurons were less than 1 spike/s. All neurons synaptically activated via LGNd stimulation at latencies of less than 2.0 ms had receptive fields that were not orientation selective (89% motion/uniform, 11% concentric), whereas most cells with orientation-selective receptive fields had considerably longer synaptic latencies. Most short-latency motion/uniform neurons responded to electrical stimulation of the LGNd (and of visual area II) with a high-frequency burst (500-900 Hz) of three or more spikes. Action potentials of these neurons were of short duration, thresholds of synaptic activation were low, and spontaneous firing rates were the highest seen in rabbit visual cortex. These properties are similar to those reported for interneurons in several regions of the mammalian central nervous system. Nonvisual sensory stimuli that resulted in electroencephalographic arousal (hippocampal theta activity) had a profound effect on the visual responses of many visual cortical neurons. [Abstract truncated at 400 words]


2017
Author(s): Amelia J. Christensen, Jonathan W. Pillow

Running profoundly alters stimulus-response properties in mouse primary visual cortex (V1), but its effects in higher-order visual cortex remain unknown. Here we systematically investigated how locomotion modulates visual responses across six visual areas and three cortical layers using a massive dataset from the Allen Brain Institute. Although running has been shown to increase firing in V1, we found that it suppressed firing in higher-order visual areas. Despite this reduction in gain, visual responses during running could be decoded more accurately than visual responses during stationary periods. We show that this effect was not attributable to changes in noise correlations, and propose that it instead arises from increased reliability of single neuron responses during running.
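The decoding comparison described here can be outlined with standard tools. The sketch below (schematic, under our own assumptions, not the authors' pipeline) decodes stimulus identity from a trials-by-neurons response matrix separately for running and stationary trials, and removes noise correlations by shuffling trials independently per neuron within each stimulus class, the control that the abstract reports leaves the running advantage intact.

```python
# Assumed decoding sketch: accuracy with and without noise correlations.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def decode_accuracy(X, y):
    # X: trials x neurons response matrix; y: stimulus label per trial
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X, y, cv=5).mean()

def shuffle_noise_correlations(X, y, seed=0):
    # Permute trials independently for each neuron within each stimulus
    # class, destroying noise correlations but preserving tuning.
    rng = np.random.default_rng(seed)
    Xs = X.copy()
    for label in np.unique(y):
        idx = np.flatnonzero(y == label)
        for j in range(X.shape[1]):
            Xs[idx, j] = X[rng.permutation(idx), j]
    return Xs
```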


2021
Author(s): Yulia Revina, Lucy S Petro, Cristina B Denk-Florea, Isa S Rao, Lars Muckli

The majority of synaptic inputs to the primary visual cortex (V1) are non-feedforward, instead originating from local and anatomical feedback connections. Animal electrophysiology experiments show that feedback signals originating from higher visual areas with larger receptive fields modulate the surround receptive fields of V1 neurons. Theories of cortical processing propose various roles for feedback and feedforward processing, but systematically investigating their independent contributions to cortical processing is challenging because feedback and feedforward processes coexist even in single neurons. Capitalising on the larger receptive fields of higher visual areas compared to primary visual cortex (V1), we used an occlusion paradigm that isolates top-down influences from feedforward processing. We utilised functional magnetic resonance imaging (fMRI) and multi-voxel pattern analysis methods in humans viewing natural scene images. We parametrically measured how the availability of contextual information determines the presence of detectable feedback information in non-stimulated V1, and how feedback information interacts with feedforward processing. We show that increasing the visibility of the contextual surround increases scene-specific feedback information, and that this contextual feedback enhances feedforward information. Our findings are in line with theories that cortical feedback signals transmit internal models of predicted inputs.
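The logic of detecting feedback information in non-stimulated V1 reduces to a classification question. A simplified sketch, assuming `occluded_voxels` is a trials-by-voxels matrix drawn only from the retinotopic region of V1 corresponding to the occluded image portion; the study's actual fMRI pipeline is considerably more involved.

```python
# Assumed MVPA sketch: above-chance decoding of the scene from voxels that
# receive no feedforward stimulation can only reflect contextual feedback.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def feedback_information(occluded_voxels, scene_labels):
    # occluded_voxels: trials x voxels (non-stimulated V1 region)
    # scene_labels: which natural scene was shown on each trial
    clf = LinearSVC(C=1.0, max_iter=10000)
    acc = cross_val_score(clf, occluded_voxels, scene_labels, cv=5).mean()
    return acc  # compare against chance = 1 / number of scenes
```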


2013
Vol 2013, pp. 1-10
Author(s): Christopher Kanan

When independent component analysis (ICA) is applied to color natural images, the representation it learns has spatiochromatic properties similar to the responses of neurons in primary visual cortex. Existing models of ICA have only been applied to pixel patches. This does not take into account the space-variant nature of human vision. To address this, we use the space-variant log-polar transformation to acquire samples from color natural images, and then we apply ICA to the acquired samples. We analyze the spatiochromatic properties of the learned ICA filters. Qualitatively, the model matches the receptive field properties of neurons in primary visual cortex, including exhibiting the same opponent-color structure and a higher density of receptive fields in the foveal region compared to the periphery. We also adopt the “self-taught learning” paradigm from machine learning to assess the model’s efficacy at active object and face classification, and the model is competitive with the best approaches in computer vision.
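A hedged sketch of the described pipeline: draw space-variant samples from color images via a log-polar remap, then learn ICA filters on them. OpenCV's `cv2.warpPolar` stands in here for the paper's sampling scheme, which differs in detail, and `imgs` (a list of HxWx3 uint8 or float32 arrays) is assumed to be provided.

```python
# Assumed pipeline sketch: log-polar sampling of color images, then ICA.
import numpy as np
import cv2
from sklearn.decomposition import FastICA

def logpolar_patches(imgs, out_size=32, n_per_img=100, seed=0):
    rng = np.random.default_rng(seed)
    patches = []
    for img in imgs:
        h, w = img.shape[:2]
        lp = cv2.warpPolar(img, (w, h), (w / 2, h / 2), min(h, w) / 2,
                           cv2.INTER_LINEAR + cv2.WARP_POLAR_LOG)  # space-variant resampling
        for _ in range(n_per_img):
            y = rng.integers(0, h - out_size)
            x = rng.integers(0, w - out_size)
            patches.append(lp[y:y + out_size, x:x + out_size].ravel())
    return np.array(patches)

# Learned rows of `filters` are the spatiochromatic ICA filters analyzed above.
ica = FastICA(n_components=64, whiten="unit-variance")
filters = ica.fit(logpolar_patches(imgs)).components_
```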


2014
Vol 26 (4), pp. 693-711
Author(s): Peng Qi, Xiaolin Hu

It is well known that there exist nonlinear statistical regularities in natural images. Existing approaches for capturing such regularities model the image intensities directly, assuming a parameterized distribution for the intensities and learning its parameters. In this letter, we propose to model the outer product of image intensities, assuming a gaussian distribution for it. A two-layer structure is presented, where the first layer is nonlinear and the second layer is linear. Trained on natural images, the first-layer bases resemble the receptive fields of simple cells in the primary visual cortex (V1), while the second-layer units exhibit some properties of the complex cells in V1, including phase invariance and the masking effect. The model can be seen as an approximation of the covariance model proposed by Karklin and Lewicki (2009) but has more robust and efficient learning algorithms.
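The two-layer structure admits a short forward-pass sketch. This is one plausible reading of the abstract (squaring as the first-layer nonlinearity, in the spirit of classic energy models that yield phase invariance), not the authors' learning algorithm; `W1` and `W2` are assumed to have been learned already.

```python
# Assumed forward pass for a nonlinear-then-linear two-layer model.
import numpy as np

def two_layer_response(patch, W1, W2):
    # patch: flattened image patch; W1: first-layer bases (units x pixels);
    # W2: second-layer linear weights (units x first-layer units)
    first = (W1 @ patch) ** 2   # nonlinear layer: squared filter outputs (simple-cell-like)
    second = W2 @ first         # linear layer: pooled, phase-invariant (complex-cell-like)
    return first, second
```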


2019
Author(s): Dechen Liu, Juan Deng, Zhewei Zhang, Zhi-Yu Zhang, Yan-Gang Sun, et al.

Abstract: The orbitofrontal cortex (OFC) encodes expected outcomes and plays a critical role in flexible, outcome-guided behavior. The OFC projects to the primary visual cortex (V1), yet the function of this top-down projection is unclear. We find that optogenetic activation of the OFC projection to V1 reduces the amplitude of V1 visual responses via the recruitment of local somatostatin-expressing (SST) interneurons. Using mice performing a Go/No-Go visual task, we show that the OFC projection to V1 mediates the outcome-expectancy modulation of V1 responses to the reward-irrelevant No-Go stimulus. Furthermore, V1-projecting OFC neurons reduce firing during expectation of reward. In addition, chronic optogenetic inactivation of the OFC projection to V1 impairs, whereas chronic activation of SST interneurons in V1 improves, learning of the Go/No-Go visual task, without affecting immediate performance. Thus, the OFC top-down projection to V1 is crucial for driving visual associative learning by modulating the response gain of V1 neurons to the non-relevant stimulus.


2019
Author(s): Yosef Singer, Ben D. B. Willmore, Andrew J. King, Nicol S. Harper

Visual neurons respond selectively to specific features that become increasingly complex in their form and dynamics from the eyes to the cortex. Retinal neurons prefer localized flashing spots of light, primary visual cortical (V1) neurons prefer moving bars, and neurons in higher cortical areas, such as the middle temporal (MT) area, favor complex features like moving textures. Whether there are general computational principles behind this diversity of response properties remains unclear. To date, no single normative model has been able to account for the hierarchy of tuning to dynamic inputs along the visual pathway. Here we show that hierarchical application of temporal prediction - representing features that efficiently predict future sensory input from past sensory input - can explain how neuronal tuning properties, particularly those relating to motion, change from the retina to higher visual cortex. This suggests that the brain may not have evolved to represent all incoming information efficiently, as implied by some leading theories. Instead, the selective representation of sensory inputs that help predict the future may be a general neural coding principle, which, when applied hierarchically, extracts temporally structured features that depend on increasingly high-level statistics of the sensory input.
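The temporal-prediction principle lends itself to a compact sketch: learn features of past frames that best predict the next frame. The toy PyTorch module below is a deliberately small stand-in (the paper's models are hierarchical and trained on natural movies); all sizes and names are our own assumptions.

```python
# Toy temporal-prediction model: past frames in, predicted next frame out.
import torch
import torch.nn as nn

class TemporalPredictor(nn.Module):
    def __init__(self, n_pixels, n_past=5, n_hidden=100):
        super().__init__()
        # Hidden units trained this way tend to become motion-selective.
        self.encode = nn.Sequential(nn.Linear(n_past * n_pixels, n_hidden), nn.ReLU())
        self.predict = nn.Linear(n_hidden, n_pixels)

    def forward(self, past_frames):
        # past_frames: batch x (n_past * n_pixels), flattened recent history
        return self.predict(self.encode(past_frames))

# Training objective: mean squared error against the actual next frame, e.g.
# loss = ((model(past) - next_frame) ** 2).mean()
```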

