Learning precise spatiotemporal sequences via biophysically realistic learning rules in a modular, spiking network

eLife ◽  
2021 ◽  
Vol 10 ◽  
Author(s):  
Ian Cone ◽  
Harel Z Shouval

Multiple brain regions are able to learn and express temporal sequences, and this functionality is an essential component of learning and memory. We propose a substrate for such representations via a network model that learns and recalls discrete sequences of variable order and duration. The model consists of a network of spiking neurons placed in a modular, microcolumn-based architecture. Learning is performed via a biophysically realistic learning rule that depends on synaptic ‘eligibility traces’. Before training, the network contains no memory of any particular sequence. After training, presentation of only the first element in that sequence is sufficient for the network to recall an entire learned representation of the sequence. An extended version of the model also demonstrates the ability to successfully learn and recall non-Markovian sequences. This model provides a possible framework for biologically plausible sequence learning and memory, in agreement with recent experimental results.
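
The eligibility-trace mechanism named above is often sketched as a "three-factor" rule: coincident pre/postsynaptic activity leaves a decaying trace at the synapse, and a later neuromodulatory signal converts whatever trace remains into a lasting weight change. A minimal sketch in that spirit (the time constant, gain, and reward timing are illustrative assumptions, not the paper's fitted values):

```python
def eligibility_step(pre, post, w, trace,
                     tau_e=500.0, dt=1.0, eta=0.01, reward=0.0):
    """One time step of a trace-based three-factor rule (illustrative).

    pre, post : binary spike indicators for this step
    trace     : running eligibility trace of the synapse
    reward    : neuromodulator signal; zero leaves the weight untouched
    """
    trace += -dt / tau_e * trace + pre * post   # decay + Hebbian coincidence
    w += eta * reward * trace                   # consolidate on neuromodulation
    return w, trace

# A coincident pre/post spike at t=0, followed by reward at t=200 ms:
w, trace = 0.5, 0.0
for t in range(300):
    spike = 1.0 if t == 0 else 0.0
    r = 1.0 if t == 200 else 0.0
    w, trace = eligibility_step(spike, spike, w, trace, reward=r)
print(round(w, 4))  # weight grew by eta times the trace surviving at t=200
```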

2020 ◽  
Author(s):  
I. Cone ◽  
H. Z. Shouval

The ability to express and learn temporal sequences is an essential part of learning and memory. Learned temporal sequences are expressed in multiple brain regions, and as such there may be a common design in the circuits that mediate them. This work proposes a substrate for such representations via a biophysically realistic network model that can robustly learn and recall discrete sequences of variable order and duration. The model consists of a network of spiking leaky-integrate-and-fire model neurons placed in a modular architecture designed to resemble cortical microcolumns. Learning is performed via a learning rule with “eligibility traces”, which hold a history of synaptic activity before being converted into changes in synaptic strength upon neuromodulator activation. Before training, the network responds to incoming stimuli but contains no memory of any particular sequence. After training, presentation of only the first element in a sequence is sufficient for the network to recall an entire learned representation of that sequence. An extended version of the model also demonstrates the ability to successfully learn and recall non-Markovian sequences. This model provides a possible framework for biologically realistic sequence learning and memory, and is in agreement with recent experimental results, which have shown sequence-dependent plasticity in sensory cortex.
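
The cued-recall property, where presenting only the first element replays the whole sequence, reduces to learned feedforward links between successive modules. A rate-based caricature of that chaining (the module count, binary threshold dynamics, and unit weights are assumptions for illustration, not the paper's spiking implementation):

```python
import numpy as np

# After learning, module i excites module i + 1 (a chain of "microcolumns").
n_modules = 5
W = np.zeros((n_modules, n_modules))
for i in range(n_modules - 1):
    W[i + 1, i] = 1.0                  # learned link: element i -> element i+1

activity = np.zeros(n_modules)
activity[0] = 1.0                      # cue only the first element
for t in range(n_modules):
    print(f"t={t}: module {np.argmax(activity)} active")
    activity = (W @ activity > 0.5).astype(float)   # threshold propagation
```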


2009 ◽  
Vol 21 (11) ◽  
pp. 2991-3009 ◽  
Author(s):  
Lucas C. Parra ◽  
Jeffrey M. Beck ◽  
Anthony J. Bell

A feedforward spiking network represents a nonlinear transformation that maps a set of input spikes to a set of output spikes. This mapping transforms the joint probability distribution of incoming spikes into a joint distribution of output spikes. We present an algorithm for synaptic adaptation that aims to maximize the entropy of this output distribution, thereby creating a model for the joint distribution of the incoming point processes. The learning rule that is derived depends on the precise pre- and postsynaptic spike timings. When trained on correlated spike trains, the network learns to extract independent spike trains, thereby uncovering the underlying statistical structure and creating a more efficient representation of the incoming spike trains.
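
Maximizing the entropy of the output distribution is the Infomax principle; the classic Bell-Sejnowski natural-gradient update below is the static-rate analogue of the spike-timing rule derived in this paper, shown only for intuition (the sigmoid nonlinearity, mixing matrix, and learning rate are assumptions):

```python
import numpy as np

def infomax_step(W, x, lr=0.01):
    """Natural-gradient Infomax update (Bell-Sejnowski style): maximizing
    output entropy through a sigmoid drives the rows of W toward
    statistically independent components of x."""
    u = W @ x                                    # linear stage
    y = 1.0 / (1.0 + np.exp(-u))                 # squashing nonlinearity
    return W + lr * (np.eye(len(u)) + np.outer(1.0 - 2.0 * y, u)) @ W

rng = np.random.default_rng(0)
A = np.array([[1.0, 0.6], [0.4, 1.0]])           # unknown mixing matrix
W = np.eye(2)
for _ in range(5000):
    s = rng.laplace(size=2)                      # independent sources
    W = infomax_step(W, A @ s)                   # learn to unmix them
print(W @ A)                                     # approx. a scaled permutation
```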


2000 ◽  
Vol 23 (4) ◽  
pp. 550-551
Author(s):  
Mikhail N. Zhadin

The absence of a clear influence of an animal's behavioral responses on Hebbian associative learning in the cerebral cortex requires some changes to the Hebbian learning rules. The participation of the brain's monoaminergic systems in Hebbian associative learning is considered.
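
The proposed monoaminergic involvement is commonly formalized as a "three-factor" rule in which a neuromodulator gates the Hebbian term, so that co-activity alone changes nothing. A minimal sketch under that assumption (the gain value is arbitrary):

```python
def three_factor_update(w, pre, post, modulator, eta=0.05):
    """Hebbian co-activity is consolidated only when a monoaminergic
    signal (e.g., dopamine or serotonin level) is present."""
    return w + eta * modulator * pre * post

w = 0.2
w = three_factor_update(w, pre=1.0, post=1.0, modulator=0.0)  # no change
w = three_factor_update(w, pre=1.0, post=1.0, modulator=1.0)  # potentiation
print(w)  # 0.25
```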


2020 ◽  
Vol 6 (1) ◽  
pp. 103-111 ◽  
Author(s):  
Yosef Avchalumov ◽  
Chitra D. Mandyam

Alcohol is one of the oldest pharmacological agents used for its sedative/hypnotic effects, and alcohol abuse and alcohol use disorder (AUD) continue to be major public health issues. AUD is strongly indicated to be a brain disorder, and the molecular and cellular mechanisms by which alcohol produces its effects in the brain are only now beginning to be understood. In the brain, synaptic plasticity, the strengthening or weakening of synapses, can be enhanced or reduced by a variety of stimulation paradigms. Synaptic plasticity is thought to underlie important processes involved in the cellular mechanisms of learning and memory. Long-term potentiation (LTP) is a form of synaptic plasticity that occurs via N-methyl-D-aspartate type glutamate receptor (NMDAR or GluN) dependent and independent mechanisms. NMDARs in particular are a major target of alcohol and are implicated in different types of learning and memory. Therefore, understanding the effect of alcohol on synaptic plasticity and transmission mediated by glutamatergic signaling is becoming important, as it will help us understand the significant contribution of the glutamatergic system to AUD. In the first part of this review, we briefly discuss the mechanisms underlying long-term synaptic plasticity in the dorsal striatum, neocortex, and hippocampus. In the second part, we discuss how alcohol (ethanol, EtOH) modulates long-term synaptic plasticity in these three brain regions, drawing mainly on neurophysiological and electrophysiological studies. Taken together, understanding the mechanism(s) underlying alcohol-induced changes in brain function may lead to the development of more effective therapeutic agents to reduce AUD.


2013 ◽  
Vol 2013 ◽  
pp. 1-13 ◽  
Author(s):  
Falah Y. H. Ahmed ◽  
Siti Mariyam Shamsuddin ◽  
Siti Zaiton Mohd Hashim

A spiking neural network encodes information in the timing of individual spikes. A novel supervised learning rule for SpikeProp is derived to overcome the discontinuities introduced by spike thresholding. This algorithm is based on an error-backpropagation learning rule suited for supervised learning of spiking neurons that use exact spike-time coding. SpikeProp demonstrates that spiking neurons can perform complex nonlinear classification with fast temporal coding. This study proposes enhancements to the SpikeProp learning algorithm for supervised training of spiking networks that can deal with complex patterns. The proposed methods include SpikeProp with particle swarm optimization (PSO) and an angle-driven dependency learning rate. These methods are applied to the SpikeProp network to enhance multilayer learning and optimize weights. Input and output patterns are encoded as spike trains of precisely timed spikes, and the network learns to transform the input trains into target output trains. With these enhancements, the proposed methods outperformed other conventional neural network architectures.
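
As a rough illustration of the PSO enhancement, the sketch below lets a generic particle swarm minimize a spike-time error in place of SpikeProp's gradient step. The error function, swarm size, and coefficients are assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(1)

def spike_time_error(w, target=5.0):
    """Stand-in for a SpikeProp cost: squared gap between a (toy)
    output firing time and the desired firing time."""
    t_fire = 10.0 / (1e-6 + abs(w.sum()))        # toy spike-time model
    return (t_fire - target) ** 2

n_particles, dim = 20, 4
pos = rng.standard_normal((n_particles, dim))    # candidate weight vectors
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_err = np.array([spike_time_error(p) for p in pos])
gbest = pbest[pbest_err.argmin()].copy()

for _ in range(100):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    err = np.array([spike_time_error(p) for p in pos])
    better = err < pbest_err
    pbest[better], pbest_err[better] = pos[better], err[better]
    gbest = pbest[pbest_err.argmin()].copy()

print(spike_time_error(gbest))  # approaches 0 as the swarm converges
```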


2004 ◽  
Vol 14 (01) ◽  
pp. 1-8 ◽  
Author(s):  
RALF MÖLLER

The paper reviews single-neuron learning rules for minor component analysis and suggests a novel minor component learning rule. In this rule, the weight vector length is self-stabilizing, i.e., moving towards unit length in each learning step. In simulations with low- and medium-dimensional data, the performance of the novel learning rule is compared with previously suggested rules.
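
For context, not Möller's rule itself: a negated (anti-Hebbian) Oja step extracts the minor component when the weight is explicitly renormalized after every update. The paper's self-stabilizing rule removes exactly this explicit normalization by driving the length toward one on its own. Data, dimensionality, and learning rate below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
L = np.linalg.cholesky(np.diag([4.0, 0.1]))  # covariance with minor axis [0, 1]

w = rng.standard_normal(2)
w /= np.linalg.norm(w)
for _ in range(5000):
    x = L @ rng.standard_normal(2)           # sample with the above covariance
    y = w @ x                                # single neuron's output
    w -= 0.01 * y * (x - y * w)              # anti-Hebbian (negated Oja) step
    w /= np.linalg.norm(w)                   # explicit renormalization
print(np.round(w, 2))                        # approx. +/-[0, 1], the minor component
```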


2017 ◽  
Vol 27 (03) ◽  
pp. 1750002 ◽  
Author(s):  
Lilin Guo ◽  
Zhenzhong Wang ◽  
Mercedes Cabrerizo ◽  
Malek Adjouadi

This study introduces a novel learning algorithm for spiking neurons, called CCDS, which is able to learn and reproduce arbitrary spike patterns in a supervised fashion, allowing the processing of spatiotemporal information encoded in the precise timing of spikes. Unlike the Remote Supervised Method (ReSuMe), synapse delays and axonal delays in CCDS are variables that are modulated together with weights during learning. The CCDS rule is both biologically plausible and computationally efficient. The properties of this learning rule are investigated extensively through experimental evaluations in terms of reliability, adaptive learning performance, generality to different neuron models, learning in the presence of noise, effects of its learning parameters, and classification performance. The results show that the CCDS learning method achieves learning accuracy and learning speed comparable with ReSuMe, but improves classification accuracy when compared to both the Spike Pattern Association Neuron (SPAN) learning rule and the Tempotron learning rule. The merit of the CCDS rule is further validated on a practical example involving the automated detection of interictal spikes in EEG records of patients with epilepsy. Results again show that, with proper encoding, the CCDS rule achieves good recognition performance.
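
The key idea, modulating delays together with weights, can be caricatured with a single synapse: if the output spike lands late relative to the desired time, strengthen the synapse and shorten its delay, and do the opposite if it is early. The toy spike-time model and gains below are assumptions, not the CCDS equations:

```python
def weight_delay_step(weight, delay, t_actual, t_desired,
                      eta_w=0.05, eta_d=0.1):
    """Toy joint weight/delay update (illustrative, not the CCDS rule)."""
    error = t_actual - t_desired        # positive -> the output fired too late
    weight += eta_w * error             # stronger drive -> earlier spike
    delay -= eta_d * error              # shorter delay -> earlier arrival
    return weight, max(delay, 0.0)      # delays cannot go negative

w, d = 1.0, 5.0
for _ in range(30):
    t_out = d + 8.0 / w                 # toy model of the output spike time
    w, d = weight_delay_step(w, d, t_actual=t_out, t_desired=10.0)
print(round(d + 8.0 / w, 2))            # converges to the target time 10.0
```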


2011 ◽  
Vol 23 (12) ◽  
pp. 3145-3161 ◽  
Author(s):  
Jian K. Liu

It has been established that homeostatic synaptic scaling plasticity can maintain neural network activity in a stable regime. However, the underlying learning rule for this mechanism is still unclear, and whether it depends on the presynaptic site remains a topic of debate. Here we focus on two forms of learning rules: traditional synaptic scaling (SS) without a presynaptic effect, and presynaptic-dependent synaptic scaling (PSD). Analysis of the synaptic matrices reveals that the transition matrices between consecutive synaptic matrices differ between the two rules: they are diagonal and linear in neural activity under SS, but become nondiagonal and nonlinear under PSD. These differences produce different dynamics in recurrent neural networks. Numerical simulations show that network dynamics are stable under PSD but not under SS, which suggests that PSD is the better form for describing homeostatic synaptic scaling plasticity. The matrix analysis used in this study may provide a novel way to examine the stability of learning dynamics.
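
The contrast between the two rules fits in a few lines: under SS every incoming synapse of a neuron is multiplied by one postsynaptic-activity-dependent factor (a diagonal, linear map), while under PSD the correction is additionally weighted by each presynaptic rate (nondiagonal, nonlinear). The parameterizations below are common textbook forms assumed for illustration, not the paper's exact equations:

```python
import numpy as np

def scale_ss(W, rates, target=5.0, eta=0.01):
    """Traditional synaptic scaling: one factor per postsynaptic neuron,
    applied uniformly across its row of incoming weights."""
    return W * (1.0 + eta * (target - rates))[:, None]

def scale_psd(W, rates, target=5.0, eta=0.01):
    """Presynaptic-dependent scaling: the same postsynaptic error is
    distributed in proportion to each presynaptic rate."""
    return W * (1.0 + eta * np.outer(target - rates, rates))

rng = np.random.default_rng(0)
W = rng.random((3, 3))                   # rows: postsynaptic, cols: presynaptic
rates = np.array([2.0, 5.0, 9.0])        # current firing rates
print(scale_ss(W, rates))                # uniform scaling within each row
print(scale_psd(W, rates))               # scaling varies across each row
```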


2013 ◽  
Vol 25 (6) ◽  
pp. 1472-1511 ◽  
Author(s):  
Yan Xu ◽  
Xiaoqin Zeng ◽  
Shuiming Zhong

The purpose of supervised learning with temporal encoding for spiking neurons is to make the neurons emit a specific spike train encoded by the precise firing times of spikes. If only the run time is considered, supervised learning for a spiking neuron is equivalent to distinguishing, by adjusting synaptic weights, the desired output spike times from all other times during the neuron's run, which can be regarded as a classification problem. Based on this idea, this letter proposes a new supervised learning method for spiking neurons with temporal encoding; it first transforms the supervised learning task into a classification problem and then solves the problem using the perceptron learning rule. The experimental results show that the proposed method achieves higher learning accuracy and efficiency than existing learning methods, making it more powerful for solving complex and real-time problems.
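
The reduction to classification can be made concrete: sample the neuron's membrane potential at the desired firing times (positive class) and at all other times (negative class), then apply the perceptron rule to the synaptic weights. The precomputed PSP matrix and learning rate below are simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# psp[t, i]: contribution of input synapse i to the potential at time t.
T, n_inputs = 100, 30
psp = rng.random((T, n_inputs))
desired = {20, 55, 80}                  # desired output spike times
theta = 1.0                             # firing threshold

w = np.zeros(n_inputs)
for _ in range(500):                    # perceptron epochs
    for t in range(T):
        fires = psp[t] @ w >= theta     # would the neuron spike at t?
        if t in desired and not fires:
            w += 0.05 * psp[t]          # missed a desired spike: raise V(t)
        elif t not in desired and fires:
            w -= 0.05 * psp[t]          # spurious spike: lower V(t)

# If the problem is linearly separable, the output matches the target train:
print(sorted(t for t in range(T) if psp[t] @ w >= theta))
```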

