A Spike-Timing-Based Integrated Model for Pattern Recognition

2013 ◽  
Vol 25 (2) ◽  
pp. 450-472 ◽  
Author(s):  
Jun Hu ◽  
Huajin Tang ◽  
K. C. Tan ◽  
Haizhou Li ◽  
Luping Shi

During the past few decades, remarkable progress has been made in solving pattern recognition problems using networks of spiking neurons. However, pattern recognition as a full computational process, from sensory encoding to synaptic learning, remains underexplored, as most existing models or algorithms target only part of this process. Furthermore, many learning algorithms proposed in the literature neglect or pay little attention to sensory information encoding, which makes them incompatible with neurally realistic sensory signals encoded from real-world stimuli. By treating sensory coding and learning as one systematic process, we attempt to build an integrated model based on spiking neural networks (SNNs) that performs sensory neural encoding and supervised learning with precisely timed sequences of spikes. With emerging evidence of precisely timed spiking activity, the view that information is represented by the explicit firing times of action potentials rather than by mean firing rates has been receiving increasing attention. External sensory stimulation is first converted into spatiotemporal spike patterns using a latency-phase encoding method and then transmitted to the subsequent network for learning. Spiking neurons are trained to reproduce target signals encoded with precisely timed spikes. We show that when supervised spike-timing-based learning is used, different spatiotemporal patterns are recognized by distinct spike patterns with millisecond temporal precision.
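To make the encoding step concrete, the following is a minimal sketch of latency-phase encoding in the spirit described above: stronger stimulus intensities fire earlier within an oscillation cycle that serves as the phase reference. The function name and all parameter values (`t_max`, `period`, `n_cycles`) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def latency_phase_encode(intensities, t_max=10.0, period=25.0, n_cycles=4):
    """Minimal latency-phase encoding sketch (hypothetical parameters).

    Each input intensity in [0, 1] is mapped to a spike latency: stronger
    inputs fire earlier within an oscillation cycle, and the cycle onset
    acts as a shared phase reference. Returns one spike time per afferent
    per cycle (in ms).
    """
    intensities = np.clip(np.asarray(intensities, dtype=float), 0.0, 1.0)
    latencies = t_max * (1.0 - intensities)      # strong input -> short latency
    cycles = np.arange(n_cycles) * period        # phase reference for each cycle
    # spike_times[i, c] = firing time of afferent i in cycle c
    spike_times = latencies[:, None] + cycles[None, :]
    return spike_times

# Example: three afferents driven by different stimulus intensities
print(latency_phase_encode([0.9, 0.5, 0.1], n_cycles=2))
```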


2019 ◽  
Vol 29 (08) ◽  
pp. 1850059 ◽  
Author(s):  
Marie Bernert ◽  
Blaise Yvert

Bio-inspired computing with artificial spiking neural networks promises performance beyond that of currently available computational approaches. Yet the number of applications of such networks remains limited, owing to the absence of generic training procedures for complex pattern recognition, which forces the design of a dedicated architecture for each situation. We developed a spike-timing-dependent plasticity (STDP) spiking neural network (SNN) to address spike sorting, a central pattern recognition problem in neuroscience. The network is designed to process an extracellular neural signal in an online and unsupervised fashion. The signal stream is continuously fed to the network and processed through several layers to output spike trains matching the ground truth after a short learning period requiring only a small amount of data. The network features an attention mechanism to handle the scarcity of action potential occurrences in the signal, and a threshold adaptation mechanism to handle patterns of different sizes. The method outperforms two existing spike-sorting algorithms at low signal-to-noise ratio (SNR) and can be adapted to process several channels simultaneously in the case of tetrode recordings. Such an attention-based STDP network applied to spike sorting opens perspectives for embedding neuromorphic processing of neural data in future brain implants.
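For readers unfamiliar with the learning rule underlying such networks, here is a generic pair-based STDP weight update, a minimal sketch rather than the authors' spike-sorting network; the function name, time constant, and learning rates are assumed for illustration.

```python
import numpy as np

def stdp_update(w, pre_times, post_times,
                a_plus=0.01, a_minus=0.012, tau=20.0, w_min=0.0, w_max=1.0):
    """Generic pair-based STDP sketch (illustrative parameters, in ms).

    For every pre/post spike pair, the synapse is potentiated when the
    presynaptic spike precedes the postsynaptic one and depressed
    otherwise, with an exponential dependence on the timing difference.
    """
    dw = 0.0
    for t_pre in pre_times:
        for t_post in post_times:
            dt = t_post - t_pre
            if dt >= 0:
                dw += a_plus * np.exp(-dt / tau)     # pre before post -> LTP
            else:
                dw -= a_minus * np.exp(dt / tau)     # post before pre -> LTD
    return float(np.clip(w + dw, w_min, w_max))

# Example: a synapse whose presynaptic spikes tend to precede postsynaptic ones
print(stdp_update(0.5, pre_times=[10.0, 30.0], post_times=[12.0, 33.0]))
```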


2015 ◽  
Vol 27 (3) ◽  
pp. 561-593 ◽  
Author(s):  
Himanshu Akolkar ◽  
Cedric Meyer ◽  
Xavier Clady ◽  
Olivier Marre ◽  
Chiara Bartolozzi ◽  
...  

This letter introduces a study to precisely measure what an increase in spike-timing precision can add to spike-driven pattern recognition algorithms. The concept of generating spikes from images by converting gray levels into spike timings currently underlies almost every spike-based model of biological visual systems. The use of images, however, naturally leads to the generation of incorrect, artificial, and redundant spike timings and, more important, contradicts biological findings indicating that visual processing is massively parallel and asynchronous, with high temporal resolution. A new concept for acquiring visual information through pixel-individual asynchronous level-crossing sampling has been proposed in a recent generation of asynchronous neuromorphic visual sensors. Unlike conventional cameras, these sensors acquire data not at fixed points in time for the entire array but at fixed amplitude changes of their input, so the output is optimally sparse in space and time: each pixel is sampled individually and precisely timed only when new (previously unknown) information is available (event-based acquisition). This letter uses the high-temporal-resolution spiking output of neuromorphic event-based visual sensors to show that lowering time precision degrades performance on several recognition tasks, particularly when reaching the acquisition frequencies of conventional machine vision (30-60 Hz). Using information theory to characterize the separability between classes at each temporal resolution shows that high-temporal-resolution acquisition provides up to 70% more information than conventional spikes generated from frame-based acquisition as used in standard artificial vision, thus drastically increasing the separability between classes of objects. Experiments on real data show that the amount of information loss is correlated with temporal precision. Our information-theoretic study highlights the potential of neuromorphic asynchronous visual sensors for both practical applications and theoretical investigations. Moreover, it suggests that representing visual information as a precise sequence of spike times, as reported in the retina, offers considerable advantages for neuro-inspired visual computations.
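The level-crossing sampling principle mentioned above can be sketched in a few lines: an event is emitted only when the input has moved by a fixed amplitude since the last event. This is a simplified single-channel illustration, not the sensor's actual pixel circuit; the threshold `delta` and the function name are assumptions.

```python
import numpy as np

def level_crossing_events(signal, times, delta=0.05):
    """Sketch of asynchronous level-crossing sampling (delta is illustrative).

    An event (time, polarity) is emitted whenever the signal has moved by
    at least `delta` from the level at which the previous event was
    emitted, so output is produced only when new information is available.
    """
    events = []
    ref = signal[0]
    for t, x in zip(times, signal):
        change = x - ref
        while abs(change) >= delta:
            polarity = 1 if change > 0 else -1
            events.append((float(t), polarity))
            ref += polarity * delta
            change = x - ref
    return events

# Example: a slow intensity ramp produces sparse, precisely timed ON events
t = np.linspace(0.0, 1.0, 1000)
print(level_crossing_events(np.clip(t - 0.2, 0.0, None), t)[:5])
```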


Author(s):  
Arkadij Zakrevskij

The theory of Boolean functions, especially with respect to representing these functions in disjunctive or conjunctive normal form, is extended in this chapter to the case of finite predicates. Finite predicates are thereby decomposed into binary units, which correspond to components of Boolean vectors and matrices, and are represented as combinations of these units. The main concepts used for solving pattern recognition problems are then defined, namely the world model, data, and knowledge. Data present information about the existence of objects with definite combinations of properties; knowledge presents information about the existence of regular relationships between attributes. These relationships prohibit some combinations of properties: knowledge thus provides information about the non-existence of objects with certain (prohibited) combinations of attribute values. A special form of regularity representation, called an implicative regularity, is introduced. Any implicative regularity generates an empty interval in the Boolean space of object descriptions, which does not contradict the data. The problem of evaluating the plausibility of induced implicative regularities must also be solved. The pattern recognition problem is then solved in two steps: first, regularities are extracted from the database (inductive inference); second, the obtained knowledge is used for object recognition (deductive inference).
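The two-step scheme can be illustrated with a deliberately small toy: induce prohibited attribute-value combinations from a Boolean data matrix, then use them to judge candidate object descriptions. This sketch restricts itself to pairwise prohibitions and is not Zakrevskij's full procedure; all names and the example data are assumptions.

```python
from itertools import combinations

def induce_pairwise_prohibitions(data):
    """Toy inductive step: find pairs of attribute values never seen together.

    `data` is a list of Boolean attribute vectors describing observed
    objects. A pair of values (i, vi), (j, vj) that never co-occurs in the
    data is treated as a prohibited combination, i.e. a simple implicative
    regularity carving an empty interval out of the Boolean space.
    """
    n = len(data[0])
    prohibitions = []
    for i, j in combinations(range(n), 2):
        for vi in (0, 1):
            for vj in (0, 1):
                if not any(row[i] == vi and row[j] == vj for row in data):
                    prohibitions.append(((i, vi), (j, vj)))
    return prohibitions

def consistent(obj, prohibitions):
    """Toy deductive step: an object is plausible if it hits no prohibition."""
    return not any(obj[i] == vi and obj[j] == vj
                   for (i, vi), (j, vj) in prohibitions)

data = [(1, 0, 1), (1, 1, 1), (0, 0, 1)]
rules = induce_pairwise_prohibitions(data)
print(consistent((0, 1, 1), rules))   # False: contradicts the induced knowledge
print(consistent((1, 0, 1), rules))   # True: agrees with it
```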


2016 ◽  
Vol 26 (05) ◽  
pp. 1650020 ◽  
Author(s):  
Jesús A. Garrido ◽  
Niceto R. Luque ◽  
Silvia Tolu ◽  
Egidio D’Angelo

The majority of operations carried out by the brain require learning complex signal patterns for future recognition, retrieval, and reuse. Although learning is thought to depend on multiple forms of long-term synaptic plasticity, how the latter contributes to pattern recognition is still poorly understood. Here, we used a simple model of afferent excitatory neurons and interneurons with lateral inhibition, reproducing a network topology found in many brain areas from the cerebellum to cortical columns. When endowed with spike-timing-dependent plasticity (STDP) at the excitatory input synapses and at the inhibitory interneuron-interneuron synapses, the interneurons rapidly learned complex input patterns. Interestingly, induction of plasticity required that the network be entrained into theta-band oscillations, which set the internal phase reference needed to drive STDP. Inhibitory plasticity effectively distributed multiple patterns among the available interneurons, allowing the simultaneous detection of multiple overlapping patterns. Adding plasticity in intrinsic excitability made the system more robust, allowing self-adjustment and rescaling in response to a broad range of input patterns. The combination of plasticity in lateral inhibitory connections and homeostatic mechanisms in the inhibitory interneurons optimized mutual information (MI) transfer. The storage of multiple complex patterns in plastic interneuron networks could be critical for generating sparse representations of information in the excitatory neuron populations under their control.
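The mutual-information criterion mentioned at the end can be made concrete with a short sketch that estimates MI between the presented pattern and the network's discrete response from their joint empirical distribution. This is a generic estimator under assumed discrete labels, not the authors' analysis pipeline.

```python
import numpy as np

def mutual_information(patterns, responses):
    """Plug-in estimate of MI (in bits) between two discrete label sequences.

    `patterns` holds which input pattern was presented on each trial and
    `responses` the network's discrete response (e.g., which interneuron
    responded); MI is computed from their joint empirical distribution.
    """
    patterns = np.asarray(patterns)
    responses = np.asarray(responses)
    mi = 0.0
    for p in np.unique(patterns):
        for r in np.unique(responses):
            p_xy = np.mean((patterns == p) & (responses == r))
            if p_xy > 0:
                p_x = np.mean(patterns == p)
                p_y = np.mean(responses == r)
                mi += p_xy * np.log2(p_xy / (p_x * p_y))
    return mi

# Example: responses that perfectly track two equiprobable patterns carry 1 bit
print(mutual_information([0, 1, 0, 1], [0, 1, 0, 1]))
```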

