spike train
Recently Published Documents

TOTAL DOCUMENTS: 566 (last five years: 55)
H-INDEX: 51 (last five years: 4)

2021, Vol. 15
Author(s): Stefan Dasbach, Tom Tetzlaff, Markus Diesmann, Johanna Senk

The representation of the natural-density, heterogeneous connectivity of neuronal network models at relevant spatial scales remains a challenge for computational neuroscience and neuromorphic computing. In particular, the memory demands imposed by the vast number of synapses in brain-scale network simulations constitute a major obstacle. Limiting the numerical resolution of synaptic weights is a natural strategy for reducing both memory and compute load. In this study, we investigate the effects of a limited synaptic-weight resolution on the dynamics of recurrent spiking neuronal networks resembling local cortical circuits, and we develop strategies for minimizing deviations from the dynamics of networks with high-resolution synaptic weights. We mimic the effect of a limited synaptic-weight resolution by replacing normally distributed synaptic weights with weights drawn from a discrete distribution, and we compare the resulting statistics characterizing firing rates, spike-train irregularity, and correlation coefficients with those of the reference solution. We show that a naive discretization of synaptic weights generally distorts the spike-train statistics. If the weights are discretized such that the mean and the variance of the total synaptic input currents are preserved, the firing statistics remain unaffected for the types of networks considered in this study. For networks with sufficiently heterogeneous in-degrees, the firing statistics can be preserved even if all synaptic weights are replaced by the mean of the weight distribution. We conclude that even for simple networks with non-plastic neurons and synapses, a discretization of synaptic weights can lead to substantial deviations in the firing statistics unless the discretization is performed with care and guided by a rigorous validation process. For the network model used in this study, the synaptic weights can be replaced by low-resolution weights without affecting its macroscopic dynamical characteristics, thereby saving substantial amounts of memory.
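To make the discretization strategy concrete, the following minimal Python/NumPy sketch contrasts a naive rounding of normally distributed weights with a moment-matched variant that rescales the discrete levels so that the mean and variance of the weight distribution (and hence, for fixed in-degrees and firing rates, of the total synaptic input) are preserved. Function names and parameter values are illustrative assumptions, not taken from the study's code.

import numpy as np

def discretize_naive(weights, levels):
    """Snap each weight to the nearest of `levels` equally spaced values
    spanning the empirical weight range (naive rounding)."""
    grid = np.linspace(weights.min(), weights.max(), levels)
    idx = np.argmin(np.abs(weights[:, None] - grid[None, :]), axis=1)
    return grid[idx]

def discretize_moment_matched(weights, levels):
    """Naively discretize, then shift and rescale the discrete values so
    that their mean and variance match those of the original weights.
    The affine correction keeps the same number of distinct levels; it
    only moves the grid to preserve the first two moments."""
    w_d = discretize_naive(weights, levels)
    return (w_d - w_d.mean()) / w_d.std() * weights.std() + weights.mean()

rng = np.random.default_rng(0)
w = rng.normal(loc=0.1, scale=0.01, size=100_000)  # illustrative weights (e.g., mV)
for f in (discretize_naive, discretize_moment_matched):
    w_d = f(w, levels=4)
    print(f.__name__, "mean error:", w_d.mean() - w.mean(),
          "std error:", w_d.std() - w.std())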


2021
Author(s): Giuseppe de Alteriis, Enrico Cataldo, Alberto Mazzoni, Calogero Maria Oddo

The Izhikevich artificial spiking neuron model is among the most widely employed models in neuromorphic engineering and computational neuroscience, owing to its biological plausibility and the modest computational effort required to discretize it. It has also been adopted for applications with limited computational resources in embedded systems. It is therefore important to strike a compromise between error and computational expense when solving the model's equations numerically. Here we investigate the effects of discretization and identify the solver that realizes the best compromise between accuracy and computational cost, given an available budget of floating-point operations per second (FLOPS). We considered three fixed-step Ordinary Differential Equation (ODE) solvers frequently used in computational neuroscience: the Euler method, the Runge-Kutta 2 method, and the Runge-Kutta 4 method. To quantify the error produced by each solver, we used the Victor-Purpura spike-train distance from an ideal solution of the ODE. Counterintuitively, we found that simple methods such as Euler and Runge-Kutta 2 can outperform more complex ones (i.e., Runge-Kutta 4) in the numerical solution of the Izhikevich model when the same FLOPS budget is allocated to each solver. Moreover, we quantified the neuron rest time (with sub-threshold input producing no output spikes) required for the numerical solution to converge to the ideal solution and thereby cancel the error accumulated during the spike train; in this analysis we found that the required rest time is independent of the firing rate and the spike-train duration. Our results generalize in a straightforward manner to other spiking neuron models and provide a systematic analysis of fixed-step neural ODE solvers aimed at a trade-off between discretization accuracy and computational cost.
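The equal-FLOPS comparison can be sketched as follows: one RK2 step costs roughly two right-hand-side evaluations versus one for Euler, so Euler is given half the step size for the same budget. This is a minimal Python sketch with illustrative regular-spiking parameters and a constant input current; the paper's actual benchmark scores solvers by the Victor-Purpura distance from an ideal solution, whereas here only spike counts are compared.

import numpy as np

def izhikevich_deriv(y, I, a=0.02, b=0.2):
    """Right-hand side of the Izhikevich model (regular-spiking defaults)."""
    v, u = y
    return np.array([0.04 * v**2 + 5.0 * v + 140.0 - u + I,
                     a * (b * v - u)])

def step_euler(y, I, dt):
    return y + dt * izhikevich_deriv(y, I)           # 1 RHS evaluation per step

def step_rk2(y, I, dt):
    k1 = izhikevich_deriv(y, I)
    k2 = izhikevich_deriv(y + dt * k1, I)
    return y + 0.5 * dt * (k1 + k2)                  # 2 RHS evaluations per step

def simulate(step, dt, t_stop=1000.0, I=10.0, c=-65.0, d=8.0):
    """Fixed-step simulation; the hard reset is applied after each step."""
    y = np.array([c, 0.2 * c])                       # v at rest, u = b * v
    spikes, t = [], 0.0
    while t < t_stop:
        y = step(y, I, dt)
        if y[0] >= 30.0:                             # spike: record and reset
            spikes.append(t)
            y[0], y[1] = c, y[1] + d
        t += dt
    return spikes

# Equal budget: Euler at dt = 0.05 ms uses the same number of RHS
# evaluations as RK2 at dt = 0.1 ms over the same simulated time.
print(len(simulate(step_euler, dt=0.05)), len(simulate(step_rk2, dt=0.1)))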


PLoS ONE, 2021, Vol. 16 (10), e0258321
Author(s): Mehrad Sarmashghi, Shantanu P. Jadhav, Uri Eden

Point process generalized linear models (GLMs) provide a powerful tool for characterizing the coding properties of neural populations. Spline basis functions are often used in point process GLMs when the relationship between the spiking and driving signals is nonlinear, but common choices for the structure of these spline bases often lead to a loss of statistical power and to numerical instability when the signals that influence spiking are bounded above or below. In particular, history-dependent spike train models often suffer from these issues at times immediately following a previous spike, which can make inferences related to refractoriness and bursting activity more challenging. Here, we propose a modified set of spline basis functions that assumes a flat derivative at the endpoints, and we show that this limits the uncertainty and numerical issues associated with cardinal splines. We illustrate the application of this modified basis to the problem of simultaneously estimating the place-field and history-dependent properties of a set of neurons from the CA1 region of rat hippocampus, and we compare it with other commonly used basis functions. We have made MATLAB code available to implement spike train regression using these modified basis functions.
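A rough sketch of the idea in Python (the authors' implementation is in MATLAB): a cardinal-spline design matrix whose boundary ghost control points are reflected (p_{-1} = p_1 and p_n = p_{n-2}), so that the spline's derivative vanishes at the first and last knots. This is a simplified reading of the modified basis, assuming a uniform within-segment parameterization; the knot placement and helper name are illustrative, and the authors' exact construction may differ.

import numpy as np

def cardinal_spline_basis(x, knots, s=0.5, flat_ends=True):
    """Design matrix B (len(x) x len(knots)) for a cardinal spline with
    tension s. With flat_ends=True, out-of-range ghost control points are
    reflected so the derivative is zero at both boundary knots."""
    knots = np.asarray(knots, float)
    n = len(knots)
    # Catmull-Rom-style blending matrix for one segment, tension s.
    M = np.array([[-s, 2 - s, s - 2, s],
                  [2 * s, s - 3, 3 - 2 * s, -s],
                  [-s, 0.0, s, 0.0],
                  [0.0, 1.0, 0.0, 0.0]])
    B = np.zeros((len(x), n))
    seg = np.clip(np.searchsorted(knots, x, side='right') - 1, 0, n - 2)
    u = (x - knots[seg]) / (knots[seg + 1] - knots[seg])
    U = np.stack([u**3, u**2, u, np.ones_like(u)], axis=1)
    W = U @ M                       # weights on the 4 control points per x
    for j, (i, w) in enumerate(zip(seg, W)):
        for k, c in zip([i - 1, i, i + 1, i + 2], w):
            if k < 0:               # ghost point before the first knot
                k = 1 if flat_ends else 0
            elif k > n - 1:         # ghost point past the last knot
                k = n - 2 if flat_ends else n - 1
            B[j, k] += c
    return B

# Example: history-dependence covariates at lags 1..200 ms, with knots
# concentrated near the refractory period (illustrative placement).
lags = np.arange(1, 201, dtype=float)
knots = np.array([1, 5, 10, 20, 50, 100, 200], dtype=float)
B = cardinal_spline_basis(lags, knots)

The resulting matrix B can then serve as the history-dependence portion of the design matrix in a Poisson GLM fit.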


2021, pp. 389-393
Author(s): Matjaž Divjak, Lukas G. Wiedemann, Andrew J. McDaid, A. Holobar

Author(s): Mark M. Iskarous, Sriramana Sankar, Qianwei Li, Christopher L. Hunt, Nitish V. Thakor
