A simple spiking neuron model based on stochastic STDP

pascal.helson@inria.fr

Synaptic plasticity refers to changes in a neuron's morphology, sensitivity and reactivity. Such plastic behaviour is thought to underlie memory formation, which makes it particularly interesting. Popular plasticity models are nowadays based on Spike-Timing-Dependent Plasticity (STDP) rules. In STDP models, spike timings are central, but recent work has shown that firing rate, membrane potential, neuromodulators and many other factors also affect synaptic plasticity. These numerous sources of perturbation partially explain the stochastic behaviour of neurons and of plasticity itself. Another important feature of plasticity is the coexistence of different time scales: long term plasticity acts on time scales ranging from minutes to more than one hour, whereas a spike lasts about one millisecond. There is thus a need to understand how this time scale gap is bridged at the synapse level and how it interacts with noise.
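
For concreteness, here is a minimal sketch of the classical pair-based exponential STDP window, i.e. the weight change as a function of the spike-time difference. The function name and all amplitude and time-constant values are illustrative assumptions and are not parameters of the model presented here.

    import numpy as np

    def stdp_window(dt_ms, a_plus=0.01, a_minus=0.012,
                    tau_plus=20.0, tau_minus=20.0):
        """Classical pair-based STDP window (illustrative parameters).

        dt_ms = t_post - t_pre: a pre-before-post pairing (dt_ms > 0)
        potentiates the synapse, a post-before-pre pairing depresses it.
        """
        return np.where(dt_ms > 0,
                        a_plus * np.exp(-dt_ms / tau_plus),
                        -a_minus * np.exp(dt_ms / tau_minus))

    # Weight changes for a few spike-time differences (in milliseconds)
    print(stdp_window(np.array([-30.0, -5.0, 5.0, 30.0])))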

Here we present a new model of stochastic plasticity in networks of spiking neurons described by two-state Markov chains. The non-plastic network is rich enough to be realistic: it reproduces phenomena widely observed by biologists, such as spontaneous oscillations, bi-stability and the presence of different time scales. At the same time, it is simple enough to be analyzed mathematically and simulated numerically. The most original point of our study is the introduction of a new STDP rule, which we implement in the well-known stochastic Wilson-Cowan model of spiking neurons. More precisely, with this plasticity rule, our model is a piecewise deterministic Markov process. Since, in the context of long term plasticity, the synaptic weight dynamics are much slower than the network dynamics, a time scale analysis enables us to eliminate the neuronal dynamics from the equations: we derive an equation for the slow weight dynamics alone, in which the neuronal dynamics are replaced by their stationary distribution. Thereby, we no longer need to simulate the dynamics of thousands of fast neurons, and we obtain an equation that is much simpler to analyze. We then discuss the implications of this derivation for learning and adaptation in neural networks.
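
The following is a minimal Python sketch of the two ingredients just described: an exact (Gillespie-type) simulation of the fast two-state network with frozen weights, and a slow weight update driven by the estimated stationary moments. The sigmoid activation f, the Hebbian-style covariance drift and all parameter values are illustrative assumptions standing in for the actual STDP rule and averaged equation, which are not specified in this abstract.

    import numpy as np

    rng = np.random.default_rng(0)

    def f(u):
        """Sigmoid activation rate (illustrative choice)."""
        return 1.0 / (1.0 + np.exp(-u))

    def gillespie(w, x, beta=1.0, t_max=200.0):
        """Exact simulation of the fast two-state network with frozen weights w.

        x[i] in {0, 1}: quiescent/active. An active neuron deactivates at
        rate beta; a quiescent neuron i activates at rate f(sum_j w[i, j] * x[j]).
        Returns time-averaged activity and pairwise co-activity, which
        estimate the stationary moments entering the averaged equation.
        """
        n = len(x)
        t = 0.0
        mean_x = np.zeros(n)
        mean_xx = np.zeros((n, n))
        while t < t_max:
            rates = np.where(x == 1, beta, f(w @ x))
            total = rates.sum()
            dt = min(rng.exponential(1.0 / total), t_max - t)
            mean_x += x * dt
            mean_xx += np.outer(x, x) * dt
            t += dt
            if t >= t_max:
                break
            i = rng.choice(n, p=rates / total)
            x[i] = 1 - x[i]  # flip the state of the chosen neuron
        return mean_x / t_max, mean_xx / t_max

    # Slow weight dynamics: the fast network is replaced by its stationary
    # moments (hypothetical Hebbian-style drift, standing in for the
    # averaged STDP equation derived in the paper).
    n = 20
    w = rng.normal(0.0, 0.5, (n, n))
    x = rng.integers(0, 2, n)
    eps = 0.05  # time scale separation parameter (slow/fast ratio)
    for step in range(10):
        m, c = gillespie(w, x)
        w += eps * (c - np.outer(m, m))  # drift from stationary covariance
    print("mean activity:", m.mean())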