Welcome to The Neuromorphic Engineer

Supervised learning in spiking neural networks


Filip Ponulak

1 March 2007

A learning method called ReSuMe allows fast convergence and optimal solution stability.

Spiking neural networks (SNNs)1–5 exhibit interesting properties that make them particularly suitable for applications that require fast and efficient computation and where the timing of input/output signals carries important information. However, the use of such networks in practical, goal-oriented applications has long been limited by the lack of appropriate supervised-learning methods. Recently, several approaches for learning in SNNs have been proposed.3 Here we focus on one called ReSuMe2,6 (remote supervised method), which corresponds to the Widrow-Hoff rule well known from traditional artificial neural networks. ReSuMe takes advantage of spike-based plasticity mechanisms similar to spike-timing-dependent plasticity (STDP).1,6 Its learning rule is defined by the equation below:

  d/dt w(t) = [Sd(t) − So(t)] · [a + ∫₀^∞ W(s) Sin(t − s) ds]

where Sd(t), Sin(t) and So(t) are the desired, presynaptic, and postsynaptic spike trains,1 respectively. The constant a represents the so-called non-Hebbian contribution to the weight changes. The role of this parameter is to adjust the average strength of the synaptic inputs so as to impose on a neuron the desired level of activity (desired mean firing rate). The function W(s) is known as a learning window1 and, in ReSuMe, its shape is similar to those used in STDP models. The parameter s is the time delay between the correlated spikes. (For a detailed introduction to ReSuMe, please see Reference 6.)
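To make the rule concrete, the update for a single synapse can be sketched in a few lines of Python. This is a minimal illustrative sketch, not the authors' implementation: the exponential learning window W(s) = A·exp(−s/τ) and all parameter values are assumptions chosen for the example.

```python
import numpy as np

def resume_update(w, pre, desired, actual, a=0.01, A=0.5, tau=5.0):
    """One ReSuMe weight update for a single synapse.

    pre, desired, actual: arrays of spike times (ms) for the
    presynaptic, desired, and actual output spike trains.
    a: non-Hebbian term; A, tau: amplitude and time constant of an
    assumed exponential learning window W(s) = A*exp(-s/tau), s >= 0.
    """
    dw = 0.0
    # Each desired spike potentiates: the non-Hebbian term a plus the
    # learning window evaluated over all earlier presynaptic spikes.
    for t in desired:
        s = t - pre[pre <= t]  # delays of causal presynaptic spikes
        dw += a + np.sum(A * np.exp(-s / tau))
    # Each actual output spike depresses by the same expression.
    for t in actual:
        s = t - pre[pre <= t]
        dw -= a + np.sum(A * np.exp(-s / tau))
    return w + dw
```

Note that when the output spike train matches the desired one exactly, the potentiating and depressing terms cancel and the weight stays unchanged, so a perfectly reproduced target pattern is a fixed point of the learning process.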

It has been demonstrated that ReSuMe enables effective learning of complex temporal and spatio-temporal spike patterns with a given accuracy (see Figure 1) and that the method enables us to impose desired input/output properties on the networks.2,7 Unlike most existing supervised-learning methods for SNNs, ReSuMe is independent of the neuron model and can be applied effectively to a broad class of spiking neurons.2,8 Convergence of the ReSuMe learning process has been formally proved for certain classes of learning scenarios.8

Figure 1. ReSuMe is used to train a spiking neural network to store and recall an exemplary target spatio-temporal pattern of spikes (gray bars). The spike trains produced at the network outputs (black bars) before training (A) and after 5, 10, or 15 learning epochs (B, C, D, respectively) are shown. After 15 learning epochs the target pattern is almost perfectly reproduced, with a correlation equal to 0.998.
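The caption quotes a correlation of 0.998 between the target and reproduced patterns. The article does not specify how this correlation was computed; one common approach, sketched below under that assumption, is to low-pass filter both spike trains and take the Pearson correlation of the resulting continuous signals.

```python
import numpy as np

def spike_train_correlation(train_a, train_b, t_max, dt=0.1, tau=2.0):
    """Correlation between two spike trains after exponential smoothing.

    An illustrative similarity measure (not necessarily the one used
    in the article): bin each spike train on a grid of step dt, filter
    it with an exponential kernel of time constant tau, then take the
    Pearson correlation of the two filtered signals.
    """
    t = np.arange(0.0, t_max, dt)
    kernel = np.exp(-np.arange(0.0, 5 * tau, dt) / tau)

    def smooth(train):
        binned = np.zeros_like(t)
        idx = (np.asarray(train) / dt).astype(int)
        binned[idx[idx < len(t)]] = 1.0   # assumes one spike per bin
        return np.convolve(binned, kernel)[: len(t)]

    sa, sb = smooth(train_a), smooth(train_b)
    return float(np.corrcoef(sa, sb)[0, 1])
```

Identical spike trains give a correlation of 1.0, and the value decays smoothly as output spikes drift away from their target times, which makes the measure usable as a training-progress indicator.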

In Reference 7 we demonstrated the generalization properties of spiking neurons trained with ReSuMe. It was also shown that SNNs are able to perform function-approximation tasks. Moreover, we demonstrated that, by appropriately setting the learning-rule parameters, networks can be trained to reproduce desired spiking patterns Sd(t) with a controllable time lag Δt, such that the reproduced signal So(t)≅Sd(t−Δt) (unpublished results). This property has important implications for possible applications of ReSuMe: e.g. in prediction tasks, where SNN-based adaptive models could predict the behaviour of reference objects on-line.

Especially promising applications of SNNs are neuroprostheses for patients with dysfunctions of the visual, auditory, or neuromuscular systems. Our initial simulations in this area indicate the suitability of ReSuMe as a training method for SNN-based neurocontrollers in movement-generation and control tasks.9–11

Real-life applications of SNNs require efficient hardware implementations of the spiking models and the learning methods. Recently, ReSuMe was tested on an FPGA platform:4 the implemented system demonstrated fast learning convergence and stability of the optimal solutions obtained. Its very fast processing enables the system to meet the timing constraints of many real-time tasks.


Filip Ponulak
Institute of Control and Information Engineering, Poznan University of Technology

  1. W. Gerstner and W. Kistler, Spiking Neuron Models. Single Neurons, Populations, Plasticity, Cambridge University Press, 2002.

  2. A. Kasinski and F. Ponulak, Experimental Demonstration of Learning Properties of a New Supervised Learning Method for the Spiking Neural Networks, 15th Int'l Conf. on Artificial Neural Networks: Biological Inspirations 3696, pp. 145-153, 2005.

  3. A. Kasinski and F. Ponulak, Comparison of Supervised Learning Methods for Spike Time Coding in Spiking Neural Networks, Int'l J. of Applied Mathematics and Computer Science 16 (1), pp. 101-113, 2006.

  4. M. Kraft, A. Kasinski and F. Ponulak, Design of the spiking neuron having learning capabilities based on FPGA circuits, Third Int'l IFAC Workshop on Discrete-Event System Design, pp. 301-306, 2006.

  5. W. Maass, Networks of spiking neurons: The third generation of neural network models, Neural Networks 10 (9), pp. 1659-1671, 1997.

  6. F. Ponulak, ReSuMe: new supervised learning method for Spiking Neural Networks, Technical Report, 2005. http://d1.cie.put.poznan.pl/~fp/

  7. F. Ponulak and A. Kasinski, Generalization Properties of SNN Trained with ReSuMe, Euro. Symp. on Artificial Neural Networks, pp. 623-629, 2006.

  8. F. Ponulak, ReSuMe: proof of convergence, Technical Report, 2006. http://d1.cie.put.poznan.pl/~fp/

  9. F. Ponulak, D. Belter and A. Kasinski, Adaptive Central Pattern Generator based on Spiking Neural Networks, Dynamical principles for neuroscience and intelligent biomimetic devices, EPFL LATSIS Symp., pp. 121-122, 2006.

  10. F. Ponulak and A. Kasinski, A novel approach towards movement control with Spiking Neural Networks, Third Int'l Symp. on Adaptive Motion in Animals and Machines, 2005. (Abstract.)

  11. F. Ponulak and A. Kasinski, ReSuMe learning method for Spiking Neural Networks dedicated to neuroprostheses control, Dynamical principles for neuroscience and intelligent biomimetic devices, EPFL LATSIS Symp., pp. 119-120, 2006.

DOI:  10.2417/1200703.0045

