Supervised learning in spiking neural networks

Spiking neural networks (SNNs)1–5 exhibit interesting properties that make them particularly suitable for applications that require fast and efficient computation and in which the timing of input/output signals carries important information. However, the use of such networks in practical, goal-oriented applications has long been limited by the lack of appropriate supervised-learning methods. Recently, several approaches to learning in SNNs have been proposed.3 Here we focus on one called ReSuMe2,6 (remote supervised method), which corresponds to the Widrow-Hoff rule well known from traditional artificial neural networks.

ReSuMe takes advantage of spike-based plasticity mechanisms similar to spike-timing-dependent plasticity (STDP).1,6 Its learning rule is defined by the equation below:

dw(t)/dt = [Sd(t) − So(t)] [a + ∫₀^∞ W(s) Sin(t−s) ds],

where Sd(t), Sin(t), and So(t) are the desired, presynaptic, and postsynaptic spike trains,1 respectively. The constant a represents the so-called non-Hebbian contribution to the weight changes. The role of this parameter is to adjust the average strength of the synaptic inputs so as to impose on a neuron the desired level of activity (the desired mean firing rate). The function W(s) is known as a learning window1 and, in ReSuMe, its shape is similar to those used in STDP models. The parameter s is the time delay between the correlated spikes. (For a detailed introduction to ReSuMe, please see Reference 6.) A discrete-time sketch of this update rule is given below.

It has been demonstrated that ReSuMe enables effective learning of complex temporal and spatio-temporal spike patterns with a given accuracy (see Figure 1) and that the method enables us to impose desired input/output properties on the networks.2,7 In contrast to most existing supervised-learning methods for SNNs, ReSuMe is independent of the neuron model and can be applied effectively to a broad class of spiking neurons.2,8 Convergence of the ReSuMe learning process has been formally proved for some classes of learning scenarios.8

Figure 1. ReSuMe is used to train a spiking neural network to store and recall an exemplary target spatio-temporal pattern of spikes (gray bars). The spike trains produced at the network outputs (black bars) are shown before training (A) and after 5, 10, or 15 learning epochs (B, C, and D, respectively). After 15 learning epochs the target pattern is almost perfectly reproduced, with a correlation of 0.998.

In Reference 7 we demonstrated the generalization properties of spiking neurons trained with ReSuMe. It was also shown that SNNs are able to perform function-approximation tasks. Moreover, we demonstrated that, by appropriately setting the learning-rule parameters, networks can be trained to reproduce desired spiking patterns Sd(t) with a controllable time lag Δt, such that the reproduced signal So(t) ≅ Sd(t−Δt) (unpublished results). This property has very important consequences for possible applications of ReSuMe: for example, in prediction tasks, where SNN-based adaptive models could predict the behaviour of reference objects in on-line mode.

Especially promising applications of SNNs are in neuroprostheses for patients with dysfunctions of the visual, auditory, or neuromuscular systems. Our initial simulations in this area point to the suitability of ReSuMe as a training method for SNN-based neurocontrollers in movement-generation and control tasks.9–11
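To make the rule above concrete, here is a minimal Python sketch of a discrete-time version of the update for a single synapse. It is not from the original article: the exponential learning window W(s) = A·exp(−s/τ), the parameter values, and the helper name resume_update are all illustrative assumptions.

import numpy as np

def resume_update(w, s_in, s_d, s_o, a=0.01, A=1.0, tau=5.0, lr=0.1, dt=1.0):
    """One pass of a discrete-time ReSuMe-style update for one synapse (sketch).

    w                 initial synaptic weight
    s_in, s_d, s_o    binary (0/1) arrays: input, desired, and actual output spike trains
    a                 non-Hebbian term that regulates the mean firing rate
    A, tau            amplitude and time constant of the assumed window W(s) = A*exp(-s/tau)
    """
    smax = int(5 * tau / dt)                             # truncate W(s) where it is negligible
    window = A * np.exp(-np.arange(smax) * dt / tau)
    for t in range(len(s_in)):
        err = s_d[t] - s_o[t]                            # +1: missed spike, -1: extra spike
        if err != 0.0:
            recent = s_in[max(0, t - smax + 1):t + 1][::-1]  # s_in(t), s_in(t-dt), ...
            hebb = np.dot(window[:len(recent)], recent)      # approximates ∫ W(s) Sin(t−s) ds
            w += lr * err * (a + hebb) * dt
    return w

With this sign convention, a desired spike that the neuron failed to fire (Sd = 1, So = 0) strengthens inputs that were recently active, while a spurious output spike (Sd = 0, So = 1) weakens them. This error-driven form is the sense in which ReSuMe corresponds to the Widrow-Hoff rule.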
Real-life applications of SNNs require efficient hardware implementations of the spiking models and the learning methods. Recently, ReSuMe was tested on an FPGA platform:4 the implemented system demonstrated fast learning convergence and stability of the optimal solutions obtained. Thanks to its very fast processing, the system can meet the timing constraints of many real-time tasks.
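As a usage illustration only, continuing from the hypothetical resume_update sketch above, the rule can be driven in an epoch loop toward a target spike train, in the spirit of the experiment in Figure 1. The toy leaky integrate-and-fire neuron, the input statistics, and the raw binary correlation measure are simplifying assumptions; the correlation reported in the article is presumably computed on filtered spike trains.

import numpy as np

rng = np.random.default_rng(0)
T, n_in = 200, 50
s_in = (rng.random((n_in, T)) < 0.05).astype(float)  # random binary input spike trains
s_d = np.zeros(T)
s_d[[40, 90, 150]] = 1.0                             # target output spike train
w = np.full(n_in, 0.5)                               # initial synaptic weights

def lif_run(w, s_in, v_th=1.0, leak=0.9):
    """Toy leaky integrate-and-fire neuron; returns its output spike train."""
    v, s_o = 0.0, np.zeros(s_in.shape[1])
    for t in range(s_in.shape[1]):
        v = leak * v + float(np.dot(w, s_in[:, t]))
        if v >= v_th:                                # fire and reset
            s_o[t], v = 1.0, 0.0
    return s_o

for epoch in range(15):                              # cf. the 15 epochs of Figure 1
    s_o = lif_run(w, s_in)
    for i in range(n_in):                            # each synapse is updated independently
        w[i] = resume_update(w[i], s_in[i], s_d, s_o)

print(np.corrcoef(s_d, lif_run(w, s_in))[0, 1])      # crude similarity check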
http://d1.cie.put.poznan.pl/?fp