
Multiplier-less reconfigurable architectures for spiking neural networks


Arfan Ghani

11 June 2008

Large numbers of neurons can be implemented on fully parallel reconfigurable platforms such as FPGAs.

Recent advances in reconfigurable hardware, such as field-programmable gate arrays (FPGAs), have made it possible to implement complex systems in comparatively little time. One drawback of implementing neural systems on reconfigurable platforms is that neurons take up much more device area than they would in custom hardware, which restricts the number of spiking neurons that can operate on a single device. To mitigate this, we propose area-efficient, multiplier-less reconfigurable architectures for the large-scale implementation of integrate-and-fire and leaky-integrate-and-fire neurons. The rationale behind this investigation is to develop a compact reconfigurable cell in the context of so-called Liquid State Machines1 to model several distributed columns2 and solve problems such as speech recognition.3

Biological neurons process information through short pulses called spikes. Each spike contributes to the membrane potential according to the strength of the synapse through which it arrives. The most biologically plausible models, such as Hodgkin-Huxley, are not well suited to hardware implementation because of the computational resources they require. Therefore, simplified models such as integrate and fire (IF) or leaky integrate and fire (LIF)4,5 are generally used instead.

In an IF neuron model, stimuli are integrated over time and a spike is generated once the membrane potential surpasses a certain threshold. Mathematically, the model can be expressed as:

τm·du/dt = −u(t) + R·I(t)

where u is the membrane potential, τm is the membrane time constant of the neuron, R is the membrane resistance, and I(t) is the input current; a spike is emitted, and u is reset, when u crosses the firing threshold. (For a perfect integrator, the leak term −u(t) is dropped.)
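To make the dynamics concrete, the following minimal Python sketch (ours, not from the article; the weight and threshold values are illustrative) implements a discrete-time IF neuron: weighted input spikes are accumulated, and an output spike followed by a reset occurs once the threshold is crossed.

    # Minimal discrete-time integrate-and-fire neuron (illustrative parameters).
    def if_neuron(input_spikes, weight=0.25, threshold=1.0):
        """Accumulate weighted input spikes; spike and reset at threshold."""
        u = 0.0                   # membrane potential
        output = []
        for s in input_spikes:    # s is 0 or 1 at each time step
            u += weight * s       # pure integration: no leak term
            if u >= threshold:
                output.append(1)  # output spike
                u = 0.0           # reset the membrane potential
            else:
                output.append(0)
        return output

    print(if_neuron([1, 1, 0, 1, 1, 1, 0, 1]))  # -> [0, 0, 0, 0, 1, 0, 0, 0]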

Multiplication is one of the most common and critical operations in many computational systems, and it is a major bottleneck in implementing large-scale spiking neural networks. Low-cost FPGA devices have no dedicated multiplier units on chip, and even high-end FPGAs provide far fewer embedded multipliers than the number of neurons that can otherwise be implemented. If every synapse needs its own multiplier, the network size is capped by the embedded-multiplier count; for larger networks, the general-purpose logic must be used to build multipliers instead, which severely restricts the number of synapses and neurons that fit on a target device.


Figure 1. (Top) The logic required to implement synapses. (Bottom) The circuitry required to implement membranes.

The proposed approach to the implementation of spiking neurons replaces multipliers with spike counters. The weight of each synaptic connection can in general be modified; here, because learning is done off-line, the weights can be treated as constants. An accumulator sums the incoming synaptic values into the membrane potential. As shown in Figure 1, spikes are produced by the spike generator and the total strength of the synapse is determined by the synaptic weight: once the desired synaptic strength is reached, an output pulse is generated (excitatory, for an excitatory synapse). The scheme is flexible in that both single spikes and spike trains can be handled. The neuron's membrane is modeled with a simple accumulator and a comparator: an output spike is generated when the potential exceeds a threshold value, and the spike is then transmitted to other neurons in the network. This makes it feasible to implement large numbers of relatively simple spiking neurons.6 The proposed model is composed of two sub-units, the synapse and the soma (membrane): the synapse model is shown at the top of Figure 1 and the soma at the bottom. Hardware simulations are shown in Figure 2.
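As a software analogue of Figure 1 (a sketch under assumed integer weights and thresholds, not the authors' hardware description), the fragment below replaces the weight multiplication with a spike counter: each incoming spike loads a counter with its synaptic weight, the counter then emits that many unit pulses, and the soma therefore needs only an accumulator and a comparator.

    # Multiplier-less synapse/soma pair: a spike counter replaces the multiplier.
    # Integer weights are fixed off-line, as in the article; values illustrative.
    def run_neuron(spike_trains, weights, threshold):
        """spike_trains: one 0/1 list per synapse; weights: pulses per spike."""
        counters = [0] * len(weights)          # one spike counter per synapse
        u = 0                                  # membrane accumulator
        output = []
        for t in range(len(spike_trains[0])):
            for i, train in enumerate(spike_trains):
                if train[t]:
                    counters[i] += weights[i]  # an input spike loads the counter
                if counters[i] > 0:
                    counters[i] -= 1           # emit one unit pulse this step...
                    u += 1                     # ...and accumulate it in the membrane
            if u >= threshold:                 # comparator against the threshold
                output.append(1)               # output spike
                u = 0                          # reset the membrane
            else:
                output.append(0)
        return output

    spikes = [[1, 0, 0, 1, 0, 0],   # pre-synaptic train, synapse 0
              [0, 1, 0, 0, 1, 0]]   # pre-synaptic train, synapse 1
    print(run_neuron(spikes, weights=[3, 2], threshold=4))  # -> [0, 0, 1, 0, 0, 1]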


Figure 2. The bottom plot shows the input spikes, which are accumulated in the membrane (middle plot), and the output spike generated when the membrane potential exceeds a threshold value (top plot).

Perfect-integrator neuron models such as integrate-and-fire are less biologically plausible than others because they do not include a leakage current that returns the neuron to its resting potential in the absence of a stimulus. The leakage property is important for learning because it gives rise to a form of short-term memory.4,7 In LIF neurons, the membrane potential increases with excitatory incoming synapses; in the absence of pre-synaptic spikes the membrane potential decays, and the model reduces to a simple homogeneous first-order differential equation. The solution to this equation can be expressed as:

u(t) = u_rest + [u(0) − u_rest]·exp(−t/τm)
In the absence of an external input, the membrane potential decays exponentially on the time scale τm towards the resting potential u_rest. An incoming spike changes the behavior of the membrane according to the synaptic strength: an excitatory input spike slows down the decay of the activation. The behavior of a leaky integrate-and-fire neuron is shown in Figure 4, and the reconfigurable architectures that emulate the IF and LIF neuron models are shown in Figure 3.
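The article does not spell out the arithmetic behind the leak, but a common multiplier-less choice in fixed-point hardware is to approximate the exponential decay with a right shift; the Python sketch below is our illustration of that idea (SHIFT and THRESHOLD are assumed values), leaking the membrane by u >> SHIFT each step so that it relaxes between inputs roughly as in Figure 4.

    # Leaky integrate-and-fire with a multiplier-less leak: subtracting a
    # right-shifted copy of u approximates u *= (1 - 2**-SHIFT) per step,
    # i.e. exponential decay with a time constant of about 2**SHIFT steps.
    SHIFT = 3        # leak rate: tau of roughly 8 time steps (illustrative)
    THRESHOLD = 90   # firing threshold in fixed-point units (illustrative)

    def lif_neuron(input_current):
        u = 0                        # membrane potential (integer fixed point)
        output = []
        for i in input_current:      # integer synaptic input per time step
            u += i                   # integrate the stimulus
            u -= u >> SHIFT          # leak: decay towards the resting level (0)
            if u >= THRESHOLD:
                output.append(1)     # output spike
                u = 0                # reset after the spike
            else:
                output.append(0)
        return output

    # A burst of inputs followed by silence: the membrane decays in the gaps.
    print(lif_neuron([40, 40, 40, 0, 0, 0, 40, 40, 0, 0]))  # -> [0, 0, 1, 0, ...]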


Figure 3. The architectures of the integrate-and-fire (top) and leaky-integrate-and-fire (bottom) neurons.


Figure 4. Input spikes are accumulated and an output spike is generated after a certain threshold is reached. The membrane decays gradually in the absence of input spikes.

The design was synthesised with the Xilinx ISE design suite and implemented on a Virtex-II Pro device (xc2vp50), and its resource requirements were calculated. In a linear comparison, almost 2×10³ synapses and 1.2×10³ neurons can be implemented with the proposed architecture. A state-of-the-art device such as the Virtex-5, which has 331,775 logic cells and 51,840 slices, can fit almost 4.3×10³ synapses and 2.5×10³ fully parallel neurons. The proposed design has been used successfully for a speech recognition application in the context of Liquid State Machines, and the next step is to model multiple cortical columns for multiple input classifications (sensory fusion).




Author

Arfan Ghani
Intelligent Systems Research Centre, University of Ulster

Arfan Ghani has a Master of Science degree in Computer Systems Engineering from the Technical University of Denmark and has worked with Vitesse Semiconductors and Microsoft (Denmark), Intel Research, University of Cambridge, and Siemens (Pakistan). He is currently a doctoral candidate at the University of Ulster, UK, and is interested in neuro-inspired computing, VLSI system design, intelligent signal processing, and reconfigurable computing.


References
  1. W. Maass, T. Natschläger and H. Markram, Real-time computing without stable states: A new framework for neural computation based on perturbations, Neural Computation 14 (11), pp. 2531-2560, 2002.

  2. V. B. Mountcastle, An organizing principle for cerebral function: the unit model and the distributed system, The Mindful Brain, MIT Press, Cambridge, MA, 1978.

  3. A. Ghani, T. M. McGinnity, L. P. Maguire and J. G. Harkin, Neuro-inspired speech recognition with recurrent spiking neurons, ICANN, Springer-Verlag, 2008. Accepted for publication.

  4. W. Maass, Networks of spiking neurons: The third generation of neural network models, Neural Networks 10 (9), pp. 1659-1671, 1997.

  5. W. Maass and C. Bishop, Pulsed Neural Networks, MIT Press, Cambridge, MA, 1999.

  6. A. Ghani, T. M. McGinnity, L. P. Maguire and J. G. Harkin, Area efficient architecture for large scale implementation of biologically plausible neural networks on reconfigurable hardware, Field Programmable Logic and Applications (FPL), Spain, 2006.

  7. W. Maass, T. Natschläger and H. Markram, Synapses as dynamic memory buffers, Neural Networks 15, pp. 155-161, 2002.


 
DOI:  10.2417/1200806.0055



