
Using neuron dynamics for realistic synaptic learning


Christian Mayr, Johannes Partzsch, Marko Noack, and Rene Schueffny

20 December 2010

Co-developing the mathematical learning model and its neuromorphic integrated-circuit implementation significantly increases biological accuracy while reducing circuit complexity.

Computational tasks such as object and pattern recognition rely on deterministic learning in the brain, carried out mostly at the synapses that link the brain's neurons and shape the overall computational function of a neuron group.1 Modelers build mathematical abstractions of synaptic learning, approximating the behavior of biological synapses in order to copy the relevant processing functions.2–4 Neuromorphic integrated circuits (ICs) implement transistor-based versions of these mathematical abstractions to realize adaptive, error-tolerant emulations of cognitive functions.5,6

In recent years, neuromorphic systems have steadily grown in size to handle more advanced cognitive tasks. This calls for area-efficient circuit implementations, especially of synapses: biology-derived topologies use far more synapses than neurons,7 which makes synapse size the determining factor in overall IC complexity.6,8,9 While synapse size shrinks, the biological accuracy of the synapses' learning function should ideally increase, keeping pace with biologists' and modelers' continuously refined understanding of cognitive functions.10 To date, these conflicting demands on the learning circuit have received little attention. In particular, the usual two-step approach of deriving a mathematical model and subsequently building circuits for it tends to yield very complex circuits.5,8 Co-developing the mathematical model and the circuit implementation, however, can balance both objectives, resulting in a circuit-optimized mathematical model that also exhibits good biological accuracy.

A synapse composes its learning function from the neurons' local-state variables.1 But most models of synaptic learning introduce synthetic dynamical variables driven by higher-order information such as spike timings.2,3,11 From a hardware perspective, implementing dynamical behavior, especially configurable time constants, consumes area. As a result, using the local waveforms in the synaptic learning circuit both closely mimics the biology and saves area-expensive circuitry.

Our learning rule, called local-correlation plasticity (LCP), takes this approach: it measures the relationship between the activities of the triggering (presynaptic) and the receiving (postsynaptic) neurons of a synapse.12 The activity of the postsynaptic neuron is naturally contained in its membrane voltage u(t), which also strongly influences learning.13,14 The presynaptic neuron's activity is visible to the synapse as a conductance change g(t) in the receiving neuron. Multiplying both variables yields the change in synaptic weight w:12

dw(t)/dt = B · g(t) · (u(t) − Θu),     (1)

in which B is a scaling constant and the threshold Θu determines the cross-over between weight increase and decrease. Figure 1 illustrates the operation of the LCP rule. While u(t) determines the direction of the weight change, g(t) acts as a gating amplifier: synaptic learning is only possible in the presence of presynaptic activity.


Figure 1. Principal operation of the local-correlation plasticity (LCP) learning rule, with conductance change g(t), membrane voltage u(t), and resulting synaptic weight (w) over the progress of time (t).
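The dynamics of equation (1) can be made concrete with a small simulation. The following Python fragment is a minimal sketch: the waveform shapes, all parameter values, and the function run_lcp are illustrative assumptions for this article, not the authors' published circuit parameters.

```python
# Minimal numerical sketch of the LCP rule of equation (1):
#   dw/dt = B * g(t) * (u(t) - Theta_u)
# All waveform shapes and parameter values are assumptions chosen
# for illustration; they are not the published circuit values.

DT = 1e-4            # simulation time step (s)
TAU_G = 5e-3         # decay of presynaptic conductance change g(t) (s)
TAU_U = 20e-3        # membrane recovery time constant (s)
U_REST = -70e-3      # resting potential (V)
THETA_U = -70e-3     # LCP threshold; set equal to rest here for simplicity
U_SPIKE = 0.0        # stylized depolarization during a postsynaptic spike (V)
U_RESET = -80e-3     # after-spike reset below threshold (V)
T_SPIKE = 1e-3       # stylized spike duration (s)
B = 1000.0           # scaling constant of equation (1) (arbitrary units)

def run_lcp(pre_spikes, post_spikes, t_end, w0=0.5):
    """Integrate g(t), u(t), and the weight w over [0, t_end]."""
    g, u, w = 0.0, U_REST, w0
    spike_end = -1.0
    for k in range(int(round(t_end / DT))):
        t = k * DT
        if any(abs(t - s) < DT / 2 for s in pre_spikes):
            g += 1.0                          # conductance kick per pre spike
        g -= g / TAU_G * DT                   # exponential decay of g(t)
        u += (U_REST - u) / TAU_U * DT        # membrane relaxes back to rest
        if any(abs(t - s) < DT / 2 for s in post_spikes):
            spike_end = t + T_SPIKE           # brief depolarized spike shape
            u = U_RESET                       # then after-hyperpolarization
        u_eff = U_SPIKE if t < spike_end else u
        w += B * g * (u_eff - THETA_U) * DT   # equation (1)
    return w

# Pre spike 10 ms before the post spike: g(t) overlaps the depolarized
# spike waveform, so the weight increases in this sketch.
print(run_lcp(pre_spikes=[40e-3], post_spikes=[50e-3], t_end=0.15))
```

Note that the sign of the weight change follows directly from the correlation of the two local waveforms, with no explicit spike-timing bookkeeping in the synapse itself.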

Our LCP learning rule minimizes the number of dynamical variables in the system design. The membrane voltage u(t) is generated by the neuron circuit and fed back to all connected synapses. Similarly, the waveform of the synaptic input g(t) needs to be generated only once per sending neuron. Thus, all waveform-generation circuits are separated from the individual synapse. This allows for a compact synapse circuit that essentially consists of a differential pair carrying out the difference computation and multiplication of equation (1).15 The waveform sharing by the synapses fits nicely into the matrix structure used in neuromorphic chip design6,8 (see Figure 2). Each synapse row connects to one postsynaptic neuron. A synapse column is driven by two input waveforms, from which each synapse chooses one.9 This architecture provides sufficient flexibility for mapping typical network models9 while maximally reusing the waveform-generation circuitry, making for an area-saving design. However, we wondered whether this approach could account for modern learning paradigms.


Figure 2. Optimized system architecture with extended configurability. Each synapse can choose, via an external/internal select bit, between an externally driven presynaptic conductance change g(t)ext,i and one generated by internal feedback, g(t)int,i. The presynaptic signals g(t)i are scaled by the synaptic weights wij, and their sums are fed to the respective postsynaptic neurons. u(t): membrane voltage over time t.
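As a rough sketch of the sharing scheme of Figure 2, the following Python fragment stores only a weight and a one-bit waveform select per synapse, while the two candidate waveforms per column are generated once and reused by the whole column. The array names and sizes are assumptions for illustration.

```python
import numpy as np

n_post, n_pre = 4, 6                        # synapse matrix: rows x columns
rng = np.random.default_rng(0)
w = rng.uniform(0.0, 1.0, (n_post, n_pre))  # per-synapse weights w_ij
sel = rng.integers(0, 2, (n_post, n_pre))   # per-synapse external/internal bit

def membrane_input(g_ext, g_int):
    """Sum of selected, weight-scaled column waveforms for each neuron.

    g_ext, g_int: current values of the two shared waveforms per column,
    generated once per sending neuron rather than once per synapse.
    """
    g = np.where(sel == 1, g_int, g_ext)    # each synapse picks one waveform
    return (w * g).sum(axis=1)              # one summed input per row/neuron

# Instantaneous values of the shared column waveforms at some time step:
g_ext = rng.uniform(0.0, 1.0, n_pre)
g_int = rng.uniform(0.0, 1.0, n_pre)
print(membrane_input(g_ext, g_int))         # n_post summed synaptic inputs
```

The point of the structure is that per-synapse state shrinks to a weight and one configuration bit; everything with a time constant lives at the row and column periphery.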

One of the most influential models of synaptic learning, spike-timing-dependent plasticity (STDP),11 accounts for the synaptic weight increase and decrease observed when single spikes of the pre- and postsynaptic neurons are applied to the synapse at fixed time intervals.16 In the LCP rule, the timing dependencies arise naturally from the correlation between postsynaptic voltage and presynaptic conductance change, that is, the time windows are inherent in u(t) and g(t) (compare Figure 1). Figure 3 shows that a circuit implementation of the LCP rule,15 despite not being conceived specifically for STDP reproduction, can approximate the typical STDP window. It is also evident from Figure 3 that, even for three-sigma manufacturing deviations, the STDP window changes only in its scaling, not in its typical shape. Our complexity-reducing approach thus results in a very robust learning implementation.


Figure 3. Simulation results for a spike-timing-dependent plasticity (STDP) protocol applied to a circuit realization of the LCP rule. The weight change is plotted over the time difference between the occurrence of a postsynaptic spike tpost and a presynaptic spike tpre. Even for the complete range of statistical manifestations of the circuit parameters, the learning-time window is robustly replicated.
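The pairing sweep behind a plot like Figure 3 can be sketched by reusing the run_lcp() function from the earlier fragment; the exact window shape it produces depends entirely on the illustrative waveform parameters assumed there.

```python
# Sketch of an STDP pairing protocol: sweep the spike-time difference
# t_post - t_pre and record the weight change produced by equation (1).
# Reuses run_lcp() from the earlier LCP sketch (illustrative parameters).
for dt_ms in (-40, -20, -10, -5, 5, 10, 20, 40):
    t_post = 0.05                             # postsynaptic spike time (s)
    t_pre = t_post - dt_ms * 1e-3             # dt > 0: pre leads post
    dw = run_lcp([t_pre], [t_post], t_end=0.2) - 0.5
    print(f"t_post - t_pre = {dt_ms:+3d} ms  ->  dw = {dw:+.5f}")
```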

Recent computational analysis indicates that learning at the synapse is significantly more involved than a simple STDP dependency, requiring a generic third variable to be postulated.10 Experimental evidence also supports this postulate, with learning dependent on more complex spike orders,2 on spike rate,14 or on alterations of the postsynaptic membrane voltage.13,14 Figure 4 shows that the circuit realization of the LCP rule can replicate such a third variable. In this example, LCP replicates a pulse-triplet dependency with a high degree of biological veracity,2 even though the single synapses are much simpler than typical STDP implementations.5,8


Figure 4. Simulation results for a triplet protocol applied to the LCP rule circuit. The weight change is plotted in red/blue over the timing difference t1 between the first triplet pulse and the middle pulse, and the difference t2 between the middle triplet pulse and the last pulse (see also the insets on both sides). The triplet with two pre(synaptic) and one post(synaptic) spike is valid above the main diagonal; below it, the triplet consists of two post and one pre pulse.
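A triplet protocol like the one in Figure 4 can be sketched the same way. The fragment below runs a pre-post-pre triplet (the region above the main diagonal in Figure 4) over the timing differences t1 and t2, again reusing run_lcp() under the same illustrative assumptions.

```python
# Sketch of a pre-post-pre triplet protocol: t1 separates the first pre
# spike from the middle post spike, t2 the post spike from the last pre
# spike. Reuses run_lcp() from the earlier LCP sketch.
for t1_ms in (5, 10, 20):
    for t2_ms in (5, 10, 20):
        t_mid = 0.05                          # middle (postsynaptic) spike
        pre = [t_mid - t1_ms * 1e-3, t_mid + t2_ms * 1e-3]
        dw = run_lcp(pre, [t_mid], t_end=0.2) - 0.5
        print(f"t1 = {t1_ms:2d} ms, t2 = {t2_ms:2d} ms -> dw = {dw:+.5f}")
```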

The synaptic learning expressed in our LCP rule combines high robustness and area efficiency in circuit terms with a biology-driven mathematical formulation. The LCP rule and its circuit realization reproduce a wide range of complex biological learning phenomena4,15 on a par with state-of-the-art, biology-oriented learning rules.4,17 The circuit realization of the LCP rule is thus well suited to building large-scale neuromorphic systems with a much higher degree of biological accuracy than current approaches. We are involved in several neuromorphic hardware projects that provide an ideal substrate for applying the LCP rule to adaptive closed-loop interfaces with biological neural tissue or to biology-derived, very-large-scale-integration information processing. Our future work will continue the co-development approach, especially incorporating modulation of the learning function (metaplasticity) of the LCP rule4 into the circuit realization.




Authors

Christian Mayr
University of Technology Dresden

Johannes Partzsch
University of Technology Dresden

Marko Noack
University of Technology Dresden

Rene Schueffny
University of Technology Dresden


References
  1. C. Koch, Biophysics of Computation: Information Processing in Single Neurons, Computational Neuroscience Series, Oxford University Press, 1999.

  2. R. Froemke and Y. Dan, Spike-timing-dependent synaptic modification induced by natural spike trains, Nature 416, pp. 433-438, 2002.

  3. J. Brader, W. Senn and S. Fusi, Learning real-world stimuli in a neural network with spike-driven synaptic dynamics, Neural Computation 19 (11), pp. 2881-2912, 2007.

  4. C. Mayr and J. Partzsch, Rate and pulse-based plasticity governed by local synaptic state variables, Frontiers in Synaptic Neurosci. 2 (33), 2010.

  5. T. Koickal, A. Hamilton, S. Tan, J. Covington, J. Gardner and T. Pearce, Analog VLSI circuit implementation of an adaptive neuromorphic olfaction chip, IEEE Trans. on Circuits and Syst. I: Regular Papers 54 (1), pp. 60-73, 2007.

  6. S. Mitra, S. Fusi and G. Indiveri, Real-time classification of complex patterns using spike-based learning in neuromorphic VLSI, IEEE Trans. on Biomed. Circuits and Syst. 3 (1), pp. 32-42, 2009.

  7. T. Binzegger, R. Douglas and K. Martin, A quantitative map of the circuit of cat primary visual cortex, J. Neurosci. 24 (39), pp. 8441-8453, 2004.

  8. J. Schemmel, D. Brüderle, K. Meier and B. Ostendorf, Modeling synaptic plasticity within networks of highly accelerated I&F neurons, IEEE Int'l Symp. on Circuits and Syst. ISCAS 2007, 2007.

  9. M. Noack, J. Partzsch, C. Mayr and R. Schüffny, Biology-derived synaptic dynamics and optimized system architecture for neuromorphic hardware, 17th Int'l Conf. on Mixed Design of Integrated Circuits and Syst. MIXDES 2010, pp. 219-224, 2010.

  10. L. Buesing and W. Maass, A spiking neuron as information bottleneck, Neural Computation 22, pp. 1-32, 2010.

  11. S. Song, K. Miller and L. Abbott, Competitive Hebbian learning through spike-timing-dependent synaptic plasticity, Nat. Neurosci. 3 (9), pp. 919-926, 2000.

  12. E. Bienenstock, L. Cooper and P. Munro, Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex, J. Neurosci. 2 (1), pp. 32-48, 1982.

  13. A. Artola, S. Bröcher and W. Singer, Different voltage-dependent thresholds for inducing long-term depression and long-term potentiation in slices of rat visual cortex, Nature 347, pp. 69-72, 1990.

  14. P. Sjöström, G. Turrigiano and S. Nelson, Rate, timing, and cooperativity jointly determine cortical synaptic plasticity, Neuron 32 (6), pp. 1149-1164, 2001.

  15. C. Mayr, M. Noack, J. Partzsch and R. R. Schüffny, Replicating experimental spike and rate based neural learning in CMOS, IEEE Int'l. Symp. on Circuits and Syst. ISCAS 2010, pp. 105-108, 2010.

  16. G.-Q. Bi and M.-M. Poo, Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type, J. Neurosci. 18 (24), pp. 10464-10472, 1998.

  17. C. Clopath, L. Büsing, E. Vasilaki and W. Gerstner, Connectivity reflects coding: a model of voltage-based STDP with homeostasis, Nat. Neurosci. 13, pp. 344-352, 2010.


 
DOI:  10.2417/1201012.003363



