This workshop was held on 4 March 2008, Reston, VA, USA.

SyNAPSE is a new programme about to be launched by DARPA in the form of a Broad Agency Announcement (BAA), expected 17 March 2008. SyNAPSE stands for Systems of Neuromorphic Adaptive Plastic Scalable Electronics. The proposed programme is about five years long, and has some very ambitious targets. For further information from DARPA itself, see the website, from which one may join the Teaming website to get access to further material. The BAA is expected to remain open for one year, but there will be an initial first round, whose closing date is currently proposed to be 2 May 2008. DARPA expect the community to organize themselves into full teams (and specifically, not to apply for single parts of the programme).

Outline of the proposed project

The proposed project is in five phases. Phase 0 (which is expected to be completed in 9 to 12 months) is concerned with demonstrating capabilities: basic synapse and neuron development, basic architecture development, and preparatory studies for emulation, simulation, and for developing capabilities for successful performance within an environment. One important issue in phase 0 is the demonstration of technologies which show themselves to be capable of scaling to very high densities and low power consumption. Interestingly, many of the phase 0 goals are oriented around developing a hardware capability for spike-timing-dependent plastic synapses. Phase 1 extends the elements in phase 0, with the expectation of the demonstration of the core circuits in hardware, including the specification of (possibly novel) chip fabrication processes. A full-scale design methodology should be produced, supporting capabilities for huge numbers of synapses and neurons (of the order of 10^14 and 10^10 respectively). Large-scale simulation tools are expected in this phase. Interaction with the environment through a single major sensory modality is also proposed here (and this is expected to be visual).
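The spike-timing-dependent plasticity that phase 0 targets in hardware can be illustrated with a minimal pair-based rule. This is only a sketch: the BAA does not prescribe a particular plasticity rule, and the amplitudes and time constants below are hypothetical values of my own choosing.

```python
import math

# Illustrative pair-based STDP rule (not anything specified in the BAA).
# Parameters are hypothetical; real values vary widely across models.
A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # decay time constants (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (spike times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre fires before post: strengthen the synapse
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    elif dt < 0:  # post fires before pre: weaken the synapse
        return -A_MINUS * math.exp(dt / TAU_MINUS)
    return 0.0    # coincident spikes: no change in this simple model
```

The point of implementing such a rule directly in device physics, rather than in software, is that the weight update happens locally at each of the ~10^14 synapses with no global memory traffic, which is what makes the density and power targets even conceivable.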

Phase 2 expects proper benchmark systems to be put in place using the technologies developed earlier. Actual chips (or chip sets) with biological scale numbers and densities of synapses and neurons are expected. Large-scale simulation tools are expected to become available, and the sensory environment should be extended to include another sensory modality: further, the entity should be able to function in the environment and should be able to seek resources and survive. Phase 3 expects full-scale chips and hardware (with the requisite design environment) to be available, with additional senses such as touch, and symbol manipulation. Performance at the level of a “cat” should be possible.

The systems are to be tested on a planning and decision task, on a sensory perception task including performance on identification and classification, and on a navigation task in a complex dynamic environment. The expectation is “a collection of cognitive capabilities” comparable with animals like mice and cats. The precise tasks are not specified, and may be defined by the proposing teams themselves, but they do need to be in keeping with the above. More detailed information on the call may be found on the teaming website.


This is a courageous attempt to make a real step forward in Neuromorphic systems technology. The technologies proposed are a long way from market or deployment in real situations, and this should help to advance them in ways in which the current low levels of funding in this area have failed. Hopefully some real development work will be done towards the aims stated above: however, they are very ambitious indeed. The worst case scenario is that the Phase 0 criteria are too difficult, so that there is funding for a short time, which is then pulled.

There were a number of surprises to me in what they were seeking.

Firstly, there was no mention of the use of Neuromorphic systems for sensory perception at all. This surprised me because my own view is that integrating intelligent sensors is absolutely critical to rapid reactions to the environment. If one has to perform a great deal of rapid processing on the outputs of conventional sensors (like video cameras or microphones) in order to extract the kinds of information one is interested in (such as the location of an object, or the fact that something is moving), then the absence of intelligent sensors is going to make the system slower. Additionally, there has been (and still is) quite a lot of ongoing work in this overall area.

Secondly, there seems to me to be a disjunction in the thinking underlying the programme: much of it is about producing the underlying technology and hardware (adaptive synapses, neurons, and so on). Then DARPA discuss the sorts of things they expect: effective vision, survivability, appropriate behaviour in the (sensed) environment. Yet this is a little like expecting the availability of (say) very cheap and small parallel computers to enable, by itself, a new set of capabilities from systems built out of them. Yes, there has been a great deal of work on adaptive neural networks, mostly using simulation, and yes, the capability of performing all these types of simulation directly in hardware would be interesting, but this is not the same as saying that it would directly enable essentially cognitive agents. Even in restricted environments, such as multi-player games, we don’t have cognitive agents which really utilize their (virtual and relatively simple) environments in the way a rat or mouse or cat does. There is a great deal of work required to produce what they seek.

In purely technological terms, the aims are set very high. The speaker (Todd Hylton) noted that he did not believe that current CMOS technologies would be able to achieve what was being requested. In the poster session, there were a number of technologies represented which mixed MEMS and CMOS, and novel 2-terminal resistive components, suggesting that there are some candidate technologies which may be able to step into this area.
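To make the appeal of 2-terminal resistive components concrete, here is a toy sketch of how such a device might act as a synapse: its conductance drifts with the charge passed through it, bounded between limits. This is my own illustrative assumption about the general idea, not a model of any specific device shown at the poster session, and all parameter values are hypothetical.

```python
class ResistiveSynapse:
    """Toy 2-terminal resistive element whose conductance (the 'weight')
    drifts with the charge that has flowed through it. Illustrative only;
    all parameters are hypothetical."""

    def __init__(self, g=1e-3, g_min=1e-4, g_max=1e-2, k=1e-2):
        self.g = g          # current conductance (S)
        self.g_min = g_min  # lower conductance bound
        self.g_max = g_max  # upper conductance bound
        self.k = k          # sensitivity of conductance to charge

    def apply_voltage(self, v, dt):
        """Apply voltage v for dt seconds; return the charge delivered.
        Positive bias raises the conductance, negative bias lowers it,
        clamped to the [g_min, g_max] range."""
        q = self.g * v * dt
        self.g = min(self.g_max, max(self.g_min, self.g + self.k * q))
        return q
```

The attraction of such a device is that the memory (the weight) and the computation (the current it passes) live in the same 2-terminal element, so a dense crossbar of them could in principle reach synapse counts that CMOS-only designs cannot.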

One last comment ...

I went to this meeting straight from the AGI 08 conference in Memphis, which was about artificial general intelligence, and included quite a lot of discussion about the singularity: the point at which man builds machines more intelligent than brains (and then they design yet more intelligent machines, leading to a positive-feedback based acceleration in intelligent machine design, resulting, perhaps, in non-human ultra-intelligent machines). Perhaps as a result, I couldn’t help wondering if DARPA were interested in building a machine which will take them towards the singularity, before anyone else does. One may or may not believe in the singularity (and I for one am not convinced), but if it’s likely to be true at all, I’m sure that DARPA would like to have it before “they” do (whoever “they” may be: one possibility is de Garis’s China-Brain Project developing it for the Chinese).

If this is the case, then they have adopted a slightly odd mode of operation, looking for a hardware-implemented neural system, rather than some sort of hybrid machine (which was, more or less, the view of the AGI 08 meeting). But this might explain the rather strange omission of sensory systems, and the very strong push for extraordinarily large numbers of dynamic adaptive synapses and neurons, and the concentration on silicon implementation. Of course, the aim DARPA seek is rat or perhaps cat-level intelligence (whatever exactly that may mean). Presumably they would also like to be able to tell this machine what to do (but that wasn’t clear from the presentations). Yet if they are correct, and one can build some sort of mammalian-level intelligence (and rats aren’t that stupid) from this proposed technology, it might be possible to extend it to human and above-human levels of intelligence, and by extension then towards the singularity.

12 March 2008