Report on the 2003
WORKSHOP ON NEUROMORPHIC ENGINEERING
Telluride, CO
Sunday, June 29 to Saturday, July 19, 2003
Avis Cohen, Ralph Etienne-Cummings, Tim Horiuchi, Giacomo Indiveri, Shihab Shamma
and
Rodney Douglas, Christof Koch and Terry Sejnowski



Copyright c 2003, G. Indiveri, T. Horiuchi, R. Etienne-Cummings, A. Cohen, S. Shamma, R.
Douglas, C. Koch, and T. Sejnowski.
Permission is granted to copy, distribute and/or modify this document under the terms of the
GNU Free Documentation License, Version 1.1 or any later version published by the Free Software
Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.
A copy of the license is included in the section entitled “GNU Free Documentation License”.
Image appearing on the 2001 Workshop on Neuromorphic Engineering t-shirt
Document edited by Giacomo Indiveri.
Institute of Neuroinformatics, Zurich.
August 2003

Contents

1 Summary
  1.1 Workshop background
  1.2 Workshop Highlights for 2003
  1.3 Workshop Participants
  1.4 Workshop Organization
  1.5 The Computational Neuroscience Group
  1.6 Biological Significance of the Workshop
  1.7 Other Workshop Related Developments
    1.7.1 The Institute for Neuromorphic Engineering
    1.7.2 The RCN and its relationship to the Telluride Workshop
    1.7.3 Science of learning - Progress in previous grant cycle
2 Telluride 2003: the details
  2.1 Applications to Workshops
  2.2 Funding and Commercial Support
  2.3 Local Organization for 2003
  2.4 Setup and Computer Laboratory
  2.5 Workshop Schedule
3 Tutorials
  3.1 Analog VLSI (aVLSI) Tutorial
  3.2 Floating Gate Circuits Tutorial
  3.3 Online learning tutorial
4 The Auditory Project Group
  4.1 Noise Suppression
  4.2 Speech Spotting in a Wide Variety of Acoustic Environments Using Neuromorphically Inspired Computational Algorithms
  4.3 Sound Classification
  4.4 Sound Localization
  4.5 AER EAR
5 The Configurable Neuromorphic Systems Project Group
  5.1 EAER
  5.2 AER competitive network of Integrate-and-Fire neurons
  5.3 Constructing Spatiotemporal Filters with a Reconfigurable Neural Array
  5.4 An AER address remapper
  5.5 Information Transfer in AER
6 The Vision Chips Project Group
  6.1 Vergence Control with a Multi-chip Stereo Disparity System
  6.2 Motion Parallax Depth Perception on a Koala
  6.3 A Serial to Parallel AER Converter
7 The Locomotion and Central Pattern Generator Project Group
  7.1 Neural Control of Biped Locomotion
  7.2 Analysis of a Quadruped Robot’s Gait Behavior
  7.3 Posture and Balance
8 The Roving Robots Project Group
  8.1 Machine vision on a BeoBot
  8.2 Blimp
  8.3 Hero of Alexandria’s mobile robot
  8.4 Cricket phonotaxis in silicon
  8.5 Navigation with Lego Robots
  8.6 Snake- and Worm-Robot
  8.7 Spatial representations from multiple sensory modalities
  8.8 Biomorphic Pendulum
9 The Multimodality Project Group
  9.1 Spatial representations from multiple sensory modalities
  9.2 The cricket’s ears on a barn owl’s head, can it still see?
  9.3 Fusion of Vision and Proprioception
10 The Computing with liquids Project Group
  10.1 Using an AER recurrent chip as a liquid medium
11 The Bias Generators Project Group
12 The Swarm Behavior Project Group
13 Discussion Group Reports
  13.1 The Present and Future of the Telluride Workshop: What are we doing here?
  13.2 Trade-offs Between Detail and Abstraction in Neuromorphic Engineering
  13.3 Bioethics Discussion Group
  13.4 The Future of Neuromorphic VLSI
  13.5 Practical Advice on Testbed Design
  13.6 Teaching Neuroscience With Neuromorphic Devices
14 Personal Reports and Comments
A Workshop participants
B Equipment and hardware facilities
C Workshop Announcement
D GNU Free Documentation License

Chapter 1
Summary
1.1 Workshop background
The Telluride Workshops arose out of a strategic decision by the US National Science Foundation to encourage the interface between Neuroscience and Engineering. At the recommendation of the Brain Working Group at NSF, a one-day meeting was held in Washington on August 27, 1993. The recommendations of this group were for NSF to organize and fund a hands-on workshop that would draw together an international group of scientists interested in exploring multi-disciplinary research and educational opportunities that integrate biology, physics, mathematics, computer science and engineering. This field was dubbed neuromorphic engineering by its founder, Prof. Carver Mead. The main goal of this field is to design, simulate and fabricate artificial neural systems, such as vision systems, head-eye systems, auditory systems, olfactory systems, and autonomous robots, whose architecture and design principles are based on those of biological nervous systems. The first Telluride Workshop was held in 1994, and the workshops have been held annually since then (for reports, see our Telluride homepage at http://www.ini.unizh.ch/telluride).
1.2 Workshop Highlights for 2003
This year’s three-week summer workshop included background lectures (from leading researchers in biological, computational and engineering sciences), practical tutorials (from state-of-the-art practitioners), hands-on projects (involving established researchers and newcomers/students), and special interest discussion groups (proposed by the workshop participants). There were 74 attendees, composed of 8 organizers, 9 administrative and technical staff members, 23 invited speakers, 8 neurobiologists (a joint workshop organized by Terry Sejnowski) and 26 applicants/students (as detailed in Appendix A and at our web-site at http://www.ini.unizh.ch/telluride). Participants were encouraged to become involved in as many of these activities as interest and time permitted. Two daily lectures (1.5 hrs each) covered issues important to the community in general, presenting the established scientific background for each area and providing some of the most recent results. These lectures spanned most of the diverse disciplines comprising neuromorphic engineering. Participants were free to explore any topic, choosing among the 8 workgroups, approximately 25 hands-on projects, and five interest/discussion groups.
Of note, there was a special discussion group on bioethics issues in neuromorphic engineering, where we
attempted to identify sensitive areas given the depth of the research and the sophistication of artificial systems
currently being pursued by the community. We would like to predict potential ethical problems before
they arise. The highlights of the hands-on projects were 1) highly effective noise suppression for spoken speech (Auditory), 2) visual motion detection using distributed recurrent neural circuits (Configurable Neural Systems), 3) implementation of the Ia reflex in a biped using a silicon neural CPG model (Locomotion), 4) fusion of proprioception and vision to implement visually triggered motor reflexes (Multimodality), 5) control of snake and worm robots (Roving Robots) and 6) neural implementation of stereo vergence using disparity energy filters (Vision). All these projects, which were performed by collaborators who met at the Workshop, produced novel results that combine ideas from neuroscience and engineering. We expect these collaborations to continue, and the results to be published.
A shift in the structure of the leadership and administration of the Workshop also happened this year. In previous years, the senior members of the organizing committee were primarily responsible for the set-up and day-to-day operation of the Workshop. This year, the junior committee members, namely Giacomo Indiveri, Timothy Horiuchi and Ralph Etienne-Cummings, have taken a more prominent role in running the Workshop. These individuals will become the new face of the community for the future, while receiving strong support from the senior members of the field. We have reorganized the administration of the Workshop by using our newly formed Institute of Neuromorphic Engineering (INE) to handle all the financial and logistical details. The INE is a non-profit foundation that we hope will become the formal society for the field. By using the INE, we simplify the steps required to execute contracts (various layers of bureaucracy exist within universities), which greatly reduces our financial and logistical overhead. We expect to use the INE to apply directly for additional funding for the Workshop and other related activities.
1.3 Workshop Participants
The participants were drawn from academia (88%), government laboratories (7%), and industry (5%). Given
the youth of the field, it is not surprising that most of the participants are from academia. Now that we are
making inroads into industry, we hope to have a larger industrial contingent in the future. The diverse
backgrounds of the participants span medicine (3%), biology (5%), computational neuroscience (14%), neu-
rophysiology (15%), psychophysics (3%), engineering (40%), computer science (8%), and robotics (12%).
Of these participants, 60% are from US organizations, 30% from Europe, and 10% from the Far East and Oceania; 21% are women and 7% (to our judgment) are minorities. Despite our success, we clearly have to work
harder to recruit more women and minorities to the workshop. We have a new strategy which we believe
should be successful in improving the distribution.
1.4 Workshop Organization
The workshop provided background material in basic cellular and systems neuroscience as well as practical
tutorials covering all aspects of analog VLSI design, simulation, layout, and testing. In addition, sensory-
motor integration and active vision systems were also covered. Working groups were established in robotics
(focused on the use of Koalas, LEGO and custom-designed legged robots), locomotion (covering central pattern generators and sensory-motor interactions), configurable neural systems (covering multi-chip system interconnection and computation with large numbers of silicon neurons), active vision (covering stereo vision and visual motion detection), audition (covering speech detection, localization and classification), proprioception (covering multi-sensory fusion and behaviors), attention and selection, and finally industrial applications (interaction with Iguana Robotics, Inc., and the WowWee subsidiary of Hasbro Toys).
Throughout the three weeks we held morning lectures on all aspects of neuromorphic engineering (see the
workshop schedule in Section 2.5). In the first week, in addition to the morning lectures, a group of tuto-
rial lectures covered basic neurophysiology, the vertebrate auditory system, central pattern generator theory,
transistors, simple circuit design, an introduction to the programming environments needed for the active vision and audition systems (introduced by the various course participants), and an introduction to the Koala and bipedal robots. In the second week participants focused on workgroup projects, ranging from modeling of the auditory cortex, to working with roving robots, to designing asynchronous VLSI circuits. In the third week, participants focused on completing their hands-on projects, collecting data and analyzing their results, while attending lectures and discussion groups and documenting their work. This report contains summaries of all their hard work.
This year we continued our efforts to generate more sophisticated behaving neuromorphic systems by integrating physical and theoretical results from various labs around the world. This Workshop is the
ideal venue at which this type of integration can occur because the researchers bring their research set-ups to
Telluride; it would be a logistical nightmare for labs to organize such a meeting on their own. Furthermore,
the spontaneous collaboration that usually emerges out of this Workshop would not happen without it. We
hold this aspect of our Workshop to be the most important because it provides a convergent location where
leading researchers in the field meet every year.
Projects carried out by subgroups during the workshops included active vision, audition, sonar, olfac-
tion, motor control, central pattern generators, robotics, multi-chip communication, analog VLSI and learn-
ing. In addition to projects related to mimicking neurobiological systems, a number of efforts have been made to develop technology that will facilitate neurobiological research. Examples include the development of a “Physiologist’s Friend” visually responsive cell and discussions of neural amplifier design and RF micro-transceivers for implantation. The results of these projects are summarized in this and previous reports (see
http://www.ini.unizh.ch/telluride/previous/).
In the next chapter we describe the details and logistics of the workshop, ranging from the application
announcement, to the description of the equipment provided and the detailed schedule of the workshop.
1.5 The Computational Neuroscience Group
Again this year, the one-week presence of Terry Sejnowski’s group of invited Computational Neuroscience
speakers (8 speakers this year) made an additional positive impact on the scientific intensity of the workshop,
providing yet another opportunity for the workshop participants to interact with some of the top neuroscien-
tists discussing the cutting edge theory and data. Terry Sejnowski has been organizing an annual week-long
set of lectures on Computational Neuroscience at the Neuromorphic Engineering Workshop since 1999.
Around 12 systems and computational neuroscientists give lectures to the participants of the workshop,
which generate intense discussions on some of the most important themes in neuroscience: sensorimotor integration, memory, attention and reward learning. The interactions between the engineers in the neuromorphic workshop and the neuroscientists have been highly beneficial to both groups: the neuroscientists
benefit from the synthetic approach of the engineers who have confronted many of the same problems that
are faced by animals and have come to understand the nature of these problems, if not their solutions. The
engineers in turn gain insights from biology to design a new generation of robust autonomous systems that
can survive in an uncertain world, something that biological systems have accomplished through millions
of years of evolution. The funding for the neuroscientists is provided by private foundations, including the
Gatsby Foundation, the Swartz Foundation and the Sloan Foundation.
1.6 Biological Significance of the Workshop
The technology transfer from biology to engineering promoted by this Workshop is clearly evident. The
neuromorphic engineering field is built on the premise that engineered systems can benefit greatly by getting
algorithmic inspiration from biological organisms. Consequently, a large amount of funding, primarily from the DoD, has been applied to the development of so-called biomimetic systems. The biological inspiration for most of the biomimetic systems developed so far, however, has been unrecognizable. We contend, therefore,
that the neuromorphic community is the primary forum where biological systems are studied and ultimately
morphed into engineered artificial systems. The term “morphed” is used to emphasize that our community is also interested in the form, as well as the function, of the biological systems. Our place is further solidified by the fact that, in our community, pure biology and pure engineering research coexist in extremely close proximity. This is reflected in the backgrounds of the participants of the Workshop, the diversity of lectures
presented at the Workshop and the collaborations sparked by the Workshop. Hence, we plan to continue the
flow of information from the biological sciences to engineering by fostering strong links between the major
players in these areas, both at the Workshop and at home.
The return from engineering to biology takes multiple paths. Firstly, engineering provides new technol-
ogy that makes new biological experiments possible. For example, there are members of the community who
are interested in micro implantable telemetry systems. These systems can be used to monitor the function of
deep-brain neural circuits over a long period of time. Clearly, such a tool would be invaluable in studying
various functions of the nervous system. Other examples of this type include special apparatuses for do-
ing controlled psychophysical experiments, improved neural recording systems and spike sorting hardware.
Members of the community work on all of these examples.
Secondly, engineering can be used to decouple codependent stimuli such that the impact of the indi-
vidual components on the biological systems can be studied. A classical example of this type of benefit
to biological experiments can be observed in sound localization experiments. Typically, the interaural time
difference (ITD) and the interaural level difference (ILD) are codependent for externally generated sound
stimuli. However, by creating an engineered method of controlling the delays and levels of sounds that are
directly delivered to the ear canal, this codependence can be eliminated and the impact of each of these
acoustical cues on auditory neurophysiology can be studied independently. This technique has been used
by members of the auditory workgroup in psychoacoustic experiments that have revealed some interesting
details of the sound localization process in humans. It will also be used to develop new 3-D audio headphones
that produce a better perception of spatially distributed sounds than is currently available on the market.
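To make the decoupling concrete, the following is a minimal sketch in Python of how such a dichotic stimulus can be generated when the two signals are delivered directly to the ear canals; all parameter values (frequency, sampling rate, the particular ITD and ILD) are illustrative assumptions and do not describe the workgroup's actual apparatus.

    import numpy as np

    def dichotic_stimulus(freq_hz=500.0, dur_s=0.5, fs=44100,
                          itd_s=300e-6, ild_db=6.0):
        # Left/right ear signals with independently chosen interaural time
        # difference (ITD, positive values delay the left ear) and interaural
        # level difference (ILD, here attenuating the left ear).
        t = np.arange(int(dur_s * fs)) / fs
        base = np.sin(2 * np.pi * freq_hz * t)

        # ITD: delay the left channel by a whole number of samples
        shift = int(round(itd_s * fs))
        left = np.concatenate([np.zeros(shift), base])[:len(base)]
        right = base.copy()

        # ILD: attenuate the left channel by ild_db decibels
        left *= 10 ** (-ild_db / 20.0)
        return left, right

    # Example: a 300 microsecond ITD combined with a 6 dB ILD, chosen independently.
    left, right = dichotic_stimulus(itd_s=300e-6, ild_db=6.0)

Because each cue is set by a separate step, any combination of ITD and ILD can be presented, including combinations that never occur for a free-field sound source.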
Thirdly, by using closed-loop feedback between engineered and biological systems, we can potentially
learn new details about how the biology operates. There are many aspects of neurophysiology that we can’t
study unless we can stimulate and record from neural wetware with the appropriate dynamical signals. This
is particularly true for central pattern generators in lamprey spinal cords. By creating an artificial spinal section, we hope to close the loop between silicon models of neural circuits and actual sections of the spinal
cord. This system will allow us to study the effect of spinal injuries on spinal signal processing and CPG
generation. Hence, an engineered system will afford details of the neurophysiology of the spinal cord to be
studied.
Fourthly, we can develop engineered models that can be used to investigate the structure of the biological
systems. Clearly this is the most obvious example of “technology return” from engineering to biology. Mod-
els of neurobiological circuits are being developed, either in hardware or software, at an alarming rate in this
community. The predictive powers of these models can be tested against their biological counterparts, thus potentially providing new ideas on how the biology is organized. There is a constant flow of information between
biologists and engineers in developing robust models. This flow is strongly promoted by the Workshop and
all the activities that happen in the community outside the Workshop.
Lastly, when biologically inspired engineered systems make their way to the marketplace, new research
funding will be poured into biology in order to understand new systems that will eventually be used for
new products. Clearly, there is a positive feedback loop between biologically inspired products and basic
research in biology, which is fueled by the commercial success of these products. To this end, the Workshop
encourages commercialization of neuromorphic ideas by involving industrial researchers in the workgroups.
This year, we had invited lectures and workgroup leaders from IBM, Hasbro Toys and Iguana Robotics, Inc.
In the past, we have had participants from venture capitalist groups, large corporations such as Intel, HP and
AT&T, and government and private research labs. We plan to continue and expand the role of industry in
this Workshop.
1.7 Other Workshop Related Developments
1.7.1 The Institute for Neuromorphic Engineering
An exciting development for our community occurred this year with the formation, incorporation and inau-
guration of the Institute for Neuromorphic Engineering (INE), our “Institute without Walls”. It is a non-profit
foundation, and will manage the Workshop and a number of activities designed to promote the neuromorphic
engineering field. The mission statement of the INE is:
The Institute of Neuromorphic Engineering Foundation, henceforth the INE Foundation, shall be a non-profit corporation concerned with fostering research exchange on neuromorphic engineering in its biological, technological, mathematical and theoretical aspects. The areas of interest are broadly interpreted as those aspects of neuromorphic engineering which benefit from a combined view of the biological, physical, mathematical and engineering sciences.
The INE is also managing the NSF-sponsored Research Collaboration Network (RCN) to support the continuation of the work started at Telluride. We find that, as the field progresses, the complexity of the projects is such that only a small part of a research question can be addressed at the Workshop. The RCN provides support to help the researchers (who are usually students from disparate labs) travel to each other’s labs to continue the work. As can be expected, we are extremely happy with this additional funding, separate from the Workshop’s, which supports the continued development of the field through year-long projects. The INE has two executive meetings a year, one in Telluride during the workshop and one in the November/December timeframe, either at the Society for Neuroscience meeting or the Neural Information Processing Systems meeting. We have sponsored, and continue to sponsor, multi-institution collaborative student projects, workshops at various large-scale conferences (such as IEEE ISCAS and NIPS), and lecturer exchanges between institutions.
1.7.2 The RCN and its relationship to the Telluride Workshop
There are presently two NSF grants funding Neuromorphic Engineering, one for the Telluride Workshop, and
one RCN grant to fund a research coordination network and the formation of an Institute for Neuromorphic
Engineering.
The Telluride Workshop has been held now for ten years. It has had funding from several sources, but its primary source has been the NSF. Every year, as is documented in each annual report, there are on the order of 60 people who attend the meeting. The attendees include 1) the organizers, 2) staff
members who are typically students of the organizers, 3) “regulars”, or people who attend regularly and offer
special courses or projects for the group, 4) invitees, or individuals who present lectures to the group and stay
from one to three weeks, and finally, 5) the computational neuroscience group associated with Dr. Terrence
Sejnowski, who are funded and invited by Dr. Sejnowski.
The workshop is an intensive training experience, with lectures in the mornings and work on projects in the afternoons and evenings. The projects are typically examples of research efforts that flow from the material
presented in the lectures. Students are also encouraged to bring their own projects to share and/or to work
on while at the workshop. Students are encouraged to work on no more than two projects.
The RCN award is to fund the development and coordination of a research network in neuromorphic
engineering. We are now entering the second year of funding for the RCN. It has been granted to fos-
ter research collaborations in neuromorphic engineering, to provide educational outreach and to develop a
website resource. To foster research, funds are made available for individuals to travel to laboratories of
neuromorphic engineers. A major use of these funds is to allow workshop participants who have met researchers at Telluride, and who wish to begin or continue collaborative research with them, to visit those researchers’ laboratories. This
extended research experience can cement the participants’ training beyond that of the three week interaction.
The funds also permit research begun at the workshop to mature into fundable projects through the accumu-
lation of preliminary data. The educational outreach allows participants to invite lecturers they heard at the
workshop back to their home institution for tutorials or introductory lectures to their departments and col-
leagues. The website resource will provide a repository for circuit designs, preprints and lecture notes from
the workshop. Thus, the RCN funds extend the experience and the opportunities for training the participants
at the Telluride workshop.
The RCN funds have not been used for funding any component of the workshop. While most of the funds
for collaborative research have been for projects related to individuals from the workshop, some grants have gone to unrelated individuals. These are requests for independent research projects from students who
have heard about the work and want to try something or have begun something with one of the laboratories
associated with neuromorphic engineering, often from among those that have attended the workshop, but not
always. Some of the outreach funded has been for activities that were not imagined when we applied for the
RCN (e.g., a newsletter organized and written by a writer who attended the workshop). Again, grants can and
have been made to individuals who are unassociated with either the workshop or with any of the laboratories
associated with the workshop. Thus, the RCN has allowed us to enlarge the scope of the community beyond
that of the workshop.
Another use we intend to make of these funds is to broaden the participation in the workshop for coming
years. We have begun to organize small workshops at sites and times unrelated to Telluride. Individuals who are outside the group of organizers, but who have attended the workshop, will organize these small workshops.
The plan is for the small workshops to be a way to recruit new faces and ideas for the Telluride workshop.
They will be short meetings, 2-4 days long, and not the intense time commitment that Telluride represents.
They will also expose new people to the field of neuromorphic engineering in a small and intimate context.
Such people would then be invited to attend the Telluride Workshop the year after the small meeting.
Thus, in summary, the RCN serves to enlarge the network of people in the community. The Telluride Workshop expands the knowledge base and brings many new young people into the field. It also exposes
them to research projects that the RCN then helps to develop further. The RCN also serves as a source
for outreach beyond the Telluride Workshop efforts, and allows student participants to bring Neuromorphic
Engineering back to their respective campuses.
1.7.3 Science of learning - Progress in previous grant cycle
The adaptiveness and ecological competence of mammals reflects to a significant degree the extraordinary
penchant of this class for ”procedural learning”. Procedural learning allows astonishing performance gains
in visual, auditory, somatosensory, or motor tasks, and is underpinned by the surprising plasticity of adult
cortex (e.g., Recanzone, Schreiner, Merzenich, 1993, J Neurosci 13: 87-103; Karni, Bertini, 1997, Curr
Opin Neurobiol 7: 530-5; Fahle, Poggio, 2002, Perceptual learning, MIT Press). The potential danger is, of
course, that inappropriate and counter-productive plasticity will be triggered by ongoing sensory stimulation
or motor activity. Any system capable of procedural learning must therefore come equipped with highly
effective guards against ’mislearning’.
In mammals, the mechanisms that guard against ’mislearning’ are thought to include selective atten-
tion and possibly memory rehearsal during sleep (e.g., Karni, Bertini, 1997; Stickgold et al., 2001, Science
294: 1052-7; Fahle, Poggio, 2002). Selective attention is thought to differentiate between task-relevant and
-irrelevant stimuli, strengthening responses to the former and weakening responses to the latter throughout
the hierarchy of cortical areas (e.g., Desimone, Duncan, 1995, Annu Rev Neurosci 18: 193-222; Rolls, Deco,
2002, Computational Neuroscience of Vision, Oxford UP). As a consequence of this differential responsive-
ness, it is conceivable that synaptic weights are altered preferentially within the neural representation of
task-relevant stimuli. However, detailed computational studies paint a more complex picture (Zenger, Sagi,
2002, pp 177-196, in Fahle, Poggio, 2002; Hochstein, Ahissar, 2002, Neuron 36:791-804), making this view
almost certainly an oversimplification.
A theoretical understanding of plasticity — and the mechanisms for suitably channeling plasticity —
faces formidable obstacles. Charting the dynamic regimes of recurrently connected, spiking networks is al-
ready a thorny theoretical problem, and this is even more true when activity is allowed to alter connectivity via plastic synapses (Amit, 1992, Modelling Brain Function: The world of attractor neural networks, Cambridge UP; Abbott, Nelson, 2000, Synaptic plasticity: taming the beast. Nat Neuroscience 3: 1). Recent advances in the analysis of realistic spiking networks with plastic synapses rely on a combination of analytical methods (dynamic mean field approaches) and simulation and/or hardware implementation (Dayan, Abbott, 2001, Theoretical Neuroscience, MIT Press; Gerstner, Kistler, 2001, Spiking Neuron Models, Cambridge UP; Mattia, Del Giudice, 2002, Population dynamics of interacting spiking neurons, Phys Rev E: Stat Nonlin Soft Matt Phys 66).
At this point, the science of learning makes contact with neuromorphic engineering, for the neuromorph’s goal is to understand spiking networks that learn well enough that he or she can actually build such networks. In the Telluride Workshop, we have several groups who work to develop theories of saliency and
attention. Laurent Itti, at the University of Southern California, a former post-doctoral fellow with Christof
Koch, and Jochen Braun, University of Plymouth, U.K., have developed neural network models that can
pick out salient features in even a highly complex environment. They incorporate converging experimental
evidence that attention modulates, top-down, early sensory processing. Both also do psychophysical ex-
periments on human subjects to further refine the available data. Based on a detailed model of these types
of data and early visual processing with its prediction of attentional modulation, they recently proposed that
attention activates a winner-take-all (WTA) competition among early sensory neurons. It is this type of WTA network, developed by Giacomo Indiveri, that has been implemented in silicon and put on our robots in the
projects described in the final report. This effort on attention, in summary then, begins from the experimen-
tal data, builds network simulations, and finally, implements the theories on chips and places those chips on
autonomous robots to test their efficacy in a variety of environments.
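To make the WTA idea concrete, the following is a minimal rate-based sketch in Python; the network size, gains and dynamics are illustrative assumptions and are not meant to reproduce the silicon circuit used on the robots.

    import numpy as np

    def rate_wta(saliency, steps=200, self_excitation=1.2, inhibition=0.5, dt=0.1):
        # Each unit excites itself and inhibits all others through a shared
        # global inhibitory signal; the unit with the strongest input wins.
        s = np.array(saliency, dtype=float)
        r = s.copy()                                     # firing rates, initialized to the input
        for _ in range(steps):
            global_inhibition = inhibition * r.sum()
            drive = s + self_excitation * r - global_inhibition
            r = r + dt * (-r + np.maximum(drive, 0.0))   # rectified rate dynamics
        return int(np.argmax(r)), r

    # Hypothetical saliency map with four locations; location 1 should win.
    winner, rates = rate_wta([0.3, 0.9, 0.5, 0.2])
    print("winner:", winner)

In the silicon implementation the same competition is carried out by analog circuits rather than by iterating difference equations, but the selection principle is the same.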
Most recently, as a spin-off from the Workshop, an EU project has been funded to combine
spiking networks for attention with spiking networks for learning. Indiveri and his colleagues at the Institute of Neuroinformatics in Zurich have implemented spiking winner-take-all networks in analog VLSI, applied such networks to sensory-motor systems, and are developing software and hardware tools for simulating and implementing large VLSI networks of spiking neurons, with analog circuits for synapses and integrate-and-fire neurons suitable for winner-take-all networks and selective attention systems.
The EU-funded project ALAVLSI (“Attend-to-learn and learn-to-attend with neuromorphic, analogue
VLSI”) is the brainchild of Telluride 2001, which brought together Braun and Indiveri, two major players in
the project. The goal of the project, which started in October 2002, is to combine an analogue VLSI imple-
mentation of selective attention with an analogue VLSI implementation of associative learning to develop
a general architecture for perceptual learning. To verify the functionality of this architecture, performance
on categorizing (i) one of several superimposed patterns of visual motion and (ii) one of several simultane-
ous samples of human speech and/or animal vocalizations will be established. The salient aspects of this
innovative project can be summarized as follows:
• Inspired by current understanding of the functional modularity of human perceptual learning. Specif-
ically, it builds on cortical models of stimulus saliency and selective attention, in which attention acts
as a biasing factor in the competition between superimposed/simultaneous sensory inputs.
• Focused on the mutually beneficial interaction between attention and learning. Explores the possibility
that a simple and coherent architecture accounts for many behavioral manifestations of this interaction
(top-down attention, attend-to-learn, learn-to-attend, multistable perception).
• Takes advantage of deep functional analogies between visual and auditory modalities (without at-
tempting to fuse them).
• Proposes a biologically inspired architecture for representing real-world stimuli in an artificial system.
• Employs neuromorphic VLSI technology to implement networks of spiking neurons interacting via
fixed and plastic synapses.
• Implements a biologically realistic many-to-many connectivity with the help of an AER communication infrastructure (a minimal sketch of this style of event routing follows this list).
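To illustrate the last point, here is a minimal Python sketch of how an AER (address-event representation) infrastructure can realize many-to-many connectivity: each spike is transmitted as the address of its source neuron, and a mapper looks that address up in a routing table to produce events for every destination synapse. The class name, addresses and weights below are hypothetical and do not describe the project's actual hardware.

    from collections import defaultdict

    class AERMapper:
        # Toy AER mapper: routes each incoming address-event to a list of
        # (destination neuron, synaptic weight) pairs stored in a routing table.
        def __init__(self):
            self.table = defaultdict(list)    # source address -> [(dest, weight), ...]

        def connect(self, src, dest, weight=1.0):
            self.table[src].append((dest, weight))

        def route(self, src):
            # A spike (address-event) from 'src' fans out to all of its targets.
            return list(self.table[src])

    # Hypothetical usage: neuron 3 projects to neurons 7 (excitatory) and 12 (inhibitory).
    mapper = AERMapper()
    mapper.connect(3, 7, weight=0.5)
    mapper.connect(3, 12, weight=-0.2)
    print(mapper.route(3))                    # [(7, 0.5), (12, -0.2)]

Fixed synapses correspond to table entries that never change, while plastic synapses correspond to entries whose weights are updated by a learning rule.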
Coming from a different direction, Brian Scassellati, Yale University, has been working on robots that
interact with humans to study how children acquire social skills. One of the major focuses of Scassellati’s
work has been on how children develop social skills and an understanding of other people. Children gradually
acquire many skills that allow them to learn from their interactions with adults, such as responding to pointing
gestures, recognizing what someone else is looking at, and representing that other people have beliefs, goals,
and desires that differ from those of the child. These abilities have often been called a “theory of mind”
and are believed to be critical for language acquisition, for self-recognition, and in the development of
imaginative play. Computational models of these skills are being developed, implemented, and tested on
robotic platforms currently under construction. This work will ultimately be used to teach autistic children
social skills. His previous robots have performed tasks such as imitating human arm gestures, distinguishing
animate from inanimate stimuli based on self-propelled motion criteria, and learning to reach for visual
targets.
In addition to being instrumental in the conception of this project, Telluride is the most important forum
of exchange for ideas and information relating to similar projects. It would be difficult to conceive of such
ambitious projects without the neuromorphic community gathered at Telluride.

Chapter 2
Telluride 2003: the details
This year the workshop took place in the Telluride Elementary School (the same location that was used for
the workshops held from 1994 to 1999, and in 2). As in previous years, we occupied four rooms
for the lecture and project activities. As detailed in the workshop schedule, we had many interesting group lectures in the morning, special-topic lectures in the afternoon, and occasionally more group lectures in the evenings. The workshop was very demanding; when participants were not busy following lectures, they were busy with the various project groups or discussion groups. In the following sections we describe in detail all of the processes that led to the completion of the workshop, starting from the description of the application process and ending with the description of all the activities of the project workgroups.
2.1 Applications to Workshops
We announced the workshop in January 2003 via our existing Telluride home-page on the World Wide Web, via email to previous workshop participants (to post at their universities), and via various mailing lists and Usenet newsgroups. We specifically asked our colleagues in Switzerland, the United Kingdom, France, Italy and Germany, where we know of several active research groups, to post the workshop announcement for their students. The text of the announcement is listed in Appendix C.
The announcement was submitted to a long list of mailing lists and newsgroups, including alt.consciousness, bionet.neuroscience, comp.ai.alife, comp.ai.neural-nets, comp.robotics.misc, sci.cognitive, ethz.general, unizh.general, connect, and annrules@fit.qut.edu.au.
In the end, we received 68 applications, of which we selected 26 as participants for the workshop. We also invited 23 key speakers from academia, government facilities and industry to contribute presentations as well as participate in the workshop. As in previous years, the number of well-qualified applicants was
high and many of the applicants that were not accepted would have made good participants. The selection of
participants for the workshop was made by the current and founding directors of the workshop (S. Shamma,
A. Cohen, R. Etienne-Cummings, T. Horiuchi, G. Indiveri, R. Douglas, C. Koch and T. Sejnowski), all of
whom received copies of the full applications. The selection process was carried out by means of a “voting”
web-page, accessible only to the organizers.
We selected participants who had demonstrated a practical interest in neuromorphic engineering; had some background in psychophysics and/or neurophysiology; could contribute to the teaching and the practi-
cals at the workshop; could bring demonstrations to the meeting; were in a position to influence the course
of neuromorphic engineering at their institutes; or were very promising beginners.
We were very interested in increasing the participation of women and under-represented minorities in the workshop, and actively encouraged applicants from companies.
The final profile of the workshop was satisfactory (see Appendix A). The majority of participants were
advanced graduate students, post-doctoral fellows or young faculty; seven participants came from non-
academic research institutions. Of the 49 selected participants (not counting organizers), 29 were from US
institutions, with the remainder coming from Switzerland, Spain, Japan, France, Australia, Austria, Israel,
Belgium, Italy, Hong Kong, and the UK. Eleven of the participants were women.
2.2 Funding and Commercial Support
Again this year we asked participants to pay a registration fee of $275 in order to reduce the workshop costs. The registration fee covered mainly the daily running expenses of the workshop. The major workshop costs (including student travel reimbursements, equipment shipping reimbursements, accommodation expenses, etc.) were funded by the following sources. The total outside funding was $107,500:
• The U.S. National Science Foundation
• The U.S. Defense Advanced Research Projects Agency
• The Engineering Research Center, Caltech
• The Whitaker Foundation
We would also like to thank the following companies for their support:
• Tanner Research - for providing the VLSI layout and verification tools L-Edit and LVS for this workshop.
• The MathWorks, Inc. - for providing the software package MATLAB.
• K-Team - for providing and supporting the Khepera and Koala robots.
• NeTraverse - for providing Win4Lin for this workshop.
• LEGO - for providing the MindStorms and VisionCommand kits.
2.3 Local Organization for 2003
As was the case last year, much of the workshop organization was handled by an interactive webpage that allowed
staff and participants to “log-in” and select their accommodations (e.g., room type, roommates), enroll in
various workshops, inform the organizers of any hardware they planned to bring, and request software pack-
ages that they needed. This webpage also provided full accounting details (actual and estimated expenses)
to organizers at all times. Information about each participant’s expenses and what fraction of these expenses the workshop would reimburse was included. The exact amount of funding was not expected to be available until after the workshop, at which time a quick, accurate assessment of our expenses would be necessary to speed up the reimbursement process.
The information we gathered through the webpage before the beginning of the workshop allowed us to
better organize the lectures and tutorials and to improve on the housing arrangements (for example, par-
ticipants had a chance to choose their condo-mates and condominium locations even before arriving in
Telluride). All of the housing arrangements were carried out in collaboration with the Telluride Summer
Research Center (TSRC), using the workshop’s interactive web-pages. By obtaining longer-term contracts
with local condominiums, we were able to provide adequate housing at reasonable rates.
The hour-to-hour scheduling of talks and events was also coordinated via the web site. As in previous years, we used this web site to let interested parties outside the workshop peek in at our progress; we also featured several webcams pointed at the projects room.
As we did last year, we occupied two rooms of the older wing of the Elementary School and two rooms in the newly renovated expansion to the Elementary School (the historic Telluride High School). The four rooms were: a computer and informal meeting room, a lecture and discussion group room, and two project and tutorial rooms. This year’s setup was smoother than last year’s because we had the same rooms as last year. As a continuation of last year’s trend towards participant-supplied computing power (laptops), we have been shifting our efforts towards providing flexible networking options (more hubs and wireless access points) and better common laboratory equipment.
This year, we had several of the staff coordinate the daily refreshments following lectures and assist with photocopying, buying supplies, coordinating several of our evening events (e.g., the BBQ) and local public relations (our 4th of July parade entry). Two of our co-directors (Christof Koch and Avis Cohen) are now on the board of the TSRC, and were able to meet with the board this summer. This service gives us a greater understanding of the issues facing the school, and provides access for us to give them input on the use of funds for the school (e.g., we were able to use one of their LCD projectors for the lectures). As described above, in addition to the main workshop, we had a separate Computational Neuroscience discussion group invited and coordinated by Terrence Sejnowski of the Salk Institute. Its members intermingle with our student group and contribute advanced seminars in their fields of research.
As in previous years, we interacted with the local community by giving two public talks aimed at the scientifically curious lay person and by marching under the banner of “Neuromorphic Engineering” in the local 4th-of-July parade. This year we did not compete for awards, but did make a splash with over a hundred BioBugs contributed to the workshop by Mark Tilden, the inventor of this robotic toy. Our wacky antics in representing robotics and the neurosciences continue to be popular with the local public.
2.4 Setup and Computer Laboratory
The software/hardware setup lasted from Thursday, June 26 to Sunday, June 29. The staff provided full sup-
port for a network of approximately 10 computers and wireless access for laptops belonging to participants.
This support included various internet services such as remote logins, file transfers, printing, electronic mail,
and a world wide web server in addition to the technical support required to interface specialized hard-
ware (such as Koala and Khepera robots, motorized pan-tilt units, chip testing equipment, custom chips and
boards, PIC microcontroller programmers, etc.).
The computers were divided into three usage areas. The first was a general computer lab for running simulations, designing circuits, running demos, writing papers, or general internet access such as web browsing. Another set of computers was used to control robots and to collect and process data from them. The third set of computers was used as VLSI test stations.
Throughout the entire course, we supported workstations, robots, oscilloscopes and other test devices
brought by all participants.
A World Wide Web site describing this year’s workshop, including the schedule, the list of participants, its aims, and funding sources can be accessed at http://www.ini.unizh.ch/telluride (we strongly recommend that the interested reader scan this homepage; it contains many photos from the workshop, reports, lists of all participants, etc.).
The computer lab proved to be very constructive since it not only allowed participants to demonstrate and instruct others about their software, but also offered the opportunity for others to make suggestions and improvements.
Computers, chip-testing equipment, and supplies were provided mainly by GeorgiaTech, the University of Maryland, and the Institute of Neuroinformatics in Zürich. This equipment was loaded and carried to Telluride in two rental trucks, or shipped from Zürich.
2.5 Workshop Schedule
The activities in the workshop were divided into three categories: lectures, tutorials and project workgroups.
The lectures were attended by all of the participants. Lectures were presented at a sufficiently elementary level to be understood by everyone, but were nevertheless comprehensive. The first series of lectures covered general and introductory topics, whereas the later lectures covered state-of-the-art research in the field of neuromorphic engineering. In the past it was found that two 1.5-hour lectures in the morning session, rather than three one-hour lectures, were better for covering a topic in depth, allowing adequate time for questions and discussion, and limiting overload.
The afternoon sessions consisted mainly of tutorials and workgroup projects, whereas the evenings were
used for the discussion group meetings (which would often continue late into the night).
Sundays were left free for participants to enjoy the Telluride scenery. Typically, participants would go
hiking. This was a valuable opportunity for people to discuss science in a more informal atmosphere and
catch up on the various projects being carried out by the other participants.
The schedule of the workshop activities was as follows:
Sunday 29 June
Arrive in Telluride
Condo Check-In
• Evening:
17:00: Welcome reception @ elementary school
19:00: Tour of the workshop facilities (@ elementary school)
Monday 30 June
Hosts: Giacomo and Timmer
• Morning:
9:00 - 10:00: Giacomo and Timmer
outline the workshop
11:00 - 12:30: Christof Koch - “Computation and the Single Neuron” (timmer’s title for Christof)
• Afternoon:
14:00 - 15:00: Workgroup and Discussion Group Advertisements
15:00 - 16:00: Rodney Douglas - “Artificial and Natural Computation Tutorial”
16:00 - 17:00: Shihab Shamma - “Auditory and Visual Computation Tutorial”
17:00 - 18:00: Avis Cohen - “Motor Tutorial”
• Evening:
19:30 - 20:30: Introductions by applicants (oral and gestural communication)
20:30 - 22:00: John Allman - “Neurons of Frontoinsular and Anterior Cingulate Cortex Related to Risk,
Reward and Error” (Computational Neuroscience Group)
Tuesday 1 July
Hosts:
• Morning:
9:00 - 10:30: Avis Cohen - “CPGs and Locomotion”
11:00 - 12:30: Harvey Karten - “Vision and Evolution” (Computational Neuroscience Group)
• Afternoon:
14:00 - 15:30: Tobi Delbruck - “Misha and her Stereo Chip and Why You Should Care About it”
16:00 - 18:00: Discussion Group Presentations
17:00: Floating Gate Workgroup / aVLSI Workgroup
• Evening:
19:00: BeoBot Workgroup
20:00 - 21:30: Bob Desimone - “” (Computational Neuroscience Group)
21:30 - 22:00: CNS Workgroup Initial Meeting
Wednesday 2 July
Hosts:
• Morning:
9:00 - 10:30: Laurent Itti - “Computational Modeling of Visual Attention and its Applicability to
Neuromorphic Systems”
11:00 - 12:30: Terry Sejnowski - “How is Color Represented in the Visual Cortex” (Computational
Neuroscience Group)
• Afternoon:
12:30: Multi-modality Workgroup
14:00 - 15:30: Steve Zucker - “” (Computational Neuroscience Group)
15:45 - 16:45: Nici Schraudolph - “Online Learning Tutorial I: Statistics on the Fly”
17:00: BBQ
• Evening:
19:00: Bias Generator Workgroup
19:00: BeoBot Workgroup
21:30 - 22:00: Vision Chips Workgroup
20:00 - 21:30: Bruce McNaughton - “” (Computational Neuroscience Group)
Thursday 3 July
Hosts:
• Morning:
9:00 - 10:30: Shihab Shamma - “Rapid Behaviorally-Dependent Plasticity in Auditory Cortex”
11:00 - 12:30: Wolfram Schultz - “Outcome Coding in Brain Reward Centers” (Computational
Neuroscience Group)
• Afternoon:
14:00: CNS Workgroup: Kwabena Boahen - “What are Address-Events?”
15:00: Audio Workgroup
16:00: Locomotion Workgroup
17:00: Roving Robot Workgroup
• Evening:
18:00: aVLSI Workgroup
18:00: Floating Gate Workgroup
19:00: BeoBot Workgroup
20:00 - 21:30: Barry Richmond - “” (Computational Neuroscience Group)
Friday 4 July - FREE DAY FOR THE NEUROMORPHS!
Hosts:
• Morning:
PARADE!!
• Afternoon:
• Evening:
Saturday 5 July
Hosts:
• Morning:
9:00 - 10:30: Bert Shi - “Orientation Hypercolumns in Visual Cortex: Multiple and Single-Chip
Implementations”
11:00 - 12:30: Kwabena Boahen - “Wiring Feature Maps by Following Gradients: Silicon and
Mathematical Models”
• Afternoon:
13:00: Bias Generator Workgroup
14:00: Vision Chips Workgroup
14:00: Locomotion Workgroup
15:00: Audio Workgroup
16:00 - 19:00: Hero of Alexandria’s Mobile Robot Workgroup
17:00 - 18:00: aVLSI Workgroup Meeting
• Evening:
19:00 - 20:00: CNS Workgroup: Project Demos
20:00 - 21:30: Michaele Fee - “Bird Song Learning”
Sunday 6 July
• Free Day - Go Hiking! - Work on Projects!
Monday 7 July
Hosts:
• Morning:
9:00 - 10:30: Barbara Webb - “Biorobots: not just cricket?”
11:00 - 12:30: David Anderson - “Prototyping Cooperative Analog-Digital Signal Processing Systems
for Auditory Applications”
• Afternoon:
14:00: CNS Workgroup: Kwabena Boahen - “AER Transmitters and Receivers”
15:00: Vision Chips Workgroup: Tobi Delbruck - “Phototransduction in Silicon”
15:00 - 16:00: Bioethics Discussion Group
16:00: Locomotion Workgroup
17:00: BioBug Olympics Workgroup
• Evening:
18:00: aVLSI Workgroup
18:00: Floating Gate Workgroup
19:30: Bioethics Discussion Group
20:00: Liquid Computing Workgroup
Tuesday 8 July
Hosts:
• Morning:
9:00 - 10:30: Rodney Douglas - “Tutorial: Basic Organization and Development of Cortex”
11:00 - 12:30: Chuck Higgins - “The Neuronal Basis of Dipteran Elementary Motion Detection”
• Afternoon:
14:00: Online Learning Workgroup
15:00: Audio Workgroup
15:00: Locomotion Workgroup
16:00: Multi-modality Workgroup
17:00: Roving Robots Workgroup
• Evening:
18:00: Bias Generators Workgroup
19:00: aVLSI Tutorial Workgroup
20:00 - 21:00: Practical Advice on Testbed Design Discussion Group
20:00: Locomotion Workgroup
Wednesday 9 July
Hosts:
• Morning:
9:00 - 10:30: Tony Lewis - “Visuomotor Coordination in Humans and Machines”
11:00 - 12:30: Ralph Etienne-Cummings - “Applying Ideas from Visual Image Processing to Sonar
Signal Processing: Making Every Ping Count”
• Afternoon:
14:00: CNS Workgroup: Giacomo Indiveri - “PCI-AER Board With Multiple Transmitters and Receivers”
15:00: Vision Chips Workgroup: Kwabena Boahen - “A Retinomorphic Chip With Four Ganglion-Cell
Types”
16:00: Locomotion Workgroup
17:00: BBQ
• Evening:
19:00: aVLSI Workgroup
19:00: Floating Gate Workgroup
20:00: Teaching Neuroscience Discussion Group
Thursday 10 July
Hosts:
• Morning:
9:00 - 10:30: Ania Mitros - “Floating Gates: Probability Estimation, Mismatch Reduction, and Machine
Learning”
11:00 - 12:30: Bernabe Linares - “Some Interesting Low Power Techniques for Analog VLSI
Neuromorphic Systems”
• Afternoon:
14:00: Online Learning Workgroup
15:00: Audio Workgroup
16:00: Multi-modality Workgroup
17:00: Roving Robot Workgroup
• Evening:
18:00: aVLSI Workgroup
Friday 11 July
Hosts:
• Morning:
9:00 - 10:30: Mark Tilden - “Telluridestine: Brief History and Update on the Second Neuromorphic
Mass-Market Monster”
11:00 - 12:30: Malcolm Slaney - “Computational Audition: Correlograms, CASA and Clustering”
• Afternoon:
14:00: CNS Workgroup: Paul Merolla - “Word-Serial AER for Multichip Systems”
15:00: Vision Workgroup: Andre Van Schaik - “Marble Madness: Designing the Logitech Trackball Chip”
16:00: Locomotion Workgroup
• Evening:
18:00: aVLSI Workgroup
Saturday 12 July
Hosts:
• Morning:
9:00 - 10:30: Andre van Schaik - “Sound Localisation: Psychophysics and Applications”
11:00 - 12:30: Mitra Hartmann - “Sensory Acquisition with Rat Whiskers”
• Afternoon:
14:00 - 17:00: BioBug Olympics Workgroup
16:00 - 19:00: Hero of Alexandria’s Mobile Robot Workgroup
• Evening:
18:00: Bernabe Linares - “Some Interesting Low Power Techniques for Analog VLSI Neuromorphic
Systems - Part 2”
20:00: Round Table Discussion: Tradeoffs Between Detail and Abstraction in Neuromorphic Engineering
Sunday 13 July
• Free Day!
Monday 14 July
Hosts:
• Morning:
9:00 - 10:30: Tim Pearce - “Chemosensory Systems on the Brain”
11:00 - 12:30: Hiroshi Kimura - “Biologically Inspired Legged Locomotion Control of Robots”
• Afternoon:
14:00: CNS Workgroup
15:00: Vision Chips Workgroup: Ralph Etienne-Cummings - “Focal-Plane Image Processing Using
Computation on Read-Out”
16:00: Locomotion Workgroup
• Evening:
18:00: aVLSI Workgroup
19:30: Bioethics Discussion Group
Tuesday 15 July
Hosts:
• Morning:
9:00 - 10:30: Robert Legenstein - “A Model for Real-Time Computation in Generic Neural Microcircuits”
11:00 - 12:30: Steve Greenberg - “A Multi-modal, Syllable-centric Framework for Spoken Language”
• Afternoon:
14:00: Online Learning Workgroup
15:00: Audio Workgroup: David Anderson - “Design of a Hearing Aid”
16:00: Multi-modality Workgroup
17:00: Roving Robots Workgroup
• Evening:
18:00: Bias Generator Workgroup
19:00: Future of Neuromorphic VLSI Discussion Group
20:00 - 21:30: Kevan Martin - “”
Wednesday 16 July
Hosts:
• Morning:
9:00 - 10:30: Timmer Horiuchi - “Microchipoptera: Analog VLSI Sonar Doodads”
11:00 - 12:30: Brian Scassellati - “Using Anthropomorphic Robots to Study Human Social
Development”
• Afternoon:
14:00: CNS Workgroup
15:00: Vision Chips Workgroup: Participant Talks
∗ 15:00 - 15:20: Ning Qian - “A Physiological Model of Perceptual Learning in Orientation
Discrimination”
∗ 15:30 - 16:00: Bernabe Linares -
“EU Project CAVIAR on Multi-layer AER Vision”
16:00: Locomotion Workgroup
17:00: BBQ
• Evening:
19:00: aVLSI Workgroup
20:00: Discussion Group: The Present and Future of the Telluride Workshop - What are we doing here?
Thursday 17 July
Hosts:
• Morning:
9:00 - 10:30: Jochen Braun - “Attentional Changes to Visual Representations”
11:00 - 12:30: Giacomo Indiveri - “Winner Take All Circuits for Models of Selective Attention”
• Afternoon:
14:00: Online Learning Workgroup
15:00: Audio Workgroup
16:00: Multi-modality Workgroup
17:00: Roving Robots Workgroup
• Evening:
18:30: Bioethics Discussion Group (going out to dinner, meet by the coffee room)
Friday 18 July
Hosts:
• Morning:
10:00 - 12:00: Workgroup Presentations
• Afternoon:
12:30 - 13:00: nEUro-IT Network: US-EU interaction
∗ A European network at the interface between cognitive neuroscience and information technology
14:00 - 16:00: Workgroup Presentations
• Evening:
19:30...: Final Dinner! Skits, food, merriment!
Saturday 19 July
Hosts:
• Morning:
9:00 - 12:30: PACKING AND LOADING PARTY
• Afternoon:
13:30: cleanup / vacuuming
• Evening:
Sunday 20 July
Hosts:
• Morning:
Before 10:00: Check out of Condo
Chapter 3
Tutorials
3.1 Analog VLSI (aVLSI) Tutorial
Leaders Giacomo Indiveri and Elisabetta Chicca
The purpose of the aVLSI tutorial was to give an introduction to the operation of analog integrated
circuits, their underlying physics, and the technology used for manufacturing such circuits. Furthermore,
participants were introduced to the methods and tools that are used to design such circuits. The tutorial
was mainly directed at people with little or no background in electronics, but everybody was welcome to
attend. Sessions on theory, simulation, and characterization were interlaced, such that each session was
entirely devoted to one of these topics. This modular structure allowed participants to attend only the
parts of the tutorial they were unfamiliar with and interested in. The theory part started with an introduction
to semiconductor device physics by Ania Mitros (Floating Gate Tutorial). Then the most important basic
multi-transistor circuits were introduced and analyzed, showing how these circuits can be used as building
blocks of hardware models of neural circuits at different levels of abstraction. Some of the circuits introduced
in the theoretical part were simulated and characterized in consecutive sessions, such that participants could
acquaint themselves with particular circuits in different ways. For circuit simulation, a software environment
was introduced, namely a public-domain package allowing interactive simulation of small circuits. For
circuit testing, prefabricated test chips and four test setups were provided. Finally, participants were taught
the basics of CMOS circuit fabrication and mask layout.
Tutorial sessions were held almost every day of the workshop over its entire three-week duration. It
started out with about 12 participants, and ended with 6 very dedicated ones. Depending on people’s interests
and collisions with other activities the attendance varied from session to session. Care was taken to minimize
conflicts with other tutorials and to give people the opportunity to catch up on missed sessions where such
conflicts were unavoidable.
Participants Feedback
“I thought the aVLSI class was okay. The lectures were good, but without much background in
circuits and such scarce extra time to read up on them (due to the demands of other projects), it
was a little hard to keep up with. Also, I was not able to apply directly what I was learning in
the aVLSI workgroup to my other workgroup projects, so that also made it a little more difficult
to really understand. The labs were useful to learn how to use the equipment and to apply
the lecture material. I do think it does make sense to pack up the equipment for the labs;
just having lectures and simulations would not be as good. The instructors were very open to
questions both during and outside of lab and lecture.
Since most people bought the book, I think it would have been helpful to assign or at least
recommend some pages to read before each lecture. Also giving handouts of the slides at the
beginning of lecture would be useful so we can spend time in class understanding instead of
busily copying down the information shown on the slides.”
“The aVLSI tutorial is obviously one of the core features of this workshop. It is surprising how
much knowledge one can gain in such a short time.
I believe that training is most important. I would therefore suggest to hand out more circuit
puzzles to solve at home.
Another suggestion: What about one lab session where participants get some practical relatively
easy problem? Something like, we want a circuit that does this and this. After a solution was
found, one could wire it up and test it.”
“I think that the tutorial is very good, quite systematic and well presented. The labs are abso-
lutely very useful to me.
Some suggestions:
1. project on circuit design and layout using Tanner
2. tutorial description in more detail on the web so that the participants have a clearer idea
what to expect and do some preparation before coming to Telluride. “
“Sorry that I dropped out of the VLSI course because of the two vision projects I was involved.
For the lectures and labs I did attend, I think they were well organized and presented. I didn’t
have much engineering/electronics training so the lectures were not easy for me, but I managed
to get a rough picture. The analog simulator was difficult to use, however.”
“I think my feedback will be biased since I had seen some of the circuits already and had a
basic understanding of MOSFET before the workshop, just not at the subthreshold level. So
personally I think the lectures were too easy, but I think given the entire audience they were not
too easy, and I still got something out of all of them except the last one on layout. I think the lab part
is essential. At least at UF there is no course you can take in graduate school which has labs.
Basically you get your chips back and test them but no basic labs. I find that labs help you
remember the conceptual topics better. The labs are better than simulation because you can see
how mismatch affects the circuit rather than just talking about it.”
3.2 Floating Gate Circuits Tutorial
Leader Ania Mitros
The floating gate tutorial consisted of four lectures explaining the function and sample applications of
floating gate transistors. These devices are well-suited for use as long-term non-volatile analog memories
and also for adaptation or learning with arbitrarily long time constants.
Roughly seven to nine people attended each tutorial lecture. The participants included those with no
previous experience with floating gates, as well as a few who had used these devices. Some of the latter
were looking for an update on recent developments in the field or a refresher course on the devices, or
had specific questions to discuss. Of those who had never used the devices, at least two expressed interest in
applying floating gate devices to their circuits within a specific current project.
The first lecture reviewed energy band diagrams and the function of a normal MOSFET (metal oxide
semiconductor field effect transistor). The second lecture used band diagrams as a tool to explain quantum
tunneling and hot electron injection. These two phenomena are used to erase and program the floating gates.
The structure of a floating gate transistor and layout issues were discussed. The third lecture focused on
techniques for controlling how much charge is injected onto or tunnelled off of the floating gate. The fourth
lecture presented specific examples of floating gate circuits used for learning, offset reduction, and mismatch
reduction.
All students received a copy of notes covering the content of the first two lectures. They also received
copies of the slides used in the latter two lectures and publications covering the sample circuits presented.
The list of reference publications handed out and all the slides from the 3rd and 4th lectures are available
at: http://www.klab.caltech.edu/~ania/research/Telluride2003/
3.3 Online learning tutorial
Leader Nici Schraudolph
The goal of this tutorial was to bring more learning and adaptation to Telluride’s robots and aVLSI chips.
The focus on online learning, that is, learning while behaving, is appropriate for both neuromorphic robots
and chips in that it reflects their situatedness in a noisy, non-stationary environment.
The tutorial was structured to open with lectures on online learning in the first week, moving into project
work thereafter. The projects were mostly based in other workgroups, with their learning aspects informed
by and coordinated through the online learning tutorial.
The first tutorial, called “Statistics on the Fly”, presented techniques (implementable as either electronic
circuits or computer algorithms) for taking simple statistical measurements of a stream of data online. These
ranged from simple running averages on to higher moments and percentiles. The latter attracted particular
interest since they provide a robust alternative to moment-based methods, and spawned two projects: a
percentile circuit to adjust the threshold of BCM neurons (Section 3.3), and a very simple speech detector
based on a kurtosis circuit (Section 3.3).
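The following is a minimal software sketch of the kind of on-the-fly statistics presented in this tutorial: an exponentially weighted running mean and variance, plus a percentile tracker that nudges its estimate with asymmetric steps so that, at equilibrium, a chosen fraction q of the samples falls below it. The parameter names and values (tau, q, step) are illustrative choices rather than the tutorial's.

import numpy as np

def online_stats(stream, tau=100.0, q=0.9, step=0.01):
    # Exponentially weighted running mean/variance plus an online percentile tracker.
    mean = var = perc = 0.0
    for x in stream:
        mean += (x - mean) / tau                # running mean, time constant tau
        var += ((x - mean) ** 2 - var) / tau    # running variance estimate
        # asymmetric steps: at equilibrium a fraction q of the samples lies below perc
        perc += step * (q if x > perc else q - 1.0)
    return mean, var, perc

# usage sketch: for a unit Gaussian stream, perc converges near the 90th percentile
rng = np.random.default_rng(0)
print(online_stats(rng.normal(size=10000)))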
The second tutorial, “Rapid Stochastic Gradient Descent”, explained how models can be fit to data online
by gradient descent. Since advanced gradient methods (such as conjugate gradient, BFGS, or Levenberg-
Marquardt) do not work online, first-order gradient descent with an adaptive step size for each system pa-
rameter is left as the best alternative. Several algorithms for local step size adaptation were discussed. This
tutorial inspired two projects to implement the most advanced of these methods, stochastic meta-descent
(SMD). Due to the complexity of SMD, however, these projects could not be completed within the time
frame of the workshop; they will be continued in follow-up work.
Participation was higher than in the previous year: 17 people signed up for the workgroup, of whom 12
became regular participants. Five projects were commenced in the project phase, and though some ran out
of time (as is wont to happen with a three-week horizon), this is a large improvement over the previous year,
which saw no online learning projects at all. Next year I would like to add a tutorial on Kalman filtering, and
prepare joint projects with workgroups such as Roving Robots and Locomotion.
Percentile Circuit for Setting BCM Thresholds
Participant: Guy Rachmuth
Synaptic plasticity is thought to be the cellular basis for memory and learning. An aVLSI chip implementing synaptic plasticity has been developed as part of a neuromorphic project at my lab at MIT. The learning rule is based on the Bienenstock-Cooper-Munro (BCM) curve, which has a global variable that measures the sum of the synaptic weights of all synapses onto a specific neuron and is used to adjust a plasticity threshold. However, it is not clear how to implement this value as a signal on the chip, or under what rule to change its value.
During the online learning workshop, it became apparent that one way to implement the control of this
variable is to use a percentile function that sums all of the weights of the synapses and in turn adjusts the
global value such that the thresholds of potentiation and depression across the synapses are set to keep
the synapses stable. If a large number of synapses have potentiated, the threshold for potentiation is shifted
such that it is harder for them to continue to potentiate. In this way, stability is maintained and the synapses
do not all saturate at the low or high levels.
Using this concept, a circuit implementation of the percentile function was proposed. The circuit samples
spatially distributed synapses, figures out their weights, and adjusts the global variable to make sure that a
predetermined percentage of synapses are depressed or potentiated. This project will be pursued in my home
lab at MIT, and I hope to have a working aVLSI implementation ready by next year for Telluride’04.
A Simple Circuit for Online Kurtosis Measurement
Participants: Andre van Schaik
Nici Schraudolph
We wanted to explore the feasibility of measuring the kurtosis (fourth moment) of a signal online using
the simple methods presented in the “Statistics on the Fly” tutorial. Direct computation of kurtosis would
involve the fourth power of the audio signal and would therefore be highly susceptible to outliers. In order to
avoid this problem, we high-pass filtered and then rectified the signal; this translates kurtosis into skew (the
third moment). We then measured the skew indirectly by means of a circuit that gives the percentile of the
mean of the rectified signal. A high percentile indicates high skew, and thus a kurtotic input signal.
The circuit was constructed on a breadboard (see Figure 3.1); tests on a variety of audio signals confirmed
that it worked as intended: an LED lit up when the kurtosis of the input signal exceeded a given threshold.
We did find, however, that the circuit is also sensitive to the envelope of the signal; we are now contemplating
means to eliminate this confounding factor.
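A rough software analogue of the breadboard detector is sketched below, assuming a first-order high-pass filter and using the fraction of rectified samples that lie below their mean as the percentile-based skew proxy. The cutoff frequency, sampling rate, and decision threshold are placeholders, not the values of the actual circuit.

import numpy as np

def kurtosis_indicator(x, fs=8000, fc=1000.0, threshold=0.8):
    # First-order high-pass filter (cutoff fc), then full-wave rectification.
    alpha = 1.0 / (1.0 + 2.0 * np.pi * fc / fs)
    y = np.zeros(len(x))
    prev_x = 0.0
    for i, xi in enumerate(x):
        y[i] = alpha * ((y[i - 1] if i else 0.0) + xi - prev_x)
        prev_x = xi
    r = np.abs(y)
    # Skew proxy: fraction of rectified samples below their mean ("percentile of
    # the mean"); a large fraction indicates a heavy-tailed, kurtotic input.
    fraction_below_mean = float(np.mean(r < r.mean()))
    return fraction_below_mean > threshold, fraction_below_mean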
Figure 3.1: A simple circuit that measures the kurtosis of signals online.
Spatial Representations from Multiple Sensory Modalities
Participant: Reto Wyss
This project was based in the Multimodal Workgroup; see Section 9 for a full description. In the
following, we focus on the project’s learning aspects.
We optimized a hierarchical network in terms of sparse coding and decorrelation. One important con-
sideration was that learning should be online in order to allow the system to adapt to the statistics of its
environment. Furthermore, with online learning, the system could also adapt to dynamically changing envi-
ronments.
Given a cell A_i within the network, the objective function to be optimized is given by

O = - \sum_i \frac{\left\langle (dA_i/dt)^2 \right\rangle_t}{\mathrm{var}_t(A_i)} - \sum_{i \neq j} \mathrm{corr}_t(A_i, A_j)^2
where var_t(A_i) is the variance of cell i and corr_t(A_i, A_j) is the correlation between cells i and j, both
over time. The two components in this formula account for the fact that 1) each cell should show
smooth activation over time and 2) the cells within a group are maximally decorrelated, such that they all
code for different multimodal features. The statistical measures used in O, i.e. the average, the variance
and the correlations over time were computed online based on the exponential average. As an example, the
temporal average of x(t) can be computed by the following iterative formula:
\langle x \rangle_{t+1} = \frac{1}{\tau}\, x(t) + \left(1 - \frac{1}{\tau}\right) \langle x \rangle_t
where τ is the characteristic time constant over which the average runs.
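As an illustration, the sketch below maintains the statistics entering O with exactly this exponential-average rule: a running mean and covariance of the cell activities, from which the variances and pairwise correlations are read off. The smoothness term is assumed to be supplied as an exponentially averaged squared temporal derivative; the class and variable names are ours.

import numpy as np

class OnlineMoments:
    # Exponential running mean and covariance of the cell activities A(t).
    def __init__(self, n, tau=100.0):
        self.k = 1.0 / tau
        self.mean = np.zeros(n)
        self.cov = np.zeros((n, n))

    def update(self, a):
        self.mean = self.k * a + (1.0 - self.k) * self.mean      # <x>_{t+1}
        d = a - self.mean
        self.cov = self.k * np.outer(d, d) + (1.0 - self.k) * self.cov

def objective(smooth_term, moments):
    # smooth_term[i] is the exponential average of (dA_i/dt)^2, assumed given.
    var = np.clip(np.diag(moments.cov), 1e-12, None)
    corr = moments.cov / np.sqrt(np.outer(var, var))
    off = corr - np.diag(np.diag(corr))
    return -np.sum(smooth_term / var) - np.sum(off ** 2)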
In a first attempt, the objective function O was maximized using a simple gradient ascent method. Due to
relatively slow convergence, however, we tried to implement online local step size adaptation, in particular
the SMD learning algorithm developed by N. Schraudolph. Unfortunately, until the end of the workshop, this
new algorithm had a strong tendency to diverge for as yet undetermined reasons. We suspect that the Hessian
of our objective function (which is used implicitly by SMD) may not always be positive semi-definite, which
could cause the observed divergent behavior. We will investigate this issue in follow-up work.
Furthermore, it appears that while SMD is very well suited for learning the first layer in the hierarchical
network, which receives highly correlated sensory input, it shows much weaker performance in the layers
thereafter. This could be due to the fact that higher layers receive already highly decorrelated inputs, negating
some of SMD’s advantages. Also, the higher layers face additional non-stationarities due to the continuous
learning of the lower layers; this is not yet taken into account.
Blind Signal Separation and Deconvolution with Online Step Size Adaptation
Participant: Milutin Stanacevic
The aim of the project was to implement online adaptation of the learning rate for the blind signal separation problem. The task is to separate statistically independent sources from observed linear static mixtures. The adaptive algorithms for separation are derived from stochastic gradient descent optimization of a performance metric that quantifies some measure of independence of the reconstructed sources. We are interested in algorithms based on the temporal structure of the sources, and as the cost function we chose the sum of squares of the non-diagonal elements of multiple time-delayed covariance matrices. The stochastic meta-descent (SMD) method for online adaptation of the local learning rates was used. It exploits full second-order information while retaining O(n) computational complexity thanks to an efficient Hessian-vector multiplication. The online updates of the unmixing matrix, local learning rates, and gradient trace were derived. The learning algorithm was used to separate two artificially mixed speech signals. The algorithm showed divergence due to a non-positive-definite Hessian matrix at certain time steps; this problem will be addressed in ongoing work.
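The cost function described above can be sketched as follows: the sum of squared off-diagonal entries of several time-lagged covariance matrices of the unmixed outputs y = Wx, together with its gradient with respect to the unmixing matrix, used here in a plain stochastic gradient step. The SMD step-size adaptation, and the whitening or normalization constraints a practical separation algorithm would add, are omitted; the lag set and learning rate are illustrative.

import numpy as np

def lagged_cov(x, lag):
    # Symmetrized time-lagged covariance of a (channels x samples) signal.
    a, b = x[:, lag:], x[:, :x.shape[1] - lag]
    c = a @ b.T / a.shape[1]
    return 0.5 * (c + c.T)

def bss_cost_and_grad(W, x, lags=(0, 1, 2, 5, 10)):
    # Cost: squared off-diagonal entries of the lagged covariances of y = W x.
    cost, grad = 0.0, np.zeros_like(W)
    for lag in lags:
        R = lagged_cov(x, lag)
        C = W @ R @ W.T
        M = C - np.diag(np.diag(C))          # off-diagonal part
        cost += np.sum(M ** 2)
        grad += 4.0 * M @ W @ R              # gradient, valid for symmetric R
    return cost, grad

def sgd_step(W, x, lr=1e-3):
    # One plain gradient step on the unmixing matrix (SMD's per-parameter
    # step-size adaptation is not shown here).
    cost, grad = bss_cost_and_grad(W, x)
    return W - lr * grad, cost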
Learning on high-dimensional liquid states
Participants: Steven Kalik
Robert Legenstein
Learning on high-dimensional input with a small training set is problematic in terms of the ability of
the learner to generalize. Even without online learning algorithms, several such generalization-related prob-
lems arose in the Computing with Liquids workgroup. Briefly, the task in this group was to classify multi-
dimensional spike trains produced by a visual aVLSI system. The spike trains are responses to moving
gratings presented to a silicon retina. A description of the experimental set-up can be found in the “comput-
ing with liquids” project, under the “Configurable Neuromorphic Systems Project Group” chapter 5. Since
we want to classify the direction of the grating’s movement from the response of the system, temporal inte-
gration of the visual information is needed to perform this task.
As far as learning is concerned, we use a simple linear classifier to classify high dimensional vectors
(on the order of 3000 to 7000 dimensions). There are several ways to map spike trains (point processes or
sequences of point events) onto static vectors to describe the state of the system at any given time. One
obvious way is to measure the activity of a neuron within a small time bin. (We used the number of spikes
in a one millisecond time bin.) Another way is to convolve the spike train with some kernel function. For
this convolution, we used an exponential decay function with a time constant of τ =30 milliseconds. This
distributes fractional amounts of each spike over several one millisecond bins in a row. Such a distribution
mimics the effect of a spike train on the membrane potential of a post-synaptic cortical neuron. Note that this
approach induces some “artificial” memory into the system, which is not a reflection of short term memory
in the chip.
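A minimal sketch of these two state-vector constructions, 1-ms bin counts followed by an exponential kernel with tau = 30 ms implemented as a leaky trace, might look as follows; the function name and data layout are our own.

import numpy as np

def liquid_state(spike_times_ms, n_neurons, duration_ms, tau_ms=30.0):
    # 1-ms bin counts per neuron, then a causal exponential (leaky) trace with
    # time constant tau_ms, mimicking the post-synaptic membrane potential.
    n_bins = int(duration_ms)
    counts = np.zeros((n_bins, n_neurons))
    for j, times in enumerate(spike_times_ms):
        idx = np.floor(np.asarray(times)).astype(int)
        idx = idx[(idx >= 0) & (idx < n_bins)]
        np.add.at(counts[:, j], idx, 1.0)
    decay = np.exp(-1.0 / tau_ms)
    state = np.zeros_like(counts)
    trace = np.zeros(n_neurons)
    for t in range(n_bins):
        trace = decay * trace + counts[t]
        state[t] = trace
    return state        # each row is the state vector at one 1-ms time step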
Several problems arose when we trained the linear readout system to discriminate between leftward and
rightward moving gratings:
High dimensionality The costly data acquisition process made it impossible to generate large amounts of
data. Although many time steps could be extracted from each movie presentation, it turned out that
generalization across different movie presentations was difficult. To avoid overfitting, we randomly
sampled 200-300 neuronal responses out of the 3000 to 7000 dimensions. This method worked sur-
prisingly well. Such a selection process mimics common neurophysiological methods, in which
microelectrodes are lowered into the cortex to record from a small fraction of the neurons in a cortical
region.
Learning algorithm Linear regression (LR) performed acceptably on test data when sampling 70 to 150
neurons out of several thousands. Mostly, LR relied on the output of a few neurons (only two neurons
in extreme cases), which were weighted very strongly. We therefore considered an alternative linear
classifier, Fisher’s Linear Discriminant, to compute the separating hyperplane. This method outperformed
LR: it handled up to 300 dimensions with good generalization on test data, as a comparison of the
weight vectors and test errors from the two discriminators showed. (A minimal sketch of the discriminant
is given after this list.)
Temporal integration in the chip Temporal dynamics in the chips used in these experiments were very
fast. Spike frequencies counted on large windows (10 milliseconds) were not well separated with
respect to the direction of the grating’s drift. With smaller windows (one millisecond) we were more
successful, achieving a test error of about 28 %. This shows that there is some temporal information in
the state of the circuit. However, using an exponential kernel on the spike trains (as described above)
significantly improves the performance of the classifiers. With this technique, we could reduce test
error to about 5-10 %.
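A minimal sketch of the two-class Fisher Linear Discriminant used for this comparison, together with the random subsampling of a few hundred of the several thousand recorded channels, is given below; the regularization constant and the sample counts are illustrative rather than the exact settings used at the workshop.

import numpy as np

def fisher_discriminant(X, y, reg=1e-3):
    # Two-class Fisher Linear Discriminant: w ~ Sw^-1 (m1 - m0), with the
    # decision threshold placed midway between the projected class means.
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    Sw += reg * np.eye(X.shape[1])            # regularized for few samples
    w = np.linalg.solve(Sw, m1 - m0)
    b = -0.5 * w @ (m0 + m1)
    return w, b                               # predict class 1 if X @ w + b > 0

def subsample_and_train(X, y, n_keep=300, seed=0):
    # Randomly keep a few hundred of the several thousand recorded channels.
    keep = np.random.default_rng(seed).choice(X.shape[1], n_keep, replace=False)
    w, b = fisher_discriminant(X[:, keep], y)
    return keep, w, b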
Chapter 4
The Auditory Project Group
Project Leader André van Schaik
The Auditory Systems group had a record number of participants and projects this year. A strong ef-
fort this year was made in the area of biologically inspired software for sound processing. We ran projects
on Noise Suppression, Speech Spotting (detection), and Sound Classification. These topics are particularly
important for neuromorphic engineering as traditional signal processing methods in these areas have con-
tinually failed to obtain the results desired. David Anderson obtained some impressive results in the Noise
Suppression task, using a method inspired by saliency detection in the human visual system. We tried three
different Speech Spotting approaches. The first algorithm, incorporating the shortest time scale, is the ”spec-
tral flux” measure, which computes the short-term change in spectral energy over critical-band channels (1/3
octave wide), analogous to processes that are likely to occur in the auditory periphery (cochlea and auditory
nerve). The second algorithm, inspired by the human sensitivity to pitch, uses a detection
of harmonicity to indicate the presence of speech. Finally, the third system, using the longest time scale, is
based on models of the auditory cortex. In the Sound Classification task we compared the performance of
two classifiers using ”traditional” classification features with a system based on the auditory cortex model.
Both the Speech Spotting projects and the Sound Classification projects are likely to continue outside the
Telluride Workshop and lead to ongoing collaboration.
On the hardware side, we tested two new Sound Localization systems using custom neuromorphic ICs at
Telluride. Both systems take their inspiration from biology. The first system uses spatio-temporal gradient
sensing, whereas the second system is based more on the mammalian auditory system and uses models of
the cochlea to detect the equivalent of Interaural Time Difference. Both these systems are part of an ongoing
evolution towards a final smart acoustic MEMS system. Both ICs and systems have been shown to work
successfully. We also tested a new AER EAR containing two silicon cochleae with an AER interface. This
circuit is the result of a collaboration that started last year at Telluride and has been supported by an RCN
grant from the INE.
4.1 Noise Suppression
Participants: David Anderson
In this section we describe biologically–inspired audio noise suppression (or denoising) using “salient”
Figure 4.1: Noise suppression example: waveforms and spectrograms (frequency against time) of the noisy and extracted signals.
auditory features. The salient features are extracted as described above for the corresponding speech detec-
tion algorithm. Suppression of background noise in corrupted audio signals has been a topic of research for
decades; however, the method described below seems to perform at least as well or better than any method
with which we are familiar. The general approach is:
1. Find the salient features or portions of speech using the speech detection based on salient features.
2. Subtract the background (noise) energy in each subband from the corresponding subband envelope.
(a) When a signal is determined to be present, the amount of noise removed is decreased so as to
preserve any residual signal.
(b) The amount of suppression of the subband envelopes is a non-linear function of the saliency and
the noise energy chosen to aggressively suppress noise when no signal is present but to allow
detected signal or salient features to survive intact.
(c) Typical artifacts associated with noise suppression are reduced and nearly eliminated by con-
trolling the adaptation rate in each subband according to envelope bandwidth of each cochlear
critical band.
3. The modified subband signals are then summed. (A minimal software sketch of this approach follows this list.)
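The per-subband processing can be sketched as follows, assuming the cochlear filterbank, the envelope extraction, and a saliency signal in [0, 1] from the speech detector are available; the adaptation rate, the gain floor, and the particular way saliency and noise energy are combined are placeholders for the nonlinearity actually used.

import numpy as np

def suppress_band(env, saliency, alpha=0.02, floor=0.05):
    # One cochlear channel: track the background level, adapt slowly when the
    # band looks salient, and scale back the subtraction so detected signal
    # survives.  Returns a gain per envelope sample for resynthesis.
    noise = env[0]
    gains = np.empty(len(env))
    for t, (e, s) in enumerate(zip(env, saliency)):
        noise += alpha * (1.0 - s) * (e - noise)   # adaptation slows during speech
        cut = noise * (1.0 - s)                    # remove less noise when salient
        gains[t] = max(e - cut, floor * e) / max(e, 1e-12)
    return gains

def denoise(subbands, envelopes, saliencies):
    # Apply the per-band gains and sum the modified subband signals (step 3).
    out = np.zeros_like(subbands[0], dtype=float)
    for band, env, sal in zip(subbands, envelopes, saliencies):
        out += band * suppress_band(env, sal)
    return out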
The performance of a noise suppression system is difficult to characterize without extensive tests—
something that we didn’t have time for at the workshop. However, informal listening tests were very promis-
ing. The algorithm effectively removed noise at SNRs down to -17 dB (depending on the noise type). An
example is shown in Fig. 4.1.
4.2 Speech Spotting in a Wide Variety of Acoustic Environments Using Neuromorphically Inspired Computational Algorithms

Leader Steven Greenberg
Participants: David Anderson
Sven Behnke
Steven Greenberg
Nima Mesgarani
Sourabh Ravindran
Malcolm Slaney
The Problem:
Speech is perhaps the primary medium of communication among individuals, and is often transmitted in
less than ideal acoustic conditions. Reverberation and various forms of background noise often accompany
speech in many natural settings, posing a significant challenge to accurate decoding of information contained
in the speech signal, particularly for individuals with a hearing impairment, as well as for automatic (i.e.,
machine) recognition systems. One means by which to ameliorate the effects of background noise on verbal
interaction is through development of automatic methods to detect and temporally demarcate speech in noise.
Such methods should be capable of ”spotting” speech embedded in a diverse array of acoustic backgrounds
over a broad range of signal-to-noise ratios representative of the real world.
The Task:
Speech material (TIDIGITS corpus) incorporating spoken digits (one through nine, plus zero) spoken by six
different speakers (equally divided among male and female talkers) was embedded in a variety of acoustic
backgrounds taken from a standard corpus (NoiseX) over a broad range of signal-to-noise ratios (-17 to
+8 dB SNR, in 5-dB increments). The background environments ranged from speech babble (representative of
a restaurant setting) to noise generated by boats and airplanes to factory noise and machine-gun fire. The
speech spotting corpus was put together in Telluride by Nima Mesgarani. A complete list of the background
environments used for training and testing is given below. The speech signal consisted of a single spoken
digit (whose duration ranged between 300 ms and 800 ms) embedded in 2.5 seconds of acoustic background.
The location of the speech within the acoustic background varied randomly, with the restriction that the
background began at least 50 ms prior to the speech and ended at least 50 ms after termination of the spoken
digit. Thus, the automatic algorithms had no foreknowledge of the temporal location of the speech within
the noise backgrounds. The speech spotting algorithms were charged with delineating the temporal location
of the speech signal within the background and estimating the digit’s arithmetic center (or ”peak”). The
digit’s estimated peak location was quantitatively assessed relative to the ”true” boundaries of the speech
signal. If the estimated peak location was within the boundary of the digit the algorithm was credited with
a ”hit”. Otherwise, the speech spotting was scored as a ”miss”. The performance of each algorithm was
assessed in terms of the percentage of the times the spotter correctly assigned the digit to the appropriate
location within the background sound. Some of the algorithms were trained on a variety of the background
environments, speech material and speakers, and tested on backgrounds, speech material and speakers that
were not present in the training material in an effort to ascertain the efficacy of the training regime (and the
ability to generalize to comparable, but unseen material). Other algorithms were intentionally untrained prior
to spotting the speech as a means of determining the impact of training on the efficacy of the algorithms.
Training Data Conditions
SNRs: +8, +3, -2, -7, -12 dB
Noise Sources: Buccaneer Jet, Factory, High Frequency Channel, Leopard Tank, Destroyer Engine, Pink Noise, Restaurant Noise, White Noise

Testing Data Conditions
SNRs: +3, -2, -7, -12, -17 dB
Noise Sources: Multi-talker Babble, Factory, Volvo, Machine Gun, Destroyer Operations Room, F-16 Fighter Jet, M-109 Tank, Buccaneer Jet
Figure 4.2: NoiseX test noises (babble, buccaneer2, destroyer, m109, factory2, f16, machine gun, volvo) mixed with a spoken digit at SNR -17; each panel shows frequency (kHz) against time (s).
The Algorithms:
The algorithms used for speech spotting were chosen to exemplify a range of approaches and time scales
representative of auditory-based processing strategies. At the most peripheral level, incorporating the short-
est time scale, is the ”spectral flux” measure, which computes the short-term change in spectral energy over
critical-band channels (1/3 octave wide), analogous to processes that are likely to occur in the auditory pe-
riphery (cochlea and auditory nerve). This algorithm was developed by Professor David Anderson for other
applications and was applied to the speech spotting task in an effort to determine its efficacy as a speech
detector in noise. This algorithm did not require training prior to its application to the corpus materials. A
second algorithm, developed by Sven Behnke, computed the harmonicity associated with the acoustic signal
as a means of spotting speech in various backgrounds. This algorithm computed the magnitude of the har-
monicity as a function of time as a means of determining the temporal location of the speech embedded in
the background. The time scales employed are longer than those used by the spectral flux algorithm, and are
representative of auditory brainstem processing. A third algorithm, based on the cortical model developed
by Professor Shihab Shamma and colleagues, computed the low-frequency (below 20 Hz) modulation char-
acteristics of the signal across the acoustic (tonotopic) frequency axis over a range of spectral resolutions
(”scale”). The signature of speech lies in the range of ca. 4 cycles per octave (i.e., the critical bandwidth) for
modulation frequencies between 3 and 12 Hz. The application of this model to the speech spotting corpus
was performed by Nima Mesgarani. In one version the tonotopic plane was granulated into 128 channels
(RSF), while in a separate application (RS) all channels were lumped together. The algorithms were trained
on half of the corpus material (with respect to background environments, speech material) using four of
the six speakers (2 male and 2 female). In some training conditions the algorithms were trained on back-
grounds whose SNRs ranged between -12 and +8 dB; in others the SNR of the training backgrounds ranged
between either -7 and +8 dB or between -2 and +8 dB. The intent was to ascertain whether the dynamic
range of the background SNR used to train the material influences the efficacy of the training. In a separate
application, the RS algorithm was applied to the full range of testing materials in untrained mode. The intent
was to determine whether training on representative corpus materials substantially improved speech spotting
performance (or not). A fourth algorithm, developed by Malcolm Slaney and Eric Scheirer several years ago
for music/speech discrimination and classification, was applied to the corpus. This approach utilized a broad
range of acoustic features ranging in time scale and complexity, and required extensive training on the corpus
materials to work.
Algorithm 1 (by David Anderson)
The first approach that we used for speech detection involved looking for significant or perceptually salient
features in an audio signal. The inspiration for this came from work presented at the 2003 Telluride Neuro-
morphic workshop on salient visual features. As in video systems, there are many things that can contribute
to the saliency of different portions of an audio signal. In this case the following steps were taken:
1. Perform cochlear filtering on the input signal (the signal is divided into 1/3–octave bands).
2. Estimate the background signal level in each band and the variability of the background signal in each
band. (This is done by learning the average minimum excitation in each band.)
3. Look for significant differences in the spectrum from the background noise.
(a) Saliency can be triggered by a small deviation from the learned noise over many bands or by a
larger deviation in a single band.
(b) When a salient feature is found, the threshold is lowered for determining saliency in other fre-
quency bands.
(c) “Saliency” is represented as a continuous value between 0 and 1 rather than as a binary value.
4. Speech detection was approximated by finding the average energy in the salient features that correspond to frequencies under 3 kHz. The energy of the “salient features” was obtained by finding the energy in the output of the “noise suppression” system based on the saliency detector, which is described in a later section. (A minimal sketch of this detection step follows this list.)
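The following is a minimal per-frame sketch of steps 2-4, assuming the cochlear band energies and the learned background mean and variability of each band are already available; the soft saliency nonlinearity, the threshold values, and the handling of the 3 kHz cutoff are illustrative.

import numpy as np

def frame_saliency(band_energy, noise_mean, noise_std, centers_hz,
                   base_thresh=3.0, lowered_thresh=1.5):
    # Per-frame saliency: how far each band sits above its learned background,
    # with the threshold lowered for all bands once any band is clearly salient.
    dev = (band_energy - noise_mean) / np.maximum(noise_std, 1e-9)
    thresh = lowered_thresh if np.any(dev > base_thresh) else base_thresh
    saliency = 1.0 / (1.0 + np.exp(-(dev - thresh)))   # continuous value in (0, 1)
    low = centers_hz < 3000.0                          # bands below 3 kHz
    speech_score = float(np.mean(saliency[low] * band_energy[low]))
    return saliency, speech_score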
This system worked well for speech detection in stationary noises and especially for noise that did not
“swamp” the signal in all frequency bands. Performance was not good for the Machine Gun noise and other
noises with strong impulsive components because the detector was not designed to distinguish these from
speech. Results are summarized in Table 4.1.
Algorithm 2 (by Sven Behnke)
Motivation
Speech detection in noise can rely on different cues. In this study I focus on the harmonic structure of voiced
speech as a cue to distinguish speech from noise. This approach is motivated by the facts that about 80% of
speech is voiced and that voicing creates a characteristic signature in the spectrogram.
During voicing the glottis opens and closes rhythmically with a frequency (pitch) between 100 and 300
Hz. This yields harmonic structure when the frequency of the speech signal is analyzed. The spectrum
peaks at integer multiples of the fundamental frequency. The height of the peaks is determined by vocal tract
resonances that represent most of the linguistic message.
Figure 4.3: Speech detection example: original noisy signal, initial subband SNR estimate, salient signal, and saliency measure.
SNR     Hits per test-noise condition (out of 10)
3 dB    10  10  10  10  10  10  10  10
-2 dB   10  10   5  10  10  10  10  10
-7 dB   10  10   5  10  10   5   9   6
-12 dB  10  10   3   5   9   1   8   0
-17 dB   3  10   5   0   3   0   4   2

Table 4.1: Number of HITs for each noise–SNR combination on the test database. The total for each condition is out
of 10 samples.
Figure 4.4: Pitch-scaled harmonic filtering: spectrum with harmonic and non-harmonic coefficients marked. The analysis window has been matched to four times the pitch period. A rectangular window has been applied, followed by an FFT.
For that reason, automatic speech recognition systems (ASR) try to compute features that represent the
spectral envelope, but are invariant to changes in pitch. This is achieved by analyzing the energy in different
frequency bands for signal windows of typically 25ms length. These windows typically decay smoothly
towards the sides to avoid aliasing effects. This approach has two drawbacks. First, the smooth windows have
a frequency response themselves. This leads to smearing of the harmonic energy over multiple neighboring
frequency bins. Second, since the binning of the frequency axis is predetermined, the multiples of the
fundamental usually fall between frequency bins. This makes the estimation of the peak height difficult.
To avoid the problems induced by analysis windows that are not matched to the fundamental frequency,
Jackson and Shadle recently proposed a method that they called pitch-scaled harmonic filtering (PSHF)
[Jackson, P.J.B. and Shadle, C.H. (2001) Pitch-scaled estimation of simultaneous voiced and turbulence-
noise components in speech. IEEE Transactions on Speech and Audio Processing 9(7):713-726]. If one
manages to match the length of the analysis window to a multiple of the pitch cycle, the harmonic energy is
centered at known frequency bins, as can be seen in Figure 4.4. Furthermore, since the signal is now aligned,
one does not need to use a smooth window any more, but can apply a rectangular window. This avoids the
smearing induced by the smooth window.
Since the harmonic energy is now concentrated at selected frequency bins, the intermediate bins contain
energy that is non-harmonic or has a different fundamental frequency. Because noise is frequently non-
harmonic or does not match the fundamental of the speaker, the local noise level can be estimated by looking
at the intermediate frequency bins. This estimate can be used for spectral subtraction, yielding the spectral
envelope of the voiced speech.
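A minimal sketch of PSHF applied to one analysis frame is given below: the rectangular window spans four pitch periods, so the harmonics fall on every fourth FFT bin, and the neighbouring bins supply the local noise estimate that is subtracted from each harmonic. The function name and the simple two-neighbour noise estimate are our own simplifications of the published method.

import numpy as np

def pshf_frame(x, start, pitch_period, n_periods=4):
    # Window length = n_periods pitch periods, so harmonics of the fundamental
    # land on every n_periods-th FFT bin; adjacent bins estimate the noise.
    n = int(round(n_periods * pitch_period))
    spec = np.abs(np.fft.rfft(x[start:start + n]))
    h = np.arange(n_periods, len(spec) - 1, n_periods)   # harmonic bin indices
    noise = 0.5 * (spec[h - 1] + spec[h + 1])
    return np.maximum(spec[h] - noise, 0.0)   # spectral envelope of voiced speech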
To make PSHF work, the fundamental frequency must be estimated for all points in time. Several
methods for pitch estimation exist. Here, I use the degree of harmonicity that is computed for different
window lengths at regular points in time as a local indicator for the presence of the fundamental frequency.
Figure 4.5: Overview of voiced speech detection.
These local measurements are combined by dynamic programming to form a continuous pitch estimate. The
dynamic programming takes into account that the pitch does not change very quickly and that it does not
deviate much from the mean pitch during an utterance.
Algorithm
Figure 4.5 gives an overview of the developed voiced speech detection algorithm. In a first step, the mean
spectrum and the mean energy for each frame are subtracted from the magnitude spectrogram, shown in
Figure 4.6(a). Furthermore, the lowest frequency bins are attenuated. The reconstructed time-domain signal
contains only the time-frequency components that stand out from the mean, as shown in Figure 4.6(b).
This signal is analyzed every 5ms with rectangular FFT windows ranging from 150 to 600 samples,
corresponding to a fundamental frequency range from 80Hz to 320Hz. Dynamic programming is used to
find a valid pitch trajectory, as illustrated in Figure 4.7.
The pitch estimates are used to separate the harmonic from the non-harmonic energy. The non-harmonic
coefficients next to a harmonic coefficient provide an estimate of the local noise level. It is subtracted from
the harmonic coefficient to produce an estimate of the spectral envelope of the voice. Since the pitch estimate
changes over time, the frequency resolution of the FFT changes as well. To unwarp the frequency axis, the
variable length spectrum is transformed into the cepstral domain. The signal is smoothed by setting the
higher cepstral coefficients to zero. Then, it is transformed back to the spectral domain, but this time with a
constant window length of 50 bins.
The resulting voiced spectrogram is shown in Figure 4.8(a). It is now passed through a spectro-temporal
band-pass filter to match the slow modulations of typical speech signals. The energy of the response, shown
in Figure 4.8(b), indicates the presence of voiced speech in the time-frequency plane. This signal is aggregated for each point in time by adding the sum of all frequency magnitudes to ten times the maximal frequency magnitude.
Figure 4.9 shows the resulting function that indicates the presence of speech over time. The maximum
Figure 4.6: Spectral subtraction: (a) f16 noise with SNR -7 dB, speaker m3, digit 7; (b) after spectral subtraction.
Figure 4.7: Pitch estimation. Voiced speech is detected near frame 115 and window length 345 (approx. 139Hz).
Figure 4.8: Voiced spectrogram: (a) reconstructed using estimated pitch; (b) energy of spectro-temporal band-pass
filter.
Figure 4.9: Voiced speech indicator.
Figure 4.10: Detection rate aggregated in different dimensions.
Figure 4.11: Detailed chart of detection rates and detection accuracy.
of this function is the result of the speech detection. Since the maximal voiced energy does not occur in
the middle of the digits, the average offset between the two has been estimated from the training set to be
90.8ms. The maximum is shifted by this amount before it is compared to the digit’s position.
Results
The detection performance is evaluated according to two criteria. The first criterion measures how often
the detected position lies within the interval described by the beginning and the end of the inserted digit.
Figure 4.10 shows the average detection rates for different SNRs, voices, noises, and digits. The localization
is perfect for 3dB SNR. As expected, the localization rate drops as the SNR is lowered to -17dB. It is
interesting to see that different noises impair the detection performance differently. While Volvo noise has
no negative influence, Babble noise impairs the speech detection significantly. This is no surprise, since the
Babble noise is composed of speech and hence contains many harmonic components.
Figure 4.11(a) shows the detection rates in more detail. One can observe that for most noises the detection
performance breaks down significantly between -7dB and -12dB SNR. Part (b) of the figure shows another
performance measure, namely the rate of digits that have been detected within a certain distance of the digit’s
center. It can be observed that for 3dB SNR all digits have been localized within ±100ms of the digit’s center.
The rate of successful detections drops as the interval is shortened down to ±10ms. Detection performance
also drops as the SNR is lowered to -17dB, most notably between -7dB and -12dB.
Algorithm 3 (by Nima Mesgarani)
In this particular experiment a speech spotting algorithm based on Shihab Shamma’s auditory cortical model
was used to spot the location of a speech signal embedded in various acoustic environments (as described
in the methods section above). The signal-to-noise ratio of the backgrounds ranged between -17 and +3
dB. In one condition, the cortical model was NOT trained on any prior material and was applied ”blind” to
the files containing the speech embedded in noise (”N Train”). In other conditions the speech spotter was
trained using either ”rate” and ”scale” (RS) or ”rate”, ”scale”, and ”frequency” (RSF) versions of the model.
The primary difference between the two versions of the model pertains to the granularity of the tonotopic
frequency axis dimension. In the RSF version the tonotopic axis is partitioned into 128 channels, while in
the RS version the tonotopic frequency information is collapsed into a single channel. The training regime
for the RS and RSF models varied from being trained on a 20-dB dynamic range of conditions (-12 through
+8 dB) to a 15-dB dynamic range (-7 through +8 dB) to a 10 dB dynamic range (-2 through +8 dB dynamic
range). These conditions are designated as -12, -7, and -2 respectively on the graphs. The intent was to
ascertain whether training had a beneficial effect on the efficacy of the speech spotting algorithm, and if so,
whether the dynamic range of the training noises used had any effect.
Overall, the results from the auditory cortical speech spotting experiment indicate that:
1. The untrained algorithm performed as well as or better than any trained regime under all conditions.
At moderate SNRs (-7 dB and higher) the untrained system performed considerably better than
all of the other regimes except for the RS model trained on a 10-dB dynamic range of backgrounds
(RS-2). At low SNRs (-12 or -17 dB SNR) the untrained system performed equally well or only slightly
better than the trained systems. Thus, training appears to be beneficial only at the lower SNRs and
even then, only marginally so. This suggests that for a task involving temporal localization of speech
in noise, training is often neither warranted nor desirable. The basic features of the RS cortical model,
which examines low modulation frequencies at various spectral resolutions, are sufficiently robust to
extract speech-like features at ca. 80% overall frame accuracy, and are capable of pinpointing speech in
various backgrounds with ca. 95% accuracy for SNRs of -7 dB or higher - an impressive result.
2. The superiority of the untrained system pertains across most forms of background environments (Table 4.2, in which the numbers refer to ERROR rate, hence lower numbers reflect better performance).
For some noise backgrounds, such as the Volvo car, the untrained system does as well as or better than
the trained systems for all SNRs. For most other backgrounds the untrained system equals or exceeds
the performance of the trained systems for SNRs of -7 dB or higher, often by a considerable degree.
Table 4.2 provides the detailed error patterns associated with each background and SNR for the two
trained and one untrained system.
3. Overall, the results suggest that speech spotting using low frequency modulations distributed across
different spectral granularities (the essence of both the RS and RSF models) is effective across a broad
range of backgrounds. The window length used in the current experiment was rather long (
ms), but was still effective in temporal localization of the speech (given that the frame rate was 8 ms
in order to provide some degree of temporal precision).
4.3 Sound Classification
Leader Malcolm Slaney
Figure 4.12: Correct speech detection rates in different noises at different SNR levels. (a) Average over all SNRs; (b)
Low SNR; (c) High SNR
Untrained (errors, %)
SNR (dB)   babble   buccaneer2   destroyer   m109    factory   f16   machine gun   volvo
-17        81.8     36           72.7        9.09    36        72    54            9
-12        81.8     36           54.5        9.09    9         54    63.6          9
-7         54       45           18.18       9.09    9         9     36            9
-2         27.2     9            9           9.09    9         9     18            9
3          9        9            9           9       9         9     9             9

RS (errors, %)
SNR (dB)   babble   buccaneer2   destroyer   m109    factory   f16   machine gun   volvo
-17        72.72    54           100         18.1    54        72    54            9
-12        27.2     54           54.5        9.09    45        54    45            18
-7         72.7     9            27.27       9.09    9         27    54            18
-2         27.2     9            9           9.09    18        9     9             18
3          9        9            9           18.18   9         9     9             9

RSF (errors, %)
SNR (dB)   babble   buccaneer2   destroyer   m109    factory   f16   machine gun   volvo
-17        72       45           90.9        27      81        72    36            9
-12        100      54           27.2        45.45   63        54    18            54
-7         90       54           27.27       27.27   45        54    18            45
-2         63       54           18.18       45.45   27        45    9             54
3          72.7     45           27          27      27        45    18            36

Table 4.2: Error rate at different SNRs for the three algorithms.
Participants: Sourabh Ravindran
Malcolm Slaney
Aims:
Understanding the acoustic world around us is a difficult task. One aspect of this problem is to make simple
determinations of the class of a sound that is heard. This might be useful in an environment with pervasive
computing, as a means to describe the location of the sensor, or to provide a first cut at explaining the
acoustic environment so that more specialized sensors (such as speech recognizers) can do their work on the
proper signals. This project studied whether a new set of neuromorphically inspired features could
improve on previous results.
The sound classification project had two specific goals: 1) To compare the performance of traditional
features used in signal processing with those derived from an auditory model for the task of audio classi-
fication. 2) To explore the possibility of optimally combining the two feature sets to obtain better results
compared to either feature set used alone.
Database:
The task consisted of classifying test audio files into one of four classes. Details of the database we used
are as follows:
Class 0 (Noise): 22 files (17 train, 5 test). 9926.26 seconds of data (7420.59 seconds of training data).
14 different types of noise were selected from the NOISEX database [http://spib.rice.edu/spib/select_noise.html]
The noises used are:
White noise
Pink noise
HF radio channel noise
Speech babble
Factory floor noise 1
Factory floor noise 2
Jet cockpit noise 1 (Buccaneer 1)
Jet cockpit noise 2 (Buccaneer 2)
Destroyer engine room noise
Destroyer operations room noise
F-16 cockpit noise
Military vehicle noise (M109)
Tank noise (Leopard)
Machine gun noise
Car interior noise (Volvo)
An additional 7 noises from the Aurora 2 database were used:
Car interior noise
Airport lobby noise (announcements, people talking, footsteps, etc.)
Exhibition (sounds like party noise)
Restaurant noise
Street
Subway
Train
See http://dnt.kr.hs-niederrhein.de/papers/asr2000 final footer.pdf,
http://eurospeech2001.org/files/asr2000 final footer.pdf, or
http://www.elda.fr/proj/aurora2.html for more information about the Aurora data.
Class 1 (Animals): 100 files (78 train, 22 test). 9174.65 seconds of data (7556.38 seconds of training
data). A random selection of animal sounds from the BBC Sound Effects audio CD collection
Class 2 (Music): 33 files (27 train, 6 test). 3923.19 seconds of data (3202.19 seconds of training data).
A random selection of music files from the RWC Genre Database was used. [1]
Class 3 (Speech): 5260 files (4227 train, 1033 test). 9228 seconds of data (7383.39 seconds of training
data). A random selection of spoken digits from the TIDIGITS portions of the AURORA database.
We divided the entire database into separate training and testing sets. 80% of the data was used for
training, and 20% was used for testing. This procedure was repeated 20 times so we could get an estimate of
the variances in the test performance.
[1] We are grateful for the assistance of Masataka GOTO < > in acquiring a portion of the music genre
data at the last minute for this workshop. Masataka Goto, Hiroki Hashiguchi, Takuichi Nishimura, and Ryuichi Oka: RWC Music
Database: Music Genre Database and Musical Instrument Sound Database, Proceedings of the 4th International Conference on
Music Information Retrieval (ISMIR 2003), October 2003.
Figure 4.13: Overview of the sound classification architecture: training and test samples are converted into MFCC, RSF, and amplitude-histogram features, which feed a learning classifier that outputs the class identity.
Features:
Three different feature sets were used to compare their relative performances. The first two feature sets
fall into the category of traditional features.
The first feature set consisted of mel-frequency cepstral coefficients (MFCCs). Thirteen MFCC coeffi-
cients were extracted from each frame. Seven frames were stacked together to get a 91 dimensional feature
vector which is then reduced to a 10 dimensional feature vector using linear discriminant analysis (LDA) [J.
Duchene and S. Leclercq, ”An Optimal Transformation for Discriminant Principal Component Analysis,”
IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 10, No 6, November 1988.] This is a
common approach in speech recognition.
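A minimal sketch of the frame stacking and LDA reduction is given below, assuming the 13 MFCCs per frame have already been computed by a standard front end; the regularization term and the eigen-decomposition route to the projection are our own implementation choices.

import numpy as np

def stack_frames(mfcc, context=7):
    # Stack `context` consecutive 13-dim MFCC frames into 91-dim vectors.
    n = mfcc.shape[0]
    return np.array([mfcc[i:i + context].ravel() for i in range(n - context + 1)])

def lda_projection(X, y, n_out=10, reg=1e-3):
    # Multi-class LDA: top n_out eigenvectors of Sw^-1 Sb.
    classes, d = np.unique(y), X.shape[1]
    mean = X.mean(axis=0)
    Sw, Sb = reg * np.eye(d), np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - mean, mc - mean)
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(evals.real)[::-1]
    return evecs[:, order[:n_out]].real       # e.g. a (91 x 10) projection matrix

# usage: features = stack_frames(mfcc_matrix) @ lda_projection(X_train, y_train)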
The second feature set consisted of fluctuation strength (ratio of variance to mean of the signal envelope),
statistics from the amplitude histogram (width, symmetry and skewness) and subband volume, and volume
distribution ratio. These features were designed by Sourabh Ravindran.
First divide the signal into 4 subbands (180-880, , 1, 3)
Do the following for each subband
1. Fluctuation Strength: Extract the envelope of the whole 1 second signal and compute the ratio of
variance to the mean of the envelope
2. Symmetry, Skewness, Width: Divide the signal into 100 msec frames with 80 msec overlap and for each of
these frames compute the amplitude in dB and plot the amplitude histogram. From the histogram calculate
symmetry, skewness and width.
3. Volume and Volume distribution ratio: Divide the signal into 20 msec frames and compute the volume
and volume distribution ratio for each frame. Then take the average across all frames.
Thus there are 6 features per subband and 24 features in total; all 24 features are used for testing and
training. There is no SVD, although creating a frame stack and doing dimensionality reduction seems the
better way to go.
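A rough per-subband sketch of these six features follows. The exact definitions of the histogram width, symmetry, and volume-distribution-ratio statistics are not spelled out above, so the formulas below are plausible stand-ins rather than the original ones.

import numpy as np

def subband_features(band, fs):
    env = np.abs(band)                                  # crude envelope
    fluct = env.var() / max(env.mean(), 1e-12)          # fluctuation strength

    win, hop = int(0.1 * fs), int(0.02 * fs)            # 100 ms frames, 80 ms overlap
    amps = np.array([10.0 * np.log10(np.mean(band[s:s + win] ** 2) + 1e-12)
                     for s in range(0, len(band) - win, hop)])
    width = amps.max() - amps.min()                     # histogram width (dB span)
    skew = np.mean(((amps - amps.mean()) / (amps.std() + 1e-12)) ** 3)
    symmetry = np.mean(amps > amps.mean()) - 0.5        # stand-in symmetry measure

    vwin = int(0.02 * fs)                               # 20 ms volume frames
    vols = np.array([np.sqrt(np.mean(band[s:s + vwin] ** 2))
                     for s in range(0, len(band) - vwin, vwin)])
    volume = vols.mean()
    vdr = np.mean(vols < 0.5 * vols.max())              # volume distribution ratio
    return np.array([fluct, symmetry, skew, width, volume, vdr])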
The third feature set consisted of computing the rate-scale-time-frequency representation of a signal
and collapsing the time dimension to obtain the rate-scale-frequency (RSF) representation. Singular value
decomposition was used to reduce the dimensionality of the feature set from a 5x23x128 cube to a 105-
dimensional vector. The features were extracted from the whole one second file.
Classification Structure:
For the first two feature sets, a Gaussian mixture model (GMM) was used to train and test the audio samples. A 6-component GMM was trained for each class of acoustic data using the Netlab package. Recognition was performed by comparing the likelihood of a frame of test data against each of the four models; the model with the highest likelihood was chosen as the winner.
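The decision rule can be sketched as follows; this uses scikit-learn's GaussianMixture in place of the Netlab (Matlab) package that was actually used at the workshop.

# One 6-component GMM per acoustic class; a test frame is assigned to the class
# whose model gives the highest log-likelihood. scikit-learn is a substitution
# for the Netlab (Matlab) package used in the actual experiment.
import numpy as np
from sklearn.mixture import GaussianMixture

def train_class_models(features_by_class, n_components=6, seed=0):
    """features_by_class: dict mapping class name -> (n_samples, n_dims) array."""
    return {c: GaussianMixture(n_components=n_components, random_state=seed).fit(X)
            for c, X in features_by_class.items()}

def classify(models, x):
    """Return the class whose GMM assigns the highest log-likelihood to frame x."""
    x = np.atleast_2d(x)
    scores = {c: m.score_samples(x).sum() for c, m in models.items()}
    return max(scores, key=scores.get)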
For the third feature set, a support vector machine (SVM) approach was used. We used the SVM toolbox for Matlab by Anton Schwaighofer [http://www.cis.tugraz.at/igi/aschwaig/software.html] to perform the classification.
Results:
During the workshop, only the second set of features was tested, using a GMM classifier. Twenty tests were performed. For each test, each sound file was randomly assigned to either the training or the testing set, such that the training set contained 80% of the data. Each file was then split into one-second-long segments for use in training and testing. The errors reported below were computed using these 1-second segments.
Over twenty separate iterations of the training/testing procedure, the classifier achieved 93.1% correct classification, with a variance of 1.45%. In one test, the errors were distributed as follows (true class is listed): Animals - 38 errors, Noise - 20 errors, Music - 27 errors, Speech - 11 errors. We suspect that the large number of errors in the animal class was due to the varied data in this class; in one-second chunks, much of this data probably looked like noise. (We did not preserve the full confusion matrix.)
4.4 Sound Localization
Leader André van Schaik
Participants: Milutin Stanacevic, André van Schaik
Air-coupled acoustic MEMS offer exciting opportunities for a wide range of applications for robust
sound detection, analysis, and recognition in noisy environments. The most important advance these sensors
offer is the potential for fabricating and utilizing miniature, low-power, and intelligent sensor elements and
arrays. In particular, MEMS make it possible for the first time to conceive of applications which employ
arrays of interacting micro-sensors, creating in effect spatially distributed sensory fields. To achieve this
potential, however, it is essential that these sensors be coupled to signal conditioning and processing circuitry
that can tolerate their inherent noise and environmental sensitivity, without sacrificing the unique advantages
of compactness and efficiency.
Several laboratories, traditionally involved with the Telluride Neuromorphic Engineering Workshop, are
currently focusing their efforts on developing a smart microphone, suitable for outdoor acoustic surveil-
lance on robotic vehicles. This smart microphone will incorporate MEMS sensors for acoustic sensing and
adaptive noise-reduction circuitry. These intelligent and noise robust interface capabilities will enable a
new class of small, effective air-coupled surveillance sensors, which will be small enough to be mounted
on future robots and will consume less power than current systems. By including silicon cochlea based
detection, classification, and localization processing, these sensors can perform end-to-end acoustic surveil-
lance. The resulting smart microphone technology will be very power efficient, enabling a networked array
of autonomous sensors that can be deployed in the field.
We envision such a sensory processing system to be fully integrated with sophisticated capabilities be-
yond the passive sound reception of typical microphones. Smart MEMS sensors may possess a wide range
of intelligent capabilities depending on the specific application, e.g., they may simply extract and transmit
elementary acoustic features (sound loudness, pitch, or location), or learn and perform high-level decisions
and recognition. To achieve these goals, we aim to develop and utilize novel technologies that can perform
these functions robustly, inexpensively, and at extremely low power. An equally important issue is the for-
mulation of algorithms that are intrinsically matched to the characteristic strengths and weaknesses of this
technology. In this report we present an implementation of one such algorithm, inspired by biology but adapted to the strengths and weaknesses of analog VLSI, for localizing sounds in the horizontal plane using two MEMS microphones.
In the sound localization group we tested two implementations of neuromorphic sound localization systems. The first system, designed by Milutin Stanacevic and Gert Cauwenberghs at Johns Hopkins University, is inspired by the Reichardt elementary motion detectors for visual motion detection in that it uses spatio-temporal gradients. The second system, designed by André van Schaik at The University of Sydney, uses
two matched silicon cochleae with 32 frequency bands each. The time difference between the arrival of the
signal at the two microphones is estimated within each frequency band.
Spatio-temporal gradient (by Milutin Stanacevic)
We designed a mixed-signal architecture for estimating the 3-D direction cosines of a broadband traveling
wave impinging on an array of four sensors. The direction of sound propagation can be inferred directly
from sensing spatial and temporal gradients of the wave signal on a sub-wavelength scale. This principle is
exploited in biology and implemented in biomimetic MEMS systems. The direction cosines of the source are
obtained through least-squares adaptation on the derivative signals. Least-squares adaptive cancellation of
common-mode leakthrough and correlated double sampling in finite-difference derivative estimation reduce
common-mode offsets and 1/f noise for increased differential sensitivity. The architecture was implemented in a 0.5 µm CMOS technology. A printed circuit board (PCB) containing the acoustic localizer chip, biasing circuits, clock oscillator and microphone array was designed and used for the localization experiments.
To quantify the performance of gradient flow bearing estimation, we used an experimental setup with one directional source in a reverberant room, shown in Figure 4.14. We used four omnidirectional miniature microphones (Knowles IM-3268) as the sensor array. The effective distance between opposite microphones in the array is 17 mm. The sound source was a bandlimited Gaussian signal presented through a loudspeaker. The sampling frequency of the system was 16 kHz. The distance between the loudspeaker and the microphone array was approximately 2 m. The experiments were performed for various ranges of bearing angles. The bearing angle estimate is obtained by taking the arctan of the 8-bit digital estimates of the time delays output by the chip. The data was presented for 10 seconds, and 10 bearing estimates were obtained for each angle. Figure 4.15 shows the mean and standard deviation of the bearing angle estimates when the loudspeaker was moved from 30° to 80° in increments of 10°. The dimensions of the array prevent precise relative positioning of the microphone array and loudspeaker; calibration can be performed using simple geometric relations, and the bearing angle estimates can be corrected for the errors in positioning. The estimated accuracy in a mildly reverberant environment is around 4°.
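As a rough sketch of this bearing computation, assuming the chip reports one delay estimate per orthogonal axis of the square microphone array (the chip's exact register format is not reproduced here):

# Sketch of turning the chip's delay estimates into a bearing angle, assuming
# the two reported delays correspond to the two orthogonal axes of the
# 4-microphone array (an assumption for illustration only).
import numpy as np

def bearing_from_delays(tau_x, tau_y):
    """Bearing angle (degrees) from time-delay estimates along the x and y axes."""
    return np.degrees(np.arctan2(tau_y, tau_x))

def summarize_estimates(delays_x, delays_y):
    """Mean and standard deviation over repeated 1-second estimates (cf. Fig. 4.15)."""
    angles = bearing_from_delays(np.asarray(delays_x), np.asarray(delays_y))
    return angles.mean(), angles.std()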
In the case of multiple sources, gradient flow converts the problem of separating unknown delayed mix-
tures of sources, from traveling waves impinging on an array of sensors, into a simpler problem of separating
unknown instantaneous mixtures of the time-differentiated sources, obtained by acquiring or computing
spatial and temporal derivatives on the array. The linear coefficients in the instantaneous mixture directly
represent the delays, which in turn determine the direction angles of the sources. This formulation is attractive, since it allows separating and localizing broadband waves using standard tools of independent component analysis (ICA), yielding the sources along with their direction angles. We implemented a static ICA algorithm in a mixed-signal chip that interfaces with the gradient flow localization chip to form a system for separation and localization of multiple sources. The initial work on building this system has been done.
Cochlea based ITD estimation (by André van Schaik)
Humans rely heavily on the Interaural Time Difference (ITD) for localization of sounds in the horizontal
plane. When a sound source is in-line with the axis through both ears, sound will reach the furthest ear with
a certain delay after reaching the closest ear. To a first approximation, ignoring diffraction effects around the
head, this time delay is equal to the distance between the ears divided by the speed of sound. On the other
hand, if the sound source is straight ahead or behind the listener, it will reach both ears at the same time. In
between, the ITD varies as the sine of the angle of incidence of the sound.
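A worked form of this relation, with illustrative values for the ear spacing and the speed of sound (both are assumptions, not measurements from this project):

# Worked example of the far-field relation described above: ITD = d * sin(theta) / c.
# The ear spacing d and speed of sound c are illustrative values.
import numpy as np

def itd(theta_deg, d=0.2, c=343.0):
    """Interaural time difference in seconds for incidence angle theta (degrees)."""
    return d * np.sin(np.radians(theta_deg)) / c

print(itd(90))   # ~5.8e-4 s for a source in line with the ears
print(itd(0))    # 0 s for a source straight ahead or behind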
Figure 4.14: Experimental setup
Figure 4.15: Mean value and variance of the estimated angle (in degrees) as the bearing angle is swept from 30° to 80°.
The most common method for determining the time difference between two signals in engineering is
to look for the delay at which there is a peak in the cross-correlation of the two signals. In biology, a
similar strategy is used, known as Jeffress' coincidence model. In the ear, sound captured by the ear-drum is transmitted to the cochlea via the middle-ear bones. The fluid-filled cochlea is divided into two parts by a flexible membrane, the basilar membrane, which has mechanical properties such that high-frequency
sound makes the start of the membrane vibrate most, whereas low-frequency sound vibrates the end most.
Inner Hair Cells on the basilar membrane transduce this vibration into a neural signal. For frequencies
below 1-2kHz, the spikes generated on the auditory nerve are phase-locked to the vibration of the basilar
membrane and therefore to the input signal. In Jeffress’ model, this phase-locking is used together with
neural delay-lines to extract the interaural time difference in each frequency band. Delayed spikes from one
ear are compared with the spikes from the other ear and coincidences are detected. The position along the
delay-line where the spikes coincide is a measure of the ITD. A hardware implementation of this model has been developed by Lazzaro. Such a hardware implementation requires delay lines whose maximum delay equals the largest expected time difference and whose minimum delay equals the required resolution. This has to be done at each cochlear output, which makes the model rather large.
An alternative approach uses the fact that a silicon cochlea itself not only functions as a cascade of filters,
but also as a delay line, since each filter adds a certain delay. Cross-correlation of the output of two cochleae,
one for each ear, will thus give us information about the ITD. However, the delays are proportional to the
inverse of the cut-off frequency of the filters and are therefore scaled exponentially. This makes obtaining an
actual ITD estimate from a silicon implementation of this algorithm rather tricky. Instead of the algorithms
discussed above, we have developed an algorithm that is adapted for aVLSI implementation. The algorithm
is illustrated in Figure 4.16. First, the left and right signals are digitized by detecting if they are above or
below zero. Next, the delay between the positive zero-crossing in both signals is detected and a pulse is
created with a width equal to this delay. Finally, a known constant current is integrated on a capacitor for the
duration of the pulse, so that the change in voltage is equal to the pulse width. A voltage proportional to the
average pulse width can be obtained by integrating over a fixed number of pulses. In our implementation,
separate pulses are created for a left-leading signal and for a right-leading signal. The left-leading pulses
increase the capacitor voltage, whereas the right-leading pulses decrease the capacitor voltage. Once a fixed
number of pulses has been counted, the capacitor voltage is read and reset to its initial value. Moreover,
the algorithm is not applied directly to the input signal, but pair-wise to the output of two cochlear models
containing 32 sections. Each cochlear section has a band-pass filter characteristic and the best-frequencies of
the 32 filters are scaled exponentially between 300Hz and 60Hz. Band-pass filtering increases the periodicity
of signals that the algorithm is applied to, which improves its performance.
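The signal-level idea of this algorithm can be sketched in software as follows; this is a simplification of Figure 4.16 for one band-pass section, not a model of the capacitor-based aVLSI circuit.

# Software rendering of the zero-crossing ITD idea: detect positive zero-crossings
# in each channel, measure the delay between corresponding crossings, and average
# a fixed number of them (positive means the left channel leads).
import numpy as np

def positive_zero_crossings(x):
    """Indices where the signal goes from below zero to at or above zero."""
    s = x >= 0
    return np.flatnonzero(~s[:-1] & s[1:]) + 1

def itd_estimate(left, right, fs, n_pulses=32):
    """Average signed delay (seconds) between matched left/right zero-crossings."""
    zl, zr = positive_zero_crossings(left), positive_zero_crossings(right)
    n = min(len(zl), len(zr), n_pulses)
    if n == 0:
        return 0.0                      # "inactive" section: no usable crossings
    delays = (zr[:n] - zl[:n]) / fs     # pair crossings in order (a simplification)
    return delays.mean()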
The hardware implementation of our algorithm uses two identical silicon cochleae with 32 sections each.
At each of the 32 sections the output of both cochleae is used to create digital pulses that are as wide as the
time delay between the two signals. This time delay is measured within each section and averaged over all
active sections in order to obtain a global estimate. Inactive sections are sections that do not contain enough
signal during the period over which the ITD is estimated and are therefore not included in the global estimate.
The total size of this implementation is 5 mm² in a 0.5 µm process, with 75% of the circuit area devoted to
the implementation of the capacitors. If the circuit were to operate at sound frequencies that humans use for
ITD detection, the capacitor sizes could easily be reduced by a factor 3, cutting the total circuit size in half.
In Telluride, we developed a small board which uses an Atmega 163 chip to establish an RS232 interface
between a laptop and the localization IC. A USB to serial converter cable is used to connect to the board and
also provides power. The board has been tested using the output of the laptop’s soundcard as its input. This
setup is shown in Figure 4.17.
Figure 4.16: The localization algorithm
Figure 4.17: The localization board
Figure 4.18: Localization results. The plot shows the localizer output (arbitrary units) versus ITD, over an ITD range of approximately ±0.1 ms.
This board was successfully tested in Telluride, as shown in Figure 4.18, which plots the output of the localizer chip captured on the laptop for a number of ITDs. For each ITD, 30 estimates (one per second) were made. The figure shows the mean and standard deviation of the estimates.
4.5 AER EAR
Leader Shih-Chii Liu
Participants: Shih-Chii Liu, André van Schaik
We tested the newly fabricated AER cochlea chip. This chip contains two matched cochleae (the same as on the sound localizer chip), circuits to generate events from the cochlear outputs, and the arbitration circuits. The circuits are functional, but we encountered several problems. First, the wrong output pads were used on the scanned cochlear outputs, so we were unable to monitor them directly. Second, the gain of the filter generating the currents to the event-generating circuits in each pixel was too high, so the pixels generated events even when no input was presented to the cochlea. However, the AER circuits appear to be functional: we were able to tune the biases so that events are generated in response to cochlear input, although the biases are very sensitive. Figure 4.19 shows the AER events in response to 300 Hz sinewave inputs presented to the first or the second cochlea. The errors discovered will be fixed in a future version of this chip.
Figure 4.19: AER outputs (REQ and POL traces) for: (a) a 300 Hz sinusoid input to the first cochlea; (b) a 300 Hz sinusoid input to the second cochlea.

Chapter 5
The Configurable Neuromorphic Systems
Project Group

Leader Kwabena Boahen
5.1 EAER
Leader Richard Reeve
Participants: Richard Reeve, Richard Blum
Following on from discussions last year about extensions to AER, a board was built earlier in the summer
on which to implement a trial version of a possible Extended Address Event Protocol. The protocol itself is
described in the 2002 Telluride Workshop report, but the intention of it was to allow multiple packet types
to be sent over the same wires in a principled fashion, whilst maximising the speed of the transmission of
spikes. The implementation also tested out a three wire bit-serial communication protocol, and a true serial
protocol piggybacking on the RS232 and USB protocols.
Bit serial
The bit serial protocol is also described in the 2002 Workshop report, and is an asynchronous protocol with
2 request lines (Request 0 and 1) and 1 acknowledge line. Basic communication was established over it, but
since it had to be implemented in software on the 16MHz microcontroller, it turned out to be very slow. In
order to use this in practice it would have to be implemented in hardware, for instance on an FPGA.
RS232 and USB
Fast bi-directional communication was established via serial and USB with a computer and packets were
routed successfully by the board.
Analogue I/O
The goal of the analogue section was to integrate the analogue inputs of the ATMEGA128 into the EAER
framework and use it to generate Extended Address Events, and also to use a digital-to-analogue converter
(Texas Instruments DAC8534) as a target for address events.
Figure 5.1: Input (ADC) and output (DAC) waveforms, demonstrating the ability of the EAER analog protocol to track a time-varying signal. The input signal is at 300 Hz.
As a demonstration, it was used to sample a time-varying waveform, and the readings were routed via EAER to the digital-to-analogue converter. As shown in Fig. 5.1, the DAC output is recognizable as a copy of the input, although there is significant latency in the tracking of the 300 Hz input sine wave; this is largely due to the sampling time of the onboard ADCs, which contributes well over 90% of the latency.
Conclusions
The EAER protocol worked successfully in a very limited context, but needs to be sped up by a factor of about 1000 to run as fast as current state-of-the-art implementations. This is unsurprising, however, as it was intended only as a proof of concept on a low-speed microcontroller. The board and the libraries created for it will nevertheless remain useful for small (probably robot-based) projects in the future.
5.2 AER competitive network of Integrate-and-Fire neurons
Leader Elisabetta Chicca
Participants: Xiuxia Du, Giacomo Indiveri, Guy Rachmuth
Introduction
The goal of this project was to explore the behaviour of a competitive network of Integrate–and–Fire (I&F)
neurons implemented on an analog VLSI chip that we brought to the workshop. The network is composed
of a ring of excitatory neurons that project their output to a global inhibitory neuron, which in turn inhibits the ring. The excitatory neurons have first- and second-neighbour lateral connections in both directions.
The chip was implemented using a standard CMOS technology (AMS 0.8 µm process) and contains AER receiver and transmitter (arbiter) circuitry, 32 externally addressable leaky I&F neurons, 512 externally addressable adaptive synapses (14 excitatory and 2 inhibitory per neuron), and 192 internally connected synapses (lateral excitation to nearest and second-nearest neighbours, local excitation to the inhibitory neuron, and global inhibition to the excitatory neurons).
We used the PCI–AER Board developed by Vittorio Dante at ISS, Rome and tested in previous Telluride
workshops (see Telluride reports 2002 and 2001) to stimulate the transceiver chip and to monitor its activity
(see Fig. 5.2).
Software development for the PCI–AER board
We extended the work that Adrian Whatley is carrying out at the Institute of Neuroinformatics in Zurich, developing a Linux driver for the PCI–AER board. We used a library of C functions (also under development) based on this driver to write two routines, one to monitor the chip activity and one to stimulate it. The read monitor routine was used to read the spike times and addresses from the chip. The write sequencer routine was used to send address events to the chip. The output of the read monitor was plotted in Matlab as a raster plot. In order to study feedforward activity in the network, we wrote a Matlab routine to generate Poisson-distributed input spike trains that were sent to the chip via the write sequencer. The AER board receives a vector of neuron addresses and time delays, and relays these to the neurons on the chip.
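The stimulus-generation step can be sketched as follows. This is a Python rendering of the idea, whereas the workshop used an equivalent Matlab routine; the microsecond delay format is an assumption.

# Sketch of generating a Poisson input spike train for the write sequencer:
# inter-spike intervals are drawn from an exponential distribution and paired
# with the target synapse address.
import numpy as np

def poisson_stimulus(address, rate_hz, duration_s, rng=np.random.default_rng(0)):
    """Return a list of (address, inter_spike_interval_us) pairs at the given mean rate."""
    events = []
    t = 0.0
    while True:
        isi = rng.exponential(1.0 / rate_hz)
        t += isi
        if t > duration_s:
            break
        events.append((address, int(round(isi * 1e6))))   # delay in microseconds
    return events

# Two competing stimuli, one four times stronger than the other (as in the experiment):
# weak   = poisson_stimulus(address=5,  rate_hz=50,  duration_s=2.0)
# strong = poisson_stimulus(address=20, rate_hz=200, duration_s=2.0)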
Competitive behaviour
A simple behaviour that can be demonstrated with this network is competition between two neurons receiving two different stimulus trains. Both stimuli were modeled as Poisson spike trains with a set mean firing rate; one train had an average firing rate four times higher than the other. In the simplest case, all local synaptic connections were set to zero and the output activity of the neurons simply reflected the input they received, as expected.
Next, lateral excitatory connections between first neighbours were activated. The lateral excitation is a global parameter, and was set to a low value that allowed excitation of only a limited neighbourhood of the stimulated neurons. In this case, the neuron receiving the stronger input was able to spread its activity to a larger fraction of its neighbours.
In the third experiment, global inhibition was included by activating the excitatory synaptic connections to the inhibitory neuron and the inhibitory synapses to the excitatory neurons. When the inhibition is set to a low value, a soft winner-take-all behaviour emerges: the neuron with the lower input activity still fires, but at a lower rate than without inhibition. To obtain a hard winner-take-all behaviour, where the neuron that receives the strongest input suppresses the activity of all other neurons, one should increase either the excitatory synaptic weight to the inhibitory neuron or the inhibitory synaptic weight to the excitatory neurons.
Traveling waves of activity
To test the read monitor function we set the bias voltages of the chip to produce traveling waves of activity.
This behaviour (analogous to some cortical activity such as the theta rhythm) can be induced by injecting a constant DC current into all the neurons, activating the first- and second-neighbour connections on only one side
of the neurons, and the global inhibition. A raster plot of the activity of the network in this configuration is
shown in Fig. 5.3.
5.3 Constructing Spatiotemporal Filters with a Reconfigurable Neural Array
Leader Ralph Etienne-Cummings
Participants: R. Jacob Vogelstein, Ralf Philipp, Paul Merolla, Steven Kalik
Introduction
A method for constructing visual-motion detectors using oriented spatiotemporal neural filters has previously
been proposed and implemented in hardware for a single pixel [1]. Our group worked towards a large-scale
hardware implementation of this system using a model of the octopus retina [2] as the input element and a
reconfigurable array of integrate-and-fire neurons [3] as the processing element. We were able to construct
and verify the functionality of bandpass spatial filters using a recurrent network architecture, in addition
to proposing a number of neural circuits that could potentially perform bandpass temporal filtering. When
combined, these filters would emulate the processing of the middle temporal visual area (MT) in the primate brain.
Theory of Spatiotemporal Feature Extraction
A simple 1-D spatiotemporal filter can be constructed by convolving the output of a bandpass spatial filter
with the output from a bandpass temporal filter. By varying the passband of each filter, one can construct an
element selective for a region of space on the ωx-ωt plane, where ωx is the spatial frequency and ωt is the
temporal frequency. Constant 1-D motion of a point can be represented as a line through the origin of this
plane with slope v0 = ωt/ωx, where v0 is the velocity of the point. As explained in [1],
Oriented spatiotemporal filters tuned to this velocity can be easily constructed using separable quadrature
pairs of spatial and temporal filters tuned to ωt0 and ωx0, respectively, where the preferred velocity
v0 = ωt0/ωx0. The π/2 phase relationship between the filters allows them to be combined such that
they cancel in opposite quadrants, leaving the desired unseparable oriented filter. [See Figure 5.4a,b]
Since each filter may be broadly tuned, it is possible to create a large number of filters spanning the ωx-ωt plane and use the population response to estimate the true spatiotemporal orientation of a moving object (Fig. 5.5).
Mathematically, the oriented spatiotemporal filters illustrated in Figures 5.4 and 5.5 can be described by the following equations [1]:
Figure 5.2: Schematic diagram of the setup. The PCI–AER Board is connected to a workstation via the PCI Bus. The chip AER input and output are connected to the PCI–AER board.
Figure 5.3: Raster plot of the activity of the network showing traveling waves (neuron index versus time in ms).
Figure 5.4: (a) Even and (b) odd filters.
Figure 5.5: Population response from a filter array.
Figure 5.6: (a) Pixel schematic for the Octopus Retina. (b) System architecture for the IFAT: an MCU on the PC board looks up each sender address in a RAM table that returns the receiver address, synapse index, weight magnitude and weight polarity.
where the “e” subscript indicates an even filter and the “o” subscript indicates an odd filter:
Righte(vx, x, t) = ge(t)ge(x) + go(t)go(x)
Righto(vx, x, t) = ge(t)go(x) + go(t)ge(x)
Lefte(vx, x, t) = ge(t)ge(x) − go(t)go(x)
Lefto(vx, x, t) = ge(t)go(x) − go(t)ge(x)          (5.1)
The candidate functions g(t) and g(x) need only be matched, bandlimited filters with quadrature counter-
parts.
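A small numerical sketch of Eq. 5.1, using Gabor-shaped kernels as one possible (assumed) choice of matched, band-limited quadrature pairs:

# Sketch of Eq. 5.1: building the right/left oriented spatiotemporal filters from
# separable quadrature (even/odd) pairs. Gabor-shaped kernels are an illustrative
# choice; the parameter values are arbitrary.
import numpy as np

def quadrature_pair(n, freq, sigma):
    """Even (cosine) and odd (sine) Gabor kernels of length n."""
    u = np.arange(n) - (n - 1) / 2.0
    env = np.exp(-u**2 / (2 * sigma**2))
    return env * np.cos(2 * np.pi * freq * u), env * np.sin(2 * np.pi * freq * u)

ge_t, go_t = quadrature_pair(n=32, freq=0.10, sigma=6.0)   # temporal pair, tuned to wt0
ge_x, go_x = quadrature_pair(n=32, freq=0.15, sigma=5.0)   # spatial pair, tuned to wx0

# Separable products combined with +/- signs give the oriented filters
# (t along rows, x along columns); the preferred velocity is v0 = wt0 / wx0.
right_e = np.outer(ge_t, ge_x) + np.outer(go_t, go_x)
right_o = np.outer(ge_t, go_x) + np.outer(go_t, ge_x)
left_e  = np.outer(ge_t, ge_x) - np.outer(go_t, go_x)
left_o  = np.outer(ge_t, go_x) - np.outer(go_t, ge_x)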
Hardware
Octopus Retina
Our input element was a silicon retina that models the input-output relationship of the octopus retina [2].
This aVLSI microchip combines photosensors with integrate-and-fire neurons whose discharge rate is pro-
portional to the light intensity (Figure 5.6a). An array of 80 × 60 of these pixels was designed and
fabricated in a 0.6µm CMOS process. To communicate off-chip, the Octopus Retina uses Address-Event
Representation (AER), a communication scheme originally proposed in 1994 [4] that has become a standard
for neuromorphic microchips. The Octopus Retina produces events at a mean rate of 200 kHz under uni-
form indoor light. Prior to the 2003 Telluride Neuromorphic Workshop, a PCB for the Octopus Retina was
fabricated and tested to ensure easy interoperability with other AER-based microchips.
Integrate-and-Fire Array Transceiver (IFAT)
We designed and tested neural circuits to implement spatial and temporal filters using a reconfigurable array
of integrate-and-fire neurons known as the IFAT [3]. The IFAT system was originally designed by David
Goldberg for rapid prototyping of large-scale neural circuits. It consists of an array of 1,024 integrate-and-
fire neurons, and instead of including hardwired connections on-chip, “virtual synapses” are implemented in
a look-up table (LUT) stored in RAM (Figure 5.6b). Using a software interface, it is possible to program any network topology, with inputs to and outputs from the network communicated over an AER bus. Although events could be streamed in real-time from the Octopus Retina to the IFAT, we chose to process events off-line so that we could filter the same input stream with multiple different neural architectures.
Figure 5.7: (a) Spatial filtering and (b) temporal filtering neural circuits.
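The LUT-based "virtual synapse" idea can be sketched in software as follows; the dictionary format and parameter values are illustrative, not the actual IFAT memory layout.

# Minimal software analogue of the IFAT's RAM look-up table: each incoming event
# address indexes a list of (target neuron, weight) entries; target membrane
# potentials are updated, and neurons that cross threshold emit new events.
from collections import defaultdict

N_NEURONS, THRESHOLD = 1024, 1.0
lut = defaultdict(list)          # sender address -> [(receiver, weight), ...]
lut[0] = [(1, 0.6), (2, -0.3)]   # example "virtual synapses"

v = [0.0] * N_NEURONS            # membrane potentials

def process_event(addr):
    """Deliver one address-event through the LUT; return any output events."""
    out = []
    for receiver, weight in lut[addr]:
        v[receiver] += weight
        if v[receiver] >= THRESHOLD:
            v[receiver] = 0.0            # fire and reset
            out.append(receiver)
    return out

# for addr, timestamp in retina_event_stream:   # hypothetical input stream
#     output_events = process_event(addr)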
Methods
Visual Stimuli
Matlab code for generating visual stimuli for electrophysiology experiments was designed by Steven Ka-
lik prior to the 2003 Telluride Neuromorphic Workshop. We used this software and generated a series of
drifting sinusoidal gratings at various orientations, spatial frequencies, and temporal frequencies. The Octo-
pus Retina was placed approximately 12 inches from an LCD monitor displaying the drifting gratings and
1,000,000 events were collected. On average, this corresponded to a few seconds of the visual stimulus, and
the resulting file (containing a list of events generated by the Octopus Retina and their respective timestamps)
could be “played back” using a Matlab program that was developed at the Workshop for this purpose. In
order to accelerate the neural processing, the data collected from the Octopus Retina was downsampled by
a factor of 8 in the x-direction and 6 in the y-direction to form a 10 × 10 pixel array to be processed by the
IFAT.
Neural Circuits
We implemented spatial filters using a simple recurrent architecture illustrated in Figure 5.7a. Each pixel
in the downsampled retinal array projects to three neurons on the IFAT, making an excitatory connection on
the center neuron and two inhibitory connections on each of the two nearest neighbors. Additionally, each
neuron is given a strong excitatory self-feedback connection and one inhibitory connection to each neighbor.
The resulting network can perform sharp spatial filtering in any direction, depending on the orientation of
the inhibitory inputs from the retina and inhibitory outputs from the center neurons. Similar architectures
were designed for wider passbands by increasing the number of excitatory connections from the retina and
moving inhibitory connections further from the center.
Temporal filtering requires detecting changes in the spike frequency at individual pixels. To achieve this,
we designed the two-neuron circuit illustrated in Figure 5.7b. In this architecture, each retinal output projects
to two neurons on the IFAT and makes excitatory synapses onto each. The strength of the connection to the
Figure 5.8: (a) Downsampled output from the Octopus Retina. (b) Spatially-filtered output from the IFAT.
first neuron is stronger than that to the second, but the second neuron has an inhibitory synapse onto the first.
The second neuron also has an excitatory feedback connection to itself. Finally, the first neuron receives a
tonic excitatory input while the second neuron has a constant leak current. If a retinal cell starts spiking after
a brief quiescent period, it will initially activate the first neuron, and subsequently activate the second neuron.
If it continues to spike, this input along with the second neuron’s positive feedback will cause the second
neuron to spike at a fast rate, consistently inhibiting the first neuron. However, if the retina is bursting, as it
would during a flashing or quickly moving stimulus, the initial spikes will be followed by a quiescent period
during which the first cell can charge up and the second cell can discharge. Thus, the optimal stimulus for
these temporal filters is one which consists of bursts of activity.
Results
One frame from the output of the Octopus Retina while viewing a vertical drifting grating is shown in
Figure 5.8a, along with the spatially-filtered version produced by the IFAT (Figure 5.8b). The resulting
image is much sharper than the input, demonstrating the efficacy of the neural spatial filters. We have also
created a series of movies using various spatial and temporal frequencies of the visual stimulus along with
multiple filtering architectures on the IFAT, but we have not yet quantified the output. Additionally, we have
tested the temporal filters but have not yet achieved consistent results. There are many parameters in the
temporal filter circuit, and we anticipate success after exploring this space more thoroughly.
Future Work
In the future, we would like to finish testing the various neural filter architectures and more thoroughly
characterize the response characteristics of each filter. Once that is complete and we have settled on the
optimal neural circuits, we will use all of the available neurons on the IFAT to process a 32 × 32 image.
The same output file from the Octopus Retina can then be processed by multiple spatial filters with different
orientations, and this output can be further processed by multiple temporal filters, simulating an array of
spatiotemporal filters as in Figure 5.5. By measuring the output from each of these filters and computing the
mean response (or some other statistic), a population code can be obtained. This final output should provide
information about the true spatiotemporal frequency of the visual stimulus. We anticipate continuing this
work in the weeks and months following the 2003 Telluride Neuromorphic Workshop.

Bibliography
[1] R. Etienne-Cummings, J. Van der Spiegel, and P. Mueller, “Hardware implementation of a visual-motion
pixel using oriented spatiotemporal neural filters,” IEEE Transactions on Circuits and Systems II, vol. 46,
pp. 1121–1136, September 1999.
[2] E. Culurciello, R. Etienne-Cummings, and K. A. Boahen, “A biomorphic digital image sensor,” IEEE
Journal of Solid-State Circuits, vol. 38, pp. 281–294, February 2003.
[3] D. H. Goldberg, G. Cauwenberghs, and A. G. Andreou, “Probabilistic synaptic weighting in a reconfig-
urable network of vlsi integrate-and-fire neurons,” Neural Networks, vol. 14, pp. 781–793, July 2001.
[4] M. Mahowald, An analog VLSI system for stereoscopic vision. Boston: Kluwer Academic Publishers,
1994.
5.4 An AER address remapper
Leader Kwabena Boahen
Participants: Paul Merolla, Bernabe Linares-Barranco, Teresa Serrano-Gotarredona
We have built an address remapper that operates in real time using an FPGA. The FPGA is able to detect different cell types and shift each type by a pre-specified amount. Although this particular remapper function is limited to translational shifts, we believe that the libraries developed in this workgroup will be invaluable for more complex AER processing (the files will be posted on [2]).
We tested the remapper using a retinomorphic chip [1] as the input, and a receiver chip as the output (the receiver chip integrates incoming spike trains and displays them as analog values on a monitor in real time).
Figure 5.9 shows a block diagram of the system.
The specifics of the remapper system are shown in Figure 5.10. We first convert the word-serial AER output of the retina into a dual-rail format. This conversion is important because, typically, FPGAs are not designed to implement asynchronous circuits. The dual-rail representation allows us to check the validity of the data before processing it, which is crucial because FPGAs can have considerable routing delays between logic blocks. After the cell type is identified by a flag, an appropriate offset is fed into a dual-rail adder, along with the original address. Subsequently, the new address(es) are sent via a dual-rail to serial converter.
We explored two tasks to test our AER remapper. The first task was to differentiate between sustain plus
(s+), sustain minus (s-), transient plus (t+), and transient minus (t-) cell-types, and shift each cell-type by a
different amount. Each cell type is organized in the retina address space as shown in Figure 5.11.
Figure 5.12 shows the retinal response to a bright spot in a color-coded representation (green represents
s+, red s-, yellow t+, blue t-). We can clearly see that the off-center response (red) and on-center response
Figure 5.9: Block Diagram of the System
Figure 5.10: Internal block diagram of the FPGA. The bundled serial input is converted to a dual-rail representation (Ser2DR) and latched on dual-rail X and Y buses; a cell-type detector selects an offset, which a dual-rail adder combines with the original address, and the result is multiplexed and converted back to the single-word serial output protocol.
Figure 5.11: Diagram of Organization of Addresses in the Retina
Figure 5.12: Image of shifted cell types
(green) are displaced from each other (they should be exactly lined up, but because we have activated the
AER remapper, they are relatively shifted).
For the second task, we are going to look for a remapping that emphasizes vertical edges moving hori-
zontally from right to left. This kind of moving edge is going to cause simultaneous firing of macropixels
situated in the same horizontal row, and in the spatial sequence (from left to right): s+, t+, t-, s-. In order to
emphasize the simultaneous occurrence of this spatial sequence of events, we are going to map this spatial
sequence into the same address at the receiver. Whenever a spike comes out of the retina, we first have to
detect which kind of cell has fired to compute the appropriate remapped address. In order to do that, we
define four flag signals: sp, sm, tp, tm. Signal sp is ’1’ only when an output address corresponds to that of
a positive sustained cell. Signal sm is ’1’ only when the spiking address corresponds to that of a negative
sustained cell. The same is true for tp with the positive transient cells and for tm with the negative transient
cells. To detect which kind of cell is firing, we need only to look at the two least significant bits of the spiking
address. This can be done with the following combinational logic functions:
sp = (x(1) AND NOT(x(0)) AND NOT(y(0))) OR (x(1) AND NOT(x(0)) AND y(0))
sm = (NOT(x(1)) AND x(0) AND NOT(y(0))) OR (x(1) AND x(0) AND y(0))
tp = x(1) AND NOT(x(0)) AND y(1) AND NOT(y(0))
tm = NOT(x(1)) AND x(0) AND NOT(y(1)) AND y(0)
where x(1), x(0) are the two least significant bits of the retina output X address, and y(1), y(0) are the two least significant bits of the retina output Y address. The remapped X address can then be computed as:
Xremap = (sp AND (Xs+3)) OR (tp AND (Xs+2)) OR (tm AND (Xs+1)) OR (sm AND Xs)
where Xs is a subsampled address obtained by removing the two least significant bits from the incoming X address; Xs+3, Xs+2 and Xs+1 are the results of adding 3, 2 and 1 to Xs, respectively. The remapped Y address is
Yremap = Ys
which is just the subsampled address obtained by removing the two least significant bits from the incoming Y address.
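The remapping can be sketched in software as follows; the table mapping the address LSBs to cell types is an assumed encoding for illustration, since it depends on the retina's exact address layout.

# Sketch of the remapping described above: the least-significant address bits
# identify the cell type, the type selects an offset (s+ -> 3, t+ -> 2, t- -> 1,
# s- -> 0), and the subsampled address is shifted by that offset so the horizontal
# sequence s+, t+, t-, s- collapses onto one receiver address.
CELL_TYPE = {(0, 0): "s+", (0, 1): "t+", (1, 0): "t-", (1, 1): "s-"}  # assumed encoding
OFFSET    = {"s+": 3, "t+": 2, "t-": 1, "s-": 0}

def remap(x, y):
    """Return the remapped (X, Y) receiver address for a retina event at (x, y)."""
    cell = CELL_TYPE[(x & 1, (x >> 1) & 1)]     # assumed: type encoded in x's two LSBs
    xs, ys = x >> 2, y >> 2                     # drop the two least-significant bits
    return xs + OFFSET[cell], ys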
The motion remapping works as designed, but the algorithm is flawed. Because the motion-dependent signal is carried by the transient cells, the motion response is swamped by the sustained cells by a 4:1 ratio.
Figure 5.13: Architecture Diagram
Therefore, we did not get any motion selectivity above the noise level. What we need for a future system is an address-filter circuit that only maps an equal number of sustained and transient cells.
References:
[1] B. E. Shi and K. A. Boahen, "Competitively Coupled Orientation Selective Cellular Neural Networks," IEEE Transactions on Circuits and Systems I, vol. 49, no. 3, pp. 388-394, 2002.
[2] http://www.neuroengineering.upenn.edu/boahen/index.htm
5.5 Information Transfer in AER
Participants: Tobi Delbruck, Chiara Bartolozzi, Paschalis Veskos, Christy Rogers
Motivation
There are now several different silicon retina chips within the workshop. Due to lack of standardization,
each of them outputs spike data in a slightly different format. The information conveyed in the spikes may also differ: the output from a first-generation chip is proportional to light intensity, while the output from a newer version encodes temporal and spatial derivatives of the stimulus. The aim of this project was to create a Matlab
tool that would allow easy analysis and direct comparison of data from a variety of retinas. Thus, the need
for modularity was clear from the start. Due to the very large size of the data files, a secondary aim was to
implement a fast and memory-efficient algorithm for data processing.
Reconstruction Modular Architecture
The main branch pursued a general, data- and retina- independent architecture that would allow data from
several sources to be processed. Easy expandability was another goal. The block diagram below shows the designed structure. All of the blocks were implemented and tested with the octopus retina.
The ’Master’ module serves as the controller of the overall process. It reads in a user-supplied specifica-
tion file that contains the parameters of the operation, such as retina dimensions in pixels, data file format,
reconstruction kernel to be used etc. This data is then passed on to the relevant modules when they are
called. Next, the ’input’ module is called to read the data from the disk and store it in memory. Any prepro-
cessing, such as sorting, neuron cell differentiation, binning etc, is also performed here. The ’master’ calls
the ’kernel’ module to perform the actual image reconstruction. Data is then passed to the ’output’ module
that performs any post-processing, such as brightness normalization. In the last step the ’graphing’ module outputs the final reconstructed images or movie either to the screen or to a disk file.
Depending on the means by which the data was captured from the retina (particular data analyzer, PCI-
AER board etc) the relevant input module can be used. The same holds for retinas having different output
schemes or dimensions. This way code is reused: if for example data from a given retina is captured by two
different logic analyzers, only the specification file needs to be changed to dictate which input module needs
to be used to parse the different data files. The same holds for using different reconstruction kernels: the
same module can be used for different retinas with the same cells.
A memory-efficient algorithm
The second branch implements a ’sliding window’ along the data file. In light intensity sensitive chips,
for each neuron that enters the window, its brightness value is incremented in the accumulator matrix and
for each one that is dropped from the window, it is decremented. This way the brightness of each pixel is
proportional to the number of spikes from its corresponding neuron. The memory footprint of the program is
very small, as only the data in the window is kept in memory, irrespective of the size of the data file. For the
new-generation chips, which implement more sophisticated computation, the neuron brightness will change according to more complex functions (kernels).
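A minimal sketch of this sliding-window accumulation (the event tuple format is an assumption):

# Sketch of the 'sliding window' reconstruction: as an event enters the window the
# corresponding pixel of the accumulator is incremented, and as it leaves the
# window it is decremented, so brightness tracks each neuron's recent spike count.
from collections import deque
import numpy as np

def sliding_window_frames(events, width, height, window_s=0.05):
    """events: iterable of (x, y, timestamp_s). Yields one frame per event."""
    acc = np.zeros((height, width), dtype=np.int32)
    window = deque()
    for x, y, t in events:
        acc[y, x] += 1
        window.append((x, y, t))
        while window and window[0][2] < t - window_s:   # drop events that fell out
            ox, oy, _ = window.popleft()
            acc[oy, ox] -= 1
        yield acc.copy()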
The most important application of this structure will be the direct comparison of the performance of different retinas: the output of each retina is formatted into the same type of structure and then convolved with a kernel in order to reconstruct the image shown. The kernel will reflect the computation performed by each retina. Since the receptive field of a neuron can also represent the best stimulus for that neuron, one way to reconstruct the image could be to convolve the spike train (caused by the input) with the receptive field of the spiking neuron.
Future Work
The framework for a single interface between Matlab and each retina chip has been laid out. The remaining
portions that need to be implemented are input- and kernel- modules for chips other than the ”octopus.” With
a complete library of modules, for different data file formats and reconstruction kernels, a very large number
of retina experiments can be performed. Furthermore, the system is easily expandable with a minimum
amount of programming. For the second branch, the effect of different window types and exponential (or
otherwise) decay of old data could be explored.

Chapter 6
The Vision Chips Project Group
Project Leader Bertram Shi
The goal of the vision chips workgroup was to present a combination of lectures and projects to educate
attendees about the state of the art in neuromorphic VLSI vision chips and to give them hands-on experience
working with them. Neuromorphic vision chips are designed both as vision systems for biomimetic robots,
and as real-time hardware models of biological visual processing. These chips mimic various aspects of
biological visual systems, such as phototransduction, logarithmic encoding, spatial and temporal filtering,
motion detection and depth sensitivity. The workgroup was organized by Bert Shi and Chuck Higgins. In
total, there were seven lectures and three projects.
The seven lectures were:
• Tobi Delbruck - Phototransduction in Silicon
• Kwabena Boahen - A retinomorphic chip with four ganglion-cell types
• Andre Van Schaik - Marble Madness: Designing the Logitech Trackball Chip
• Ralph Etienne-Cummings - Focal-Plane Image Processing Using Computation on Read-Out
• Ning Qian - A Physiological Model of Perceptual Learning in Orientation Discrimination
• Bernabe Linares-Barranco - EU Project CAVIAR on Multi-layer AER Vision
• Duane Edgington - Detection of Visual Events in Underwater Video Using a Neuromorphic Saliency-
based Attention System
The participants of the vision chips workgroup also completed three projects:
• Vergence Control Using a Multi-chip Stereo Disparity System
• Robot navigation via motion parallax
• A Serial to Parallel AER Converter
6.1 Vergence Control with a Multi-chip Stereo Disparity System
Leader Bertram Shi (Hong Kong Univ. of Sci. and Tech.)
Participants: Ning Qian (Columbia Univ.), Xiuxia Du (Washington Univ. in St. Louis)
Binocular disparity is defined as the difference between the two retinal projections of a given point in space.
It has long been recognized that the brain uses binocular disparity to estimate the relative depths of objects
in the world with respect to the fixation point, a process known as stereoscopic depth perception or stere-
opsis. The neurophysiological mechanisms of stereopsis have been studied extensively over the past several
decades. A specific model, known as the disparity energy model, has emerged from recent quantitative re-
ceptive field (RF) mapping studies on binocular V1 cells in cats and monkeys. According to this model, the
left and right RFs of a binocular simple cell can be well described by two gabor functions, one for each eye.
The stimulus disparity is encoded by either a relative positional shift or phase shift or both between the two
gabor RFs. The outputs of such simple cells are then combined in a specific way to generate complex cell
responses that are reliably tuned to stimulus disparity.
In addition to depth perception, binocular disparity can also drive vergence eye movements. Specifically,
a far disparity around the fovea triggers a divergence eye movement, while a near disparity around fovea
generates a convergence eye movement. In both cases, the eye movement is in the direction that cancels the
foveal disparity and maintains binocular alignment of the fixation point.
In this project, we used the outputs from a multi-chip disparity energy system to generate vergence
movements to keep two silicon retinae fixated on a target as it moved in depth. The silicon retinae we used
were developed by Kareem Zaghloul (a participant in a previous Telluride workshop) and Kwabena Boahen
at the University of Pennsylvania. This project was a natural outgrowth of two projects from the 2002
Neuromorphic Engineering Workshop. In the first project, Eric Tsang and Yoichi Miyawaki demonstrated
the feasibility of a neuromorphic implementation of the disparity energy computation by combining the
outputs of two 1D vision sensors with Gabor-like receptive field profiles. In the second project, Bert Shi and
Kwabena Boahen demonstrated a successful aVLSI implementation of an Address Event Representation
transceiver chip containing 2D retinotopic arrays of neurons with Gabor spatial RFs. We used this chip to
compute the monocular RF profiles.
To simplify the setup, we decided to fix one silicon retina and turn the other. We mounted the movable
retina on a Lego platform (Fig. 6.1) that we built at the workshop. The vertical rotational axis was approx-
imately aligned with the lens mounted in front of the retina. A servo motor was attached to the right edge
of the platform such that when the motor turned in one direction, the retina board turned in the opposite
direction. By mounting the motor offset from the vertical rotational axis of the platform, we could use the
full 8 bit resolution of the motor to control the vergence angle over a range of approximately 15 degrees.
The visual stimulus was a black vertical bar on a white background. We adjusted the optical axes of the
two retina boards so that the bar projected to the central units of both left and right gabor chips. When the
bar stimulus was then moved in depth along the optical axis of the fixed retina, its projection in the movable
retina would deviate from the fovea, thus generating a binocular disparity between the two retinae.
The vergence control system (Fig. 6.2) was based upon a system for computing disparity selective com-
plex cell responses developed by Eric Tsang at the Hong Kong University of Science and Technology after
his return from the 2002 Telluride workshop. The outputs of the two silicon retinae are fed into two Gabor transceiver chips. The AER address filters select neurons on the left and right gabor chips whose spatial RF
Figure 6.1: A. The vergence system setup. The two circles with crosses indicate axes of rotation. The servomotor is
located at the right hand axis. When it rotates clockwise, the vergence shifts from near to far. B. The LEGO platform
holding the movable retina.
Figure 6.2: Block diagram of the vergence control system
profiles are centered in the two retinae. These monocular responses are combined to compute the output of complex cells tuned to near and far disparities. Due to the limitations of the current system, the near and far cell computations must be multiplexed in time.
Our goal was to use disparity energy responses to control the servo motor (and thus the retina rotation) to
keep the disparity between the left and right eyes at zero. To achieve this goal, we computed the difference
between the near and far cells’ responses (rnear − rfar). If the difference was positive, the stimulus disparity
should be near, and the servo motor was programmed to turn the retina inward (toward the fixed retina) by
a fixed small step; if the difference was negative, the retina was turned in the opposite direction by the same
small step. Through the continuous visual feedback, the movable retina could gradually eliminate the foveal
disparity and achieve binocular alignment of the bar stimulus on the two foveas. We demonstrated that this
simple control strategy worked well for relatively slow movement of the bar in depth. The speed of the
control system was limited by the simple control law used. We needed to keep the size of the step small to
maintain stability. However, this meant that the system responded slowly to large changes in disparity.
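The control rule amounts to the following sketch; the functions for reading the complex-cell responses and commanding the servo are hypothetical placeholders.

# Sketch of the simple closed-loop vergence rule: compare near- and far-tuned
# complex-cell responses and step the servo by a fixed small amount in the
# direction that reduces the foveal disparity.
STEP = 1                 # one servo count out of the 8-bit range
servo_position = 128     # illustrative starting command

def vergence_step(r_near, r_far):
    """Return the new servo command after one near/far comparison."""
    global servo_position
    if r_near > r_far:
        servo_position += STEP     # near disparity: converge (assumed sign convention)
    elif r_far > r_near:
        servo_position -= STEP     # far disparity: diverge
    servo_position = max(0, min(255, servo_position))
    return servo_position

# while tracking:
#     r_near, r_far = read_complex_cell_responses()   # hypothetical readout
#     command_servo(vergence_step(r_near, r_far))     # hypothetical motor call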
Considering the fact that we only used two complex cells tuned to the same retinal location, the system
worked surprisingly well. However, there is much room for improvement.
First, the disparity-triggered vergence eye movement in humans and monkeys appears to have a fast,
open-loop component that does not require continuous visual feedback. To achieve the same behavior in
our system, we need to estimate not only the sign but also the magnitude of the stimulus disparity and move
the retina to cancel the entire disparity before processing new visual inputs. We tried to implement a crude
version of this strategy but did not have time to debug the code. Obviously, more complex cells tuned to a
retinal location would be needed in order to accurately estimate the stimulus disparity at that location.
Second, since we only considered complex cells of a fixed scale, the system was only sensitive to a
limited range of disparity. Consequently, when the stimulus was moved in depth too fast, the disparity would
be too large to be correctly detected, and the servo system would fail. This problem could be solved in a few
ways. (1) Improve the speed of the system with a better control law, e.g. one where the corrective movement
is proportional to the disparity error. (2) Include complex cells tuned to a range of spatial scales in order to
increase the range of detectable disparities. It may not be sufficient to simply use complex cells at a fixed
large scale because the precision of disparity estimation will be poor. (3) Allow the system to actively search
for a new binocular alignment after losing it.
Third, the current system can only track a foveal stimulus. To enable tracking even when the stimulus is
off fovea, we obviously need to consider a 2D array of topographically arranged complex cells tuned to
different retinal locations. In the current implementation, where the complex cell responses are computed
sequentially, this will greatly slow down the motor response. A more promising method, which is currently
under development, remaps and merges the outputs of the left and right Gabor chips onto a single chip con-
taining a 2D array of squaring neurons, resulting in a 2D array of cells tuned to a single disparity. Retinotopic
arrays of neurons tuned to different disparities could be computed on separate chips.
Finally, it would be desirable to allow rotation of both retinae, and introduce more degrees of freedom for
each retina. This is necessary to accurately model biological eye movement systems, and will increase the
complexity of the motor control system tremendously. Within the current one-degree-of-freedom system, it
would also be interesting to explore how the vergence movement may help improve the accuracy of stimulus
depth estimation.
6.2 Motion Parallax Depth Perception on a Koala
Leader Chuck Higgins
Participants: Ning Qian (vision chip board)
Meihua Tai (vision chip board)
Ania Mitros (mechanical apparatus)
Peter Asaro (Atmel and Koala code)
Karl Pauwels (Atmel code)
Matthias Oster (general hardware debugging and Atmel code)
Richard Reeve was also absolutely essential to this project
In this project, we decided to mount a vision chip on a Koala robot to do obstacle avoidance (or its
inverse, target tracking). We computed relative depth from motion parallax. A visual motion chip (provided
Figure 6.3: Motion parallax vision system mounted on a Koala robot.
by Chuck Higgins) was mounted on a servo-controlled four-bar mechanism which allowed it to be scanned
back and forth at variable speed.
An Atmel microcontroller board (designed by Richard Reeve) was attached to the Koala and used to
control both the servo position and the scanout of data from the vision chip. Software on the Atmel micro-
controller communicated via an ad-hoc digital bus with separate software on the Koala, issuing commands to
turn and move.
Given the available time, we accomplished quite impressive performance. The final robot was able to
follow an object placed in front of it quite precisely. There is anecdotal evidence that it may also have been
able to do this when multiple objects were in its depth perception range.
The primary limitation was noise on the analog chip injected from the clock of the Atmel microcontroller.
This limited the useful depth perception range of the robot to about three feet. We believe that this noise issue could be overcome, but there was not enough time during the workshop.
A secondary issue was the sophistication of the software: in particular, when no target is in the depth perception range, the Koala will still initiate a turn to a position determined entirely by noise. This, too, is an enhancement that could obviously be incorporated with a little time.
This project will be followed up in the Higgins laboratory, and may be brought to a future Telluride
workshop as a more advanced project.
6.3 A Serial to Parallel AER Converter
Leader Matthias Oster
Participants: Teresa Serrano-Gotarredona, Bernabe Linares-Barranco
Figure 6.4: Example AER system with bandwidth estimation. The chain retina → orientation filters → protocol transformation/remapper (word serial to full parallel, 1-n rerouting) involves roughly 1.5k–8k neurons and 4k–32k addresses, with spike rates of about 240 kS/s baseline up to about 1 MS/s when active.
The address-event representation (AER) protocol makes it easy to connect different aVLSI
chips within one multi-chip system. Spikes from analog neurons are identified by the address of the neuron
and transmitted as digital events to form large reconfigurable networks. However, to preserve the timing
properties of these spike trains, the demands on the bandwidth of the communication system are high.
Several protocols have emerged to meet these requirements. The main current standards are the word-serial
protocol, used by the groups of Kwabena Boahen and Bert Shi, and a full-parallel representation, mainly
used by the INI and other groups. To facilitate the exchange of aVLSI chips within the community and
enable the building of complex computational systems, we investigated building a converter to translate
between the two standards.
Bandwidth requirements
Looking at the bandwidth of currently available AER aVLSI vision chips, one realises that aVLSI
technology has moved beyond the point where interfacing with simple digital processing is possible; current
state-of-the-art FPGA logic is required. Figure 6.4 shows an example system and lists the required bandwidth at each
building block. For a word-serial protocol running at 1 MSpikes/s, a handshaking frequency of 3 MHz is
required, leaving about 300 ns for each cycle. This timing can only be met by modern FPGA designs or
complicated discrete logic.
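The arithmetic behind these figures can be spelled out in a few lines (Python; the three-words-per-spike assumption corresponds to the chip id, row and column words of the word-serial protocol described in the next section):

# Rough bandwidth estimate for a word-serial AER link (illustrative numbers).
spike_rate = 1e6          # spikes per second (the ~1 MS/s "active" estimate)
words_per_spike = 3       # chip id, row address and column address sent serially
handshake_rate = spike_rate * words_per_spike        # handshakes per second
cycle_time_ns = 1e9 / handshake_rate                 # time available per handshake cycle

print(f"handshake frequency: {handshake_rate / 1e6:.1f} MHz")   # ~3 MHz
print(f"time per cycle:      {cycle_time_ns:.0f} ns")           # ~333 ns, i.e. about 300 ns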
Communication Protocol
Fig. 6.5 shows one handshaking cycle for the converter. Signals prefixed with i belong to the word-serial protocol,
those prefixed with o to the full-parallel representation. Arrows indicate the transitions the converter has to perform, dashed lines
the time points at which the data is valid. On the input side, the chip id (c), the row address (y) and the column
address (x) are transmitted serially. This information is combined into one address word on the output side (c+y+x).
Each data transfer is acknowledged to the sender.
We translated the handshaking into a state-transition description, based on a formulation developed at Cal-
tech, and transcribed it into production rules in VHDL. The project focused largely on discussions about
the different options for protocol transformations, remapping strategies, and possible hardware implementations.
We learned how to map all these concepts into VHDL statements and how to implement them using
asynchronous circuit techniques, especially those based on dual-rail signal representations. We had a strong
Figure 6.5: Converter Handshaking Cycle, see text for description.
interaction with the CNS group (Configurable Neural Systems), who tutored us about programming FPGAs
using VHDL and exploiting asynchronous concepts with such tools.
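As a purely behavioural illustration of the conversion (not the VHDL production rules themselves), the following Python sketch collects the serially transmitted chip id, row and column words and emits one full-parallel address word per spike; the field widths are illustrative assumptions.

def serial_to_parallel(words, y_bits=5, x_bits=5):
    """Collapse a word-serial AER stream into full-parallel address events.

    words   : iterable of (tag, value) pairs, where tag is 'c', 'y' or 'x' and the
              values arrive in the order chip id, row, column (as in Fig. 6.5).
    Returns : list of combined address words c+y+x, one per spike.
    """
    events, pending = [], {}
    for tag, value in words:
        pending[tag] = value
        if tag == 'x':                       # column word completes one spike
            address = (pending['c'] << (y_bits + x_bits)) | \
                      (pending['y'] << x_bits) | pending['x']
            events.append(address)
            pending.clear()                  # ready for the next serial word sequence
    return events

# Two spikes from chip 1: neuron (row 3, col 7) and neuron (row 0, col 2).
stream = [('c', 1), ('y', 3), ('x', 7), ('c', 1), ('y', 0), ('x', 2)]
print(serial_to_parallel(stream))            # [1127, 1026] with 5+5 bit fields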
Chapter 7
The Locomotion and Central Pattern
Generator Project Group

Leaders Avis Cohen, Ralph Etienne-Cummings, M. Anthony Lewis
In the past, neuromorphic engineering research has focused primarily on sensory information processing
and perception. The output pathways of the nervous system have largely been overlooked. More recently,
through the efforts of the group leaders, there has been a push to consider, model and mimic these output
pathways. Specifically, models of spinal-cord-mediated locomotion, integrated with sensory processing such as
vision, have been hosted in physical systems. The three leaders of the locomotion work group conduct
significant research in this area. In particular, Prof. Cohen studies the locomotion neural circuits in the
spinal cords of lampreys, Prof. Etienne-Cummings models these circuits with silicon chips while Dr. Lewis
uses both software and silicon models of these circuits to control robots. In this workgroup, we offered
various projects that cover these research interests of the group leaders and combine them with those of
the participants to conduct experiments that are not currently being pursued anywhere else in the country,
and possibly the world. We are comfortable making such a bold statement because we have gathered the
leading researchers in these areas, who have brought their equipment and are now able to integrate systems
from various labs into joint projects. The resulting systems cannot be replicated outside the Telluride
Neuromorphic Workshop because the individual components are never otherwise in one place at one time. The potential
for system integration and the resulting research represent the unique strength of our Workshop.
We organized three main projects in this workgroup (a fourth project on Graziano cells was ”donated” to
the Multi-modal workgroup). The first project, titled ”Neural Control of Biped Locomotion”, uses a chip
containing silicon models of lamprey central pattern generation (CPG) circuits to control a robot with realistic
muscle organizations. The CPG chip was developed by Prof. Etienne-Cummings’ group at JHU, while
the robot belonged to Shane Migliore from GaTech. Ryan Kier, from U. Utah, developed the interface
circuitry, which was based on his existing work on PIC-controlled servo motors. The interface circuits
allow the spiking outputs of the CPG chip to control the ”muscles” that actuate the hips of the robot. Using
sensory feedback from the robot, we developed an adaptive circuit to produce a smooth walking gait, while
responding to and correcting for perturbations. This project is interesting because it is the first time, to our knowledge,
that a fully self-contained silicon CPG model is used to control a robot with agonist/antagonist muscle
pairs. The second project is titled ”Analysis of a Quadruped Robot’s Gait Behavior”, and it investigated the
control of a robotic cat using software CPGs. The TomCat robot is a simple system with local digital motion
controllers. The gait patterns are downloaded onto the robot, which executes the desired gaits. Experiments were conducted
to quantify the gait patterns of the quadruped robot in terms of its maximum speed, and to investigate how
altering parameters such as hip swing and roll affect the type of gait produced. The robot
was developed by Dr. Lewis at Iguana Robotics, Inc. Participants from U. Wisconsin (Elizabeth Felton),
ETH Zurich (Chiara Bartolozzi), UIUC (Peter Asaro) and U. Leuven (Karl Pauwels), Belgium, developed
various control patterns for the robot. This project was interesting because it showed that quadrupeds can be
easily controlled with simple gait patterns. Furthermore, it showed that the TomCat’s dynamics allows it to
move very fast, despite its relatively weak motors. The third project, titled ”Posture and Balance”, addresses
one of the toughest problems in legged locomotion. Given perfect control signals for the control of the
muscles of the legs, how does one make a freestanding robot that can use these control signals? To study this
problem, two robots were used, the Marilyn robot from Iguana Robotics, Inc., and the EyeBot, developed
by Paschalis Veskos, from Imperial College, London. In addition, participants from U. Edinburgh (Richard
Reeve) and U. Tokyo (Katsuyoshi Tsujita) also worked on the project. This project was mostly algorithmic
development, analysis, sensor (accelerometers and gyros) integration and software control. The results,
however, do provide some clues as to how biologically inspired posture and balance could be realized.
This project is interesting because freestanding bipedal locomotion still remains the ”holy grail” of legged
robotics. The role of the cerebellum and other descending control signals to implement postural control in
biological organisms must still be studied and applied to bipedal robotics.
7.1
Neural Control of Biped Locomotion
Participants Shane Migliore, Ralph Etienne-Cummings
The purpose of this project is to create a robot capable of producing stable walking/running movements
autonomously using a neural emulator for control and incorporating proprioceptive feedback.
Biological Background
The fundamental unit of the nervous system is the neuron, a vastly complex type of cell that can produce
voltage signals as a means of communicating with other neurons or muscles. Voltage changes are a result
of ions moving through the neuron’s cell membrane and changing the relative charge contained within the
membrane. If ion movement is sufficient to increase the voltage to a threshold level, a characteristic pattern
of ions move into and out of the cell to produce a voltage spike called an action potential (AP). The AP
causes a chemical signal, a neurotransmitter, to be released from the cell, altering the ion flow of nearby
neurons. This signal can be either excitatory (increasing the probability of nearby neurons producing their
own APs) or inhibitory (decreasing the probability of nearby neurons producing their own APs).
Central Pattern Generators (CPGs) are networks of neurons capable of producing rhythmic patterns in
the absence of sensory feedback. These networks serve as the generator for movements such as walking,
swimming, and breathing. The simplest CPG to create is the half-center oscillator, a pair of neurons with
reciprocal inhibition. The natural firing pattern for this CPG is for one neuron to fire a burst of many action
potentials (and thus prevent the other neuron from producing any) and then enter a period without activity
while the other neuron fires action potentials. This behavior produces a two-phase oscillator that can control
alternations seen in joint flexion/extension. The outputs of the CPGs used in motor control are carried by
alpha motoneurons to muscles, which produce joint movements. Within the muscles and the tendons to
which they attach are stretch receptors and Golgi tendon organs that provide muscle length and muscle force
feedback, respectively. These sources of proprioceptive feedback modulate the controlling CPG to improve
stability as terrain or other environmental variables change. The effect of this feedback varies based on a
number of factors including phase of the CPG and control signals from other portions of the nervous system.
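A minimal software sketch of such a half-center oscillator is given below (Python/NumPy; the rate-based neuron model, the added spike-frequency adaptation and all constants are illustrative assumptions, not the circuits on the CPG chip). Reciprocal inhibition alone tends to lock into a winner-take-all state, so the sketch includes a slow adaptation variable that lets the active unit fatigue and release its partner, producing the alternating two-phase pattern described above.

import numpy as np

def half_center(T=2000, dt=1.0):
    """Two leaky-integrator 'neurons' with reciprocal inhibition and adaptation.

    Returns the two output traces; the bursts of the two units alternate,
    producing the two-phase pattern that could drive flexion/extension.
    """
    v = np.array([0.6, 0.4])        # membrane states (unit 0 starts slightly ahead)
    a = np.zeros(2)                 # slow adaptation (fatigue) variables
    drive = 1.0                     # tonic excitatory drive to both units
    w_inh, w_adapt = 2.0, 2.0       # inhibition and adaptation strengths
    tau_v, tau_a = 20.0, 200.0      # fast membrane and slow adaptation time constants
    trace = np.zeros((T, 2))
    for t in range(T):
        rate = np.clip(v, 0.0, None)             # output "firing rate" of each unit
        inhib = w_inh * rate[::-1]               # each unit inhibits the other
        v += dt / tau_v * (-v + drive - inhib - w_adapt * a)
        a += dt / tau_a * (-a + rate)            # the active unit slowly fatigues
        trace[t] = rate
    return trace

out = half_center()
print("unit 0 active fraction:", (out[:, 0] > out[:, 1]).mean())   # ~0.5: alternation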
Engineering Background
The Robot The robot used in this project is a biped with six joints (hip, knee, and ankle for each leg), each
controlled by two servos with series elastic actuation. The use of antagonistic servos allows us to set both
the joint angle and stiffness (by introducing non-zero co-contraction). Sensory feedback was provided by
resistive potentiometers located on each joint, which output a voltage proportional to the joint angle.
The Neurons The half-center oscillator used was created using a silicon aVLSI chip that contains ten
integrate-and-fire neurons with binary output voltages. Each neuron can receive synaptic input from any
other neuron and up to four analog and four digital external feedback signals. The pertinent parameters that
can be adjusted are the threshold voltage, discharge rate, refractory period, and pulse width. The schematic
for a single neuron is shown in Fig. 7.1.
The Interface
The spiking outputs of the neurons on the CPG chip were used to command the angular position of the
left and right hip joints on the robot. To accomplish this, a custom-built PIC microcontroller board was
Figure 7.1: Schematic diagram of the neuron circuit.
introduced between the outputs of the neurons and the servo motors on the robot. The spiking neuron
outputs from the CPG chip were low-pass filtered and converted into a continuous analog signal. The PIC
translated these analog signals into two pulse-width modulated (PWM) signals which were used to drive the
antagonistic servo motor pair at each hip joint. Fig. Xa shows how the servos drive each joint. The PIC
algorithm shown in Fig. Xb prevents co-contraction of the antagonistic pair. This was necessary because
the servo motors are position actuators and co-activation of the pair would generate large, unchecked forces
in the limbs. It can be seen from Fig. Xb that if both neurons command motion, the joint moves to a
position proportional to the difference of the two neuronal commands. Two parameters correspond to
the maximum angular deflections during flexion and extension, respectively, and the PIC remaps the maximum
output voltages of the two neurons, V1MAX and V2MAX, onto these two limits.
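A sketch of this mapping, written in Python rather than in the PIC firmware and with all constants and names as illustrative assumptions, is given below: the two filtered neuron voltages are remapped to the flexion and extension limits and only their difference is sent to the joint, so the antagonistic pair is never co-activated.

def hip_command(v1, v2, v1_max=5.0, v2_max=5.0, flex_max=30.0, ext_max=30.0):
    """Map two low-pass-filtered neuron voltages onto a single hip-joint angle.

    v1 drives the flexor, v2 the extensor. Only the difference is used, so the
    antagonistic servo pair is never commanded to co-contract.
    Returns the commanded joint angle in degrees (positive = flexion).
    """
    flex = (v1 / v1_max) * flex_max          # remap V1MAX onto the maximum flexion angle
    ext = (v2 / v2_max) * ext_max            # remap V2MAX onto the maximum extension angle
    return flex - ext                        # sign selects the flexor or extensor servo

def lowpass(spikes, alpha=0.05):
    """Simple first-order low-pass filter turning a spike train into a smooth voltage."""
    v, out = 0.0, []
    for s in spikes:
        v += alpha * (s - v)
        out.append(v)
    return out

# Example: neuron 1 bursting, neuron 2 silent -> the joint is driven towards flexion.
v1_trace = lowpass([5.0, 5.0, 5.0, 0.0, 5.0, 5.0, 0.0, 5.0] * 10)   # bursting neuron
v2_trace = lowpass([0.0] * 80)                                       # silent neuron
print(round(hip_command(v1_trace[-1], v2_trace[-1]), 1))             # positive: flexion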
The Experiment
The basis of this project was to create a single half-center oscillator from the silicon neuron models to
provide control signals to the robot’s hips. Ideally, we would have created the oscillator from two neurons,
as found in biology. However, the limitations of this type of neuron model required us to use a third neuron, a
pacemaker cell, to create the burst envelope needed by the two output neurons. A schematic of this connectivity
is shown in the following figure. Note that the pacemaker cell has an excitatory connection, or synapse, with
one neuron and an inhibitory synapse with the other. The result is that when the pacemaker cell produces
a high voltage output, the first cell is excited, causing it to produce a burst of action potentials, while the
other cell is silenced. Likewise, when the pacemaker’s output voltage is low, the first cell is silenced and the
second cell produces a burst of action potentials. The result is that the two output cells produce alternating
bursts of action potentials 180 degrees out of phase with each other.
To simplify the dynamics of the robot’s leg motion, we held the robot’s ankles rigid and removed any
stiffness from its knees. The incorporation of passive dynamics in the knees does not pose a significant
drawback because biped locomotion is not heavily dependent on knee actuation. We actuated the hips in an
alternating fashion: when one neuron fired, the left hip flexor and the right hip extensor were actuated, and
vice versa. To set the appropriate agonist/antagonist servo output, we recorded the two continuous-time
neuron output voltages using our custom-designed circuit board. The board also measured the feedback
signals from the robot's hip potentiometers and converted the voltages to levels appropriate for feedback to
the silicon neuron chip.
Figure (unnumbered): (a) the antagonistic flexor/extensor servo pairs driving the left and right hip joints; (b) the PIC algorithm that maps the two neuron output voltages, V1 and V2, onto a single servo command while preventing co-contraction.
When operated in a feed-forward configuration (no feedback), the system was able to produce the desired
alternating leg movements, but the movements contained abrupt velocity reversals, which caused the lower
legs to swing wildly. A plot of the neuron output and the corresponding hip angles is shown in the top left
plot of the following figure. The feedback added to the system was a modified form of the Ia reflex seen in
animals. Normally, this reflex provides positive feedback to muscles when they are lengthened. The reflex
we implemented not only provided positive feedback to muscles when they were lengthened (determined by
hip angle), but also provided negative feedback when the muscles were shortened. Therefore, when a hip
reached a maximally flexed angle, the neuron causing the flexion was inhibited and vice versa. This feedback
caused each burst of neuron spikes to slow in frequency as the burst progressed, causing a gradual slowing
of the leg and preventing abrupt velocity changes (bottom left plot in the following figure). The bottom right
plot demonstrates the neuron outputs when the legs were prevented from moving in the direction commanded
by one of the neurons (i.e. we prevented flexion of one leg and extension of the other). The result is that the
affected neuron does not receive negative feedback and therefore does not attenuate its firing frequency as
the burst continues. Note that the other neuron was not affected by this perturbation because the feedback
implemented is strictly ipsilateral.
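The feedback rule can be summarised in a few lines (Python sketch; the gain, neutral angle and sign conventions are our own illustrative choices, not the values used on the interface board): each neuron receives positive current while "its" muscle is stretched and negative current as the hip approaches the commanded extreme, which is what slows each burst towards its end.

def ia_like_feedback(hip_angle, neutral=0.0, gain=0.2):
    """Modified Ia-type proprioceptive feedback for one leg (ipsilateral only).

    hip_angle : current hip angle in degrees, positive = flexion.
    Returns (current_to_flexor_neuron, current_to_extensor_neuron).

    The flexor muscle is stretched when the hip is extended (angle below neutral),
    so the flexor neuron gets positive feedback then, and negative feedback once
    the hip is strongly flexed; the extensor neuron gets the mirror image.
    """
    flexor_stretch = neutral - hip_angle       # >0 when hip extended, <0 when flexed
    extensor_stretch = hip_angle - neutral
    return gain * flexor_stretch, gain * extensor_stretch

# As the hip flexes towards its limit, the flexor neuron is progressively inhibited.
for angle in (-20.0, 0.0, 20.0):
    i_flex, i_ext = ia_like_feedback(angle)
    print(f"hip {angle:+.0f} deg -> flexor {i_flex:+.1f}, extensor {i_ext:+.1f}")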
Conclusions
We were successful in our attempt to create walking/running motion on a biped using antagonistic actuation
by controlling it with a half-center oscillator emulated in silicon. As expected, feedback provided adaptive
control of the position of the legs and improved the smoothness of the gait.
Figure 7.2: The quadruped robot, ”Tom Cat”
7.2
Analysis of a Quadruped Robot’s Gait Behavior
Leader Tony Lewis
Participants: Peter Asaro, Chiara Bartolozzi, Elizabeth Felton, Karl Pauwels
Introduction
The aim of this project was to quantify the gait patterns of the quadruped robot in terms of its maximum speed
and to investigate how altering parameters such as hip swing and roll can affect the type of gait produced.
The information gained can be used in combination with visual information to guide robotic locomotion.
The Robot
The quadruped robot (aka Tom Cat) from Iguana Robotics (Fig. 7.2) uses limit-cycle-based motion control.
In order to keep a stable oscillation it switches between the swinging and supporting phases. A shorter cyclic
period contributes to a smoother, rhythmic motion. The hip and knee joints are active while the ankle is
passive.
Robot Gait
We systematically studied the behavior of the robot by altering the following parameters:
• knee sweep amplitude
• hip sweep amplitude
• twist amplitude
• lunge
• phase difference between the legs
• integration time dt
Each paw of the Tom Cat had a pressure sensor on it and this sensor data was collected during loco-
motion. The information was used to determine when each paw was on the ground during the movement
sequence. See Table 1 below and Figs. 7.3-7.6.
Trial  dt    Knee Swing  Hip Swing  Twist   Lunge  B-F Phase  Gait
1      0.33  12          17.7       -7.6    0      1.32       Forward Walk
2      0.50  10.2        0.6        -10.4   0      0          Fast Tapping in Place
3      0.35  0           15         4.4     -8.4   0          Backwards Sliding
4      0.11  2.4         28.5       -10     0      0          Slow Forward Walk, Long Stride
The footfall pattern for the forward walk (Fig. 7.3) is a sequence with the front left limb in phase with
the rear right limb and the front right in phase with the rear left. This is a typical walking pattern that is
observed in four legged animals such as cats and dogs.
The footfall pattern for the fast tapping in place (flamenco dancing) (Fig. 7.4) uses a similar sequence
to the forward walk, but the time the feet are on the ground is less and there is some ground contact time
overlap with the contralateral limb (for example, front left with front right). This is most likely due to the
higher speed and some amount of friction from the floor. The fast tapping pattern was produced by setting
the knee and hip swing amplitudes to a high value with very low twist amplitude.
The pattern for the backwards walk with sliding (moonwalk) (Fig. 7.5) shows a similar phase relationship
and overlap as the fast tapping in place. The very low amplitude for the knee swing and the large negative
lunge value contribute to the backwards sliding motion.
The slow walk gait (Fig. 7.6) is a result of the high amplitude hip swing which gives the robot’s limbs a
longer reach when moving forward.
Each of the parameters mentioned above was changed to quantify the different gait patterns and effects
upon the robot’s locomotion pattern.
The knee swing amplitude varies how much contribution the knee joint has in the gait pattern. It also
determines how high the entire leg will raise off the ground, which can impact the amount of friction during
locomotion. There is a knee swing threshold above which the robot will walk forward and below which the
robot will walk backwards.
The hip swing amplitude varies how much contribution the hip joint has in the gait pattern. Higher values
lead to a longer and potentially faster stride. With a value of zero the robot will tap in place because it does
not have any limb momentum with which to push it forward or backward.
The twist amplitude is the roll of the front of the body during walking that contributes to stabilization
and speed. If the twist is in phase with the supporting front limb during the walk, the robot will be less stable
due to more weight being on that side of the body. However, there will also be less friction during walking
because the limb will rise faster when the body rolls to the other side. If the twist is out of phase with the
supporting front limb, the robot will have increased stability since more weight will be distributed to the
swinging side of the body. There will, however, be more friction during locomotion due to less force being
placed on the supporting limb.
Figure 7.3: Footfall pattern as a function of time while the quadruped walks forward.
Figure 7.4: Footfall pattern as a function of time while the quadruped quickly taps its paws while standing in one
place (Flamenco Dancing).
Figure 7.5: Footfall pattern as a function of time while the quadruped walks/slides backwards (Moonwalk).
Figure 7.6: Footfall pattern as a function of time while the quadruped walks slowly forward.
The integration time contributes to how quickly the robot moves during locomotion. Very low and very
high values tend to lead to instability.
Robot Speed
In addition to observing the gait patterns, the time it took the robot to move eight feet during a forward trot
was recorded. A good forward walking gait was found and the robot twist and time integration parameters
were varied while keeping the knee and hip swing constant (see the table below). The top speed obtained was
1.714 robot body lengths per second (one body length is 10 inches). Changing the twist amplitude from a
large negative value of -10.4 to a smaller value of -0.8 created significant increases in the speed.
Trial  dt    Knee Swing  Hip Swing  Twist   Body Lengths / sec
1      0.40  16.8        19.4       -10.4   1.343
2      0.40  16.8        19.4       -5.2    1.171
3      0.40  16.8        19.4       -5.2    1.548
4      0.40  16.8        19.4       -5.0    1.627
5      0.40  16.8        19.4       -2.8    1.655
6      0.40  16.8        19.4       -2.8    1.655
7      0.40  16.8        19.4       -0.8    1.655
8      0.40  16.8        19.4       -0.8    1.714
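The conversion from a timed eight-foot run to the body-length figures in the table is simple; a small check in Python, with a hypothetical run time as input:

# Convert a timed 8-foot run into robot body lengths per second (1 body length = 10 in).
distance_in = 8 * 12                     # eight feet in inches
body_length_in = 10.0

def body_lengths_per_second(run_time_s):
    return (distance_in / body_length_in) / run_time_s

# A hypothetical run time of 5.6 s gives roughly the top speed reported above.
print(round(body_lengths_per_second(5.6), 3))   # ~1.714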
7.3
Posture and Balance
Leader Richard Reeve
Participants: Paschalis Veskos, Katsuyoshi Tsujita, Tony Lewis, Richard Reeve
Introduction
Locomotion is one of the basic functions of a mobile robot. Using legs is one of the strategies for accom-
plishing locomotion. It allows the robot to move on rough terrain. Consequently a considerable amount of
research has been done on motion control of legged robots. This study deals with the posture control of a
variety of bipedal robots.
In the future, a walking robot will be required which can carry out tasks in the real world, where the
geometric and kinematic conditions of the environment are not specially structured. Such a walking robot is
required to adapt in real time to a changing environment.
To address this problem, a considerable amount of research has drawn on neurobiology, by
using CPG (Central Pattern Generator) models. This research has established steady and stable locomotion
for bipedal and other kinds of robots through the mutual entrainment of nonlinear oscillators, and good results
have been obtained in numerical simulations and/or hardware experiments.
However, there are few studies of autonomously adaptive locomotion on irregular terrain. Recently, the
importance of reflex/response controllers has been emphasised; these use a model of muscle stiffness
that is generated by the stretch reflex and varies according to the support or swing stage.
This project studied posture control of bipedal robots, and was divided into three parts. The first was a
simulation study using nonlinear oscillators assigned to each leg; the second was the stabilisation of a robot
with muscle-like actuators and multiple sensory modalities, and the third was balance control of a slower
servo-controlled robot.
Simulation
The nominal trajectory of the leg is determined as a function of the phase of its oscillator. The reflex controller
uses two types of feedback signals: one is from the contact sensors at the end of the leg; the other is from
the angular velocity sensor on the torso. The information from the contact sensors is used to control the joint
torque at the ankle to stabilize the motion of the legs. On the other hand, the information from the angular
velocity sensor is used to control the hip joint to stabilize the motion of the torso.
The performance of the proposed controller is verified by numerical simulations.
Controller
The architecture of the system is shown in Fig. 7.7. The controller is composed of a nonlinear oscillator net-
work and a posture controller.
The nonlinear oscillator network and the body dynamics cause mutual entrainment and generate
periodic motion of the legs.
The posture controller stabilizes the posture during locomotion or standing. This controller uses
two kinds of sensory information: one from the contact sensors at the end of each leg, the other from
the angular velocity sensor on the torso. The information from the contact sensors is used to control the joint
torque at the ankle, using compliance control, to stabilize the motion of the legs. On the other hand, the
information from the angular velocity sensor is used to control the hip joint to stabilize the motion of the torso;
this part is constructed as a PI (Proportional-Integral) controller.
First, I define the following variables:
θ_a : Relative joint angle at the ankle
ω_0 : Measured angular velocity of the torso with respect to inertial space
λ_i : Measured reaction force (i = 1, ..., m, where m is the number of sensors)
r_i : Location of the force sensor on the back of the foot
τ_h^FF : Input torque at the hip joint generated by the oscillator network (feed-forward term)
τ_h : Input torque at the hip joint
τ_a^FF : Input torque at the ankle joint generated by the oscillator network (feed-forward term)
τ_a : Input torque at the ankle joint
The posture controller is designed as follows:
τ_h = τ_h^FF + K_P ω_0 + K_I ∫ ω_0 dt        (7.1)
δ_a = Σ_i λ_i × r_i                          (7.2)
τ_a = τ_a^FF + K_a δ_a                       (7.3)
where K_P, K_I and K_a are feedback gains and δ_a is the acting center of the reaction force.
Using this controller and tuning the feedback gains appropriately, the hip joint is controlled to stabilize
the motion of the torso, and the ankle joint is controlled, through compliance control, so that the center of the
reaction force converges within the supporting polygon.
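A direct transcription of equations (7.1)-(7.3) into code could look as follows (Python; the gains and sensor values are placeholders to be tuned, and the feed-forward torques would come from the oscillator network):

class PostureController:
    """PI hip control on torso angular velocity plus ankle compliance control,
    following equations (7.1)-(7.3). Gains are illustrative placeholders."""

    def __init__(self, kp=1.0, ki=0.5, ka=0.8, dt=0.01):
        self.kp, self.ki, self.ka, self.dt = kp, ki, ka, dt
        self.omega_integral = 0.0

    def step(self, tau_h_ff, tau_a_ff, omega0, forces, positions):
        """tau_h_ff, tau_a_ff : feed-forward torques from the oscillator network
        omega0               : measured torso angular velocity
        forces, positions    : reaction forces lambda_i and sensor locations r_i
        Returns the hip and ankle torques (tau_h, tau_a)."""
        self.omega_integral += omega0 * self.dt
        tau_h = tau_h_ff + self.kp * omega0 + self.ki * self.omega_integral   # (7.1)
        delta_a = sum(l * r for l, r in zip(forces, positions))               # (7.2)
        tau_a = tau_a_ff + self.ka * delta_a                                  # (7.3)
        return tau_h, tau_a

# One control step with hypothetical sensor readings.
ctrl = PostureController()
print(ctrl.step(tau_h_ff=0.0, tau_a_ff=0.0, omega0=0.1,
                forces=[2.0, 1.0], positions=[-0.05, 0.04]))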
Results
Using the proposed controller, numerical simulations were run to verify the performance. Fig. 7.8 shows the
result. In this case, the robot is commanded to keep standing on one leg (the left leg). When
the right leg lifts up, the controller begins to stabilize the posture and establishes balance without vibration
or divergence.
Marilyn
A similar controller to that described above was intended to be implemented on the bipedal robot Mari-
lyn, which has muscle-like actuators that allow quasi-force control and control of stiffness through co-
contraction. Pressure sensors on the feet provided an estimate of the centre of pressure of the robot, and joint
angle sensors were also available. The velocity sensors mentioned above were not available, however, so we had
to rely on acceleration sensors, which proved too noisy to integrate accurately for velocity estimates, but they
could be used to estimate absolute angle from changes in the orientation of gravity. Calibration problems on
the robot platform also made it very difficult to control the robot, though this slowly improved through the
workshop. In the end it was possible to stabilise the robot against being turned over when its legs were being
held, but it was not possible to incorporate this with stance except by making the hip joints very stiff. Delays
in the sensory-motor control loop also made the robot very susceptible to oscillation.
Figure 7.7: Architecture of the controller
Figure 7.8: Balancing posture on a flat ground with the right leg up
A partial solution was found by stabilising the head, which could be done relatively quickly due to its
light weight, and then stabilising the body by comparing its angle to the head. This simplified the control
process, but not by enough to make the robot truly stable.
iBot
The ”Eyebot” platform we used to investigate 3D balance features a medium processing power on-board
CPU with integrated peripherals. Its actuators are position-controlled servomotors, of the type used for radio-
controlled models. The sensors we used for our experiments were a 2D accelerometer and a 1D rotational
velocity sensor (gyroscope). Both were solid-state MEMS devices manufactured by Analog Devices (ADI).
The strategy we followed was to start with a controller based on a simple heuristic and once we got it
working, build on that. The accelerometers were mounted on the torso, measuring accelerations along the
Anterior-Posterior and Medial-Lateral axes. Our first controller tried to keep acceleration at zero by moving
the torso in the opposite direction. To reduce complexity, only one leg was controlled; however, both hip
servos were actuated, to cater for Front-Back and Left-Right accelerations. The unactuated leg was removed
from the robot so as not to restrain motion. All servos not directly manipulated by the controller were held
stiff. As long as the foot is firmly held on the ground, the robot could keep the torso upright. Since this was its
only goal, it was perfectly happy to maintain the leg in a Z-like posture, consuming large amounts of current
and straining the motors. It was also slow in correcting errors; mechanical slop reduced the effect of small
corrections and hence a steady-state error would accumulate. When this became too large, the controller
would produce a large correcting action, causing overshoot. Lastly, servos would jitter excessively. In order
to improve on this behaviour, it was decided to also actuate the ankle servos. Due to their larger distance from
the torso, the ankle servos cause a much greater torque for the same angular displacement when compared
to their hip counterparts. Thus it was necessary to scale down their response by 75% to keep the system
stable. This system had a somewhat faster response, as fewer corrective steps were needed for the same
perturbation, but was very susceptible to oscillations. Motion would become very violent, as it would try
to correct for accelerations it caused itself. This was such an acute problem that one of the legs broke at
the ankle due to excess stress. The reason for this behaviour was that the speed at which motor commands
were issued was roughly equal to the resonant frequency of the system. Since the control loop could not
be made faster, the only solution was to make it slower by introducing explicit delays. We found 5/100s
was the minimum delay we could use and keep the system stable. Of course, this was at the expense of
system response. To reduce servo jitter, another heuristic was introduced: the controller would not make any
corrections unless the measured accelerations increased beyond a fixed threshold. This proved remarkably
effective, as a threshold as low as 0.8-1% was enough to avoid oscillations. The next step was to use both
legs for balance control. Thus both were actuated; front-back motion was controlled as before, but left-right
was to be by lengthening and shortening the corresponding legs. The gyroscope was also mounted such that
it measured rotations around the medial-lateral axis. Unfortunately, at the time of writing, this controller had
not fully been debugged and was unstable.
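The final single-leg heuristic can be summarised as a short control loop (Python-style sketch; the servo and sensor access functions are self-contained stubs standing in for the EyeBot's real interfaces, and the constants are the ones quoted above):

import time

def read_accel():
    """Stub: would return (front-back, left-right) acceleration in g on the real robot."""
    return (0.004, -0.012)

def move_servo(name, step):
    """Stub: would command the named servo; here it just reports the correction."""
    print(f"move {name} by {step:+.2f} deg")

THRESHOLD = 0.01       # ~1% dead band: ignore accelerations below this (avoids servo jitter)
GAIN = 40.0            # hip degrees commanded per g of measured acceleration (illustrative)
ANKLE_SCALE = 0.25     # ankle response scaled down by 75% relative to the hip
LOOP_DELAY = 0.05      # 5/100 s: the slowest loop delay found to keep the system stable

def balance_step():
    """One iteration of the heuristic: push the torso against the measured acceleration."""
    ax, ay = read_accel()
    for axis, a in (("front-back", ax), ("left-right", ay)):
        if abs(a) < THRESHOLD:
            continue                        # inside the dead band: do nothing
        correction = -GAIN * a              # move opposite to the acceleration
        move_servo(f"hip {axis}", correction)
        move_servo(f"ankle {axis}", ANKLE_SCALE * correction)

for _ in range(3):                          # a few iterations of the control loop
    balance_step()
    time.sleep(LOOP_DELAY)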
Conclusions
The simulation was, perhaps unsurprisingly, the most successful of the three balancing experiments. It was
disappointing that the biologically motivated Marilyn robot was very difficult to stabilise, but there was no
evidence that this was a result of the mechanism; rather, much more work needs to be put into the
control loops, and particularly into fast reflexes. With iBot, we better anticipated the problems of poor actuators
and slow control loops, and so were better able to cope with them. As a result it succeeded in some limited
balancing behaviours.
Chapter 8
The Roving Robots Project Group
Project Leader Jorg Conradt
Richard Reeve
The roving robots workgroup is a very diverse collection of robot experiments carried out during the
Telluride workshop. Some participants join the group because they have never been exposed to robots before
and want to get a first grasp of what robots are and how they could be useful for their own
research. Other participants bring many years of expertise and already have very particular projects in mind.
To suit the workgroup to both extremes, we offered a variety of robots ranging from self-build kits (e.g.
Lego robots) to high-precision field-programmable robots (e.g. Koala and Khepera). We also suggested a
number of projects, but highly encouraged individual ideas and collaborations with other workgroups (e.g.
auditory, visual, online-learning, and multi-modality) to explore sensors in the real world rather than in
typical test environments.
The following individual project reports show the diversity of the field and report on the progress
that participants in the group achieved during the workshop.
8.1
Machine vision on a BeoBot
Leader: Laurent Itti
This project explored the applicability to robotics navigation of a computer architecture modeled af-
ter the main components of human vision. The group discussed three major components of human vision,
namely, volatile low-level visual pre-processing, rapid computation of scene gist/layout, and attentive vision
including localized object recognition. This project was mostly educational, and focused on a computational
analysis of these three aspects of primate vision. We started with a review of the properties of the compo-
nents, relying on experimental as well as modeling evidence to frame our study. We then studied how these
components could interact and yield a highly flexible vision system, where various navigation tasks could be
specified through different uses of those basic components (in contrast to having to develop new basic com-
ponents, e.g., a corridor model or a road model, for different tasks or environments). Finally, we studied the
implementation of such components on a mobile robot running a 4-CPU Linux system. In the short duration
of the project, we reached as a group a clearer view of the basic architectural components, constraints, and
Figure 8.1: Photo of the blimp supporting the controller, sensors and motors
interactions among components, for the development of a robotics visual system modeled after three major
processes of human vision.
8.2
Blimp
Participants: Barbara Webb
Matthias Oster
Jorg Conradt
Mark Tilden
This project looked at the control of a flying robot blimp. In the long term the aim is to be able to stabilise
flight against perturbations and to perform oriented flight based on sensory signals. In the short term we tried to
do some basic characterisation of the sensory capacity and motor response of the existing system as a basis
for future development.
The flying robot we used had been previously developed at INI by Jorg, utilising motion chips designed
by Alan Stanton. It can operate fully autonomously, carrying its own power supply, actuators, and on-
board control. A wireless interface connects the on-board microcontroller with a remote PC, which for
convenience runs control software in MatLab during software development. The on-
board microcontroller also reads analog signals from two aVLSI motion sensors attached on opposing sides
of the blimp, which each compute global optic flow in 2 dimensions. Additionally, the microcontroller
powers four motors with attached propellers at variable speeds, which allow the blimp to translate forwards
and backwards, rotate to the right and left, and rise or descend.
In testing this device, at a basic level it was found that the balloon provided sufficient lift to support
the controller board and motion sensors, plus a battery (see figure 8.1). The radio communications worked
reliably once we increased the antenna length. However there appeared to be motor interference in the
sensory signals. As a temporary solution we alternated motor power outputs and sensor recordings.
We modified a MatLab program written to remotely control the blimp and display the sensory output,
to collect data from the motion sensors during flight. Examples are given in figure 8.2. It quickly became
clear that slow drift of the blimp position was effectively undetectable. Although the response to rotation
was reliably in the correct direction, at lower speeds it barely exceeded threshold and only at relatively high
Figure 8.2: Response from the motion chips to rotation. The baseline response when not moving is shown as a
solid line, the output for left rotation as the dashed line and for right rotation as the dotted line. When rotating at
approximately 60 degrees/second, both sensors showed an appropriate directional response, but for one sensor (outer
lines) the amplitude was much larger than the other.
speeds provided clear information; it also differed substantially between the two chips. The response is also
- as would be expected - very dependent on the visual structure of the environment.
We also used some simple manoeuvres to characterise the motor response of the system. This differs
very substantially from a wheeled robot, due to the large inertia of the system, resulting in substantial delays
between motor commands and observable responses. However there was reasonable consistency in the
response. For example, if we apply rotational thrust to the blimp for differing time periods, and find by
experiment the duration of reverse thrust needed to stop it, the result looks predictable (figure 8.3). This
raises the interesting challenge of using predictive mechanisms in the control strategy, a topic that is currently
of high interest in computational neuroscience.
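One simple way to exploit this regularity would be to fit the measured pairs and use the fit to predict the braking command, as in the sketch below (Python/NumPy; the data points are hypothetical placeholders standing in for the measurements plotted in figure 8.3):

import numpy as np

# Hypothetical (thrust duration, reverse-thrust duration) pairs; in practice these
# would be the measured values plotted in figure 8.3.
thrust = np.array([1.0, 2.0, 4.0, 6.0, 8.0])      # seconds of rotational thrust
reverse = np.array([0.6, 1.0, 1.6, 2.2, 2.8])     # seconds needed to stop the rotation

# Fit a straight line and use it as a predictive model for the stopping command.
slope, intercept = np.polyfit(thrust, reverse, 1)

def predicted_brake(duration):
    """Predicted duration of reverse thrust needed after 'duration' seconds of thrust."""
    return slope * duration + intercept

print(round(predicted_brake(5.0), 2))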
We also considered the addition of a wind sensor. The idea was to adopt some version of the simple
switch mechanisms previously used by Mark Tilden. Mark built a sample sensor that uses light strips of
metal that close a contact switch when deflected by wind. The three strips are oriented in three different
planes to provide directional information. Using an oscilloscope, some differential response could be seen
but it was sufficiently noisy that we did not pursue the additional steps needed to interface it to the blimp.
Our future plan is to work with Chuck Higgins on interfacing chips better optimised for low velocities
to the blimp, developing algorithms to control the behaviour in response to visual signals, and to add wind
sensors to perform oriented behaviour.
8.3
Hero of Alexandria’s mobile robot
Leader Barbara Webb
Participants: Nici Schraudolph
Figure 8.3: Duration of reverse thrust needed to stop the blimp, plotted against the duration of the applied rotational thrust (both in seconds).
Peter Asaro
Richard Reeve
Shana Mabari
Tony Lewis
and others...
Hero of Alexandria, in 60 A.D., described in substantial detail how to construct a mobile cart that (us-
ing only weights and ropes) could be programmed to move along an arbitrary path. This group tried to
reconstruct his design using Lego in one three-hour session.
The device is described in Hero’s “Peri automatopoietikes” (translated in Murphy, S. (1995) Heron of
Alexandria’s On Automaton-Making, History of Technology 17:1-44). It consists of a cart supported on three
wheels, one a castor wheel and the other two drive wheels. The wheels are powered by a descending weight.
By winding a rope connected to this weight around the wheel axles the cart drives forward. Reversing the
rope winding direction alters the direction of the robot. Hero describes how to wind the robot in various
patterns, including slack that results in pauses, to produce an arbitrary robot path.
We constructed a Lego base approximately 14cm square, with a single castor wheel and two drive wheels
of 5cm diameter on a single axle. A tower approximately 40cm high was mounted on top of this. Originally
this was intended to contain a tube that slowly leaked sugar (Hero recommends using millet seed) from the
bottom to control the rate of descent of the weight. However it was found that this arrangement provided
very little torque, certainly not sufficient to overcome the inertia of the robot when its weight rested on the
wheels. A drawback here was also that the Lego axles are not rigid and thus add substantially more friction
than a better-engineered system would.
Instead we used the weight in free descent (the robot was ’powered’ by 8 AA batteries taped together).
A string was attached to the weight and ran over two pulleys then down to the main axle. We needed to
increase the axle size to increase torque, resulting in a total travel of 10 wheel rotations for the descent of the
weight over 30 cm. With this system we were able to successfully deploy the device on a smooth floor and
demonstrate an automatic switch from forward to reverse motion.
If time had allowed, it might have been possible to de-couple the two drive axles, use two strings from the
weights to control them separately and thus create more complex motor patterns. More advanced behaviours
might include a switch/lever to detect collisions and alter the movement appropriately, e.g. by stopping one
weight and releasing another that drives a turn. However, such advances would probably require a better
base construction (i.e. not Lego). In summary, we showed that the basic principles described by Hero were
sufficient to construct a simple mobile ’robot’ using only mechanical principles. Adhering more closely to
his precise specifications for construction would probably produce a more efficient device.
8.4
Cricket phonotaxis in silicon
Leader Richard Reeve
Participants: Giacomo Indiveri
Barbara Webb
Introduction
The intention of this project was to combine a hardware model of the cricket auditory system and
an aVLSI neural model of the early auditory processing in the cricket on a robot, in order to investigate
the complexity of implementing an entire sensorimotor model in hardware, as this would greatly aid the study
of these systems, allowing us to run experiments on more realistic (smaller) robots and in more realistic
environments (e.g. outdoors) where having a PC-based neural simulator becomes impractical.
The system consisted of four parts:
• A hardware model of cricket auditory system produces analogue signal proportional to pressure on
cricket tympana
• An aVLSI neuron chip modelling first four neurons in auditory processing of cricket (ON1 and AN1
neurons) and supporting PCB to interface to the auditory system
• A microcontroller to capture the spikes and pass them on as a control signal
A Khepera robot with the “cricket ears” mounted on it, to steer towards the cricket song
The chip contains an analogue VLSI network of low-power leaky integrate and fire neurons, intercon-
nected by inhibitory and excitatory synaptic circuits. The chip die occupies an area of 2 × 2mm2 and was
implemented using a standard 1.5µm CMOS technology. Each neuron circuit has an adjustable absolute
refractory period setting, a spiking threshold voltage and a spike-frequency adaptation mechanism. The
synapses exhibit realistic dynamics and short-term depression properties.
To interface the chip to the ”cricket ears”, we constructed a PCB (Printed Circuit Board) containing a
log-amplifier able to rescale and shape the ear’s output signals. The reshaped signals were then connected
to two on-chip p-type MOSFETs that operate in the subthreshold regime and inject currents into the two input
silicon neurons. The firing rate of the input neurons is therefore linearly proportional to the ears' outputs.
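At the behavioural level, the steering rule driving the Khepera can be summarised very simply (a Python sketch under our own assumptions about spike counting and wheel commands; on the real system the spike trains come from the AN1 neurons on the chip):

def steer_from_an1(spikes_left, spikes_right, forward=5, turn_gain=2):
    """Turn towards the side whose AN1 neuron fired more in the last time window.

    Because of the cross-inhibition between the auditory neurons, usually only one
    side responds strongly until the robot faces the speaker, so this simple rule
    produces phonotaxis towards the cricket song.
    Returns (left_wheel_speed, right_wheel_speed)."""
    turn = turn_gain * (1 if spikes_left > spikes_right else
                        -1 if spikes_right > spikes_left else 0)
    return forward - turn, forward + turn

print(steer_from_an1(spikes_left=8, spikes_right=1))   # song on the left: turn left
print(steer_from_an1(spikes_left=3, spikes_right=3))   # balanced: drive straight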
Figure 8.4: The ’Hero robot’
Figure 8.5: Signal, spiking response and robot tracking. Panels from top to bottom: right ear input, left ear input, ANR output spikes, ANL output spikes, and the angle to the sound source over time.
Results
The results were promising, with the robot able to easily locate the cricket song being played from a nearby
loudspeaker. An example track is shown in figure 8.5. Cross inhibition results in only one AN1 (output)
neuron responding to the cricket song until the robot is almost oriented directly towards the speaker. The
response of one AN1 neuron to an ipsilateral song is shown in more detail in figure 8.6.
Conclusions
This has been a very productive project, proving that the technology is sufficiently mature to put the whole
neural model onto the robot, allowing us to benefit from the small size and low power consumption of the
silicon neurons to do more experiments further from the desktop in outdoor environments and on small
robots that better mimic the behaviour of real crickets.
This small subsection of the model is the most mature part and could consequently be hardwired into the
silicon, but investigation is still ongoing into higher circuitry, and consequently reconfigurable technologies
such as Address Event Representation will need to be used to allow implementation of the whole circuitry.
More experimentation will also have to be done with the adaptive circuitry that is available to see what is most
useful for coping with mismatch problems which have not previously come up in simulation. Nonetheless,
it is certainly our intention to continue our collaboration after Telluride.
8.5
Navigation with Lego Robots
Leader Chiara Bartolozzi
Figure 8.6: aVLSI neurons responding to cricket song. Top panel: ear input; bottom panel: output spikes.
Participants: Chiara Bartolozzi
Introduction
The aim of this project was to use the aVLSI tracker chip implemented by Giacomo Indiveri for guiding
navigation of a Lego robot. The idea was to reproduce the path integration observed in ants: they search for food
walking around in the desert, and when they have to go back to their nest they integrate the path travelled and go
straight back instead of simply retracing the previous path. To perform path integration the ant needs an absolute
reference, and ants seem to use the polarization of desert sunlight.
My idea was to implement this behaviour on the Lego robot, using a visual cue as the reference. The robot
should follow a line to its end and then go straight back to the starting point.
This project seemed to be too ambitious for the simple Lego robot, therefore I thought about a simpler
task that uses path integration. The robot should follow a line and, when it finds an obstacle, should produce
a random path toward one side of the obstacle and then use path integration to find the line again.
Lego robot
The Lego robot has a micro-controller that can receive inputs from three different sensors and output
commands to three different motors; it can be programmed to process the sensory data with a given algorithm that
then produces the motor output. The program can be written in 'nqc' (Not Quite C) on a workstation and
then downloaded to the micro-controller through an infrared port.
At this point the robot is perfectly autonomous.
For this project I used two motor outputs, which can be controlled independently to drive the left and
the right wheels.
As sensors I used a 'bump' sensor, which detects collisions with obstacles, and a tracker chip, which gives an analog
value encoding the position of the most salient visual input on an array of photosensors.
Implementation of the project
The first goal of the project was to implement an algorithm for tracking a line. At first I tried to implement
a sophisticated algorithm: on the basis of the value given by the tracker chip, I computed the duration of the
movement that the robot had to perform in order to keep the line in the center of the array.
The algorithm was based on a PID computation, which takes into account the current value of the sensor, but
also the mean of the past values and a derivative of the last two.
It turned out that the robot is too simple and unreliable and that the tracker chip has too low a resolution (it
has only 64 pixels), so the algorithm was too complicated and did not perform well.
I then went back and simplified the code; I found that the best solution is a closed-loop
algorithm, where the robot continuously adjusts its own movement, using as its signal the low-pass-filtered output
of the tracker, which gives smoother control of the path.
So far the robot can smoothly follow a black line; when it loses the line it turns back and starts again.
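The closed-loop rule that finally worked can be sketched as follows (written here in Python for clarity, whereas the robot itself is programmed in nqc; the tracker scaling, gain and motor interface are illustrative assumptions):

def follow_line_step(tracker_value, state, gain=0.6, alpha=0.3, base_speed=40):
    """One step of the closed-loop line follower.

    tracker_value : position of the line on the 64-pixel tracker array (0..63)
    state         : low-pass-filtered tracker value carried between calls
    Returns (left_motor, right_motor, new_state); motor values are arbitrary units.
    """
    state += alpha * (tracker_value - state)        # low-pass filter the tracker output
    error = state - 31.5                            # deviation from the array centre
    steering = gain * error                         # proportional correction, no PID
    return base_speed - steering, base_speed + steering, state

# Simulated readings: the line drifts to one side, the correction grows smoothly.
state = 31.5
for reading in (32, 36, 40, 44, 40, 34):
    left, right, state = follow_line_step(reading, state)
    print(f"tracker {reading:2d} -> motors L={left:5.1f} R={right:5.1f}")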
Future work
Due to lack of time I couldn’t implement the whole algorithm, but I will continue the project, trying to
achieve more complex behaviours with the robot.
8.6
Snake- and Worm-Robot
Participants: Jorg Conradt
Kerstin Preuschoff
In the WormBot project we aim to demonstrate elegant and robust robotic motion based on simple bio-
logically plausible design principles in a high degree-of-freedom (DOF) system. We investigate motion
generated by multiple 1-DOF segments that are individually controlled by local Central Pattern Generators
(CPG), but achieve overall motion stability by short- and long-range coupling. A robot platform to evaluate
motion in such a system is not commercially available; therefore our research tasks in the course of this
project are two-fold:
Firstly the physical design (in hardware) of a robotic platform that consists of many individual segments.
Every segment provides one actuated DOF and allows interaction with all other segments on the robot. The
design needs to be simple, inexpensive, and flexible (e.g. it should allow easy reconfiguration and adjustable
mounting angles between consecutive segments).
Secondly the implementation (in software) of biologically plausible CPG algorithms which generate
elegant motion by interacting with other segments. As in biological systems, the CPGs shall be independent
of each other but receive input from local sensors and have adjustable short- and long-range connections to
Figure 8.7: A detailed view of the WormBot segment
other segments’ CPGs. Also as in biology, a simple trigger signal from a master segment (the worm’s head)
shall be sufficient to set the speed and direction of the robot's motion.
After initial experiments during the 2002 Telluride Workshop on Neuromorphic Engineering, we evalu-
ated hardware components for the mechanical design of the robot and built prototypes using simple gearbox
motors that Mark Tilden provided (see Fig. 8.7). The link between consecutive segments (also shown in Fig. 8.7)
allows fast adjustment of the connection angle, such that the robot can be reconfigured for planar motion (as
e.g. in the lamprey) or for motion in 3D (as e.g. in worms). Additionally, new segments can easily replace
broken or damaged segments if desired.
Each segment contains a small PCB with a re-programmable microcontroller, sensors, and a commu-
nication interface. The sensors on the prototype robot are three light-sensors in orthogonal directions, a
temperature sensor and sensors for the segment’s internal states (rotary position, applied motor torque, and
voltage of power supply battery). A two-wire communication interface allows fast and flexible information
exchange between all segments. In the current setup, segments communicate all sensor readings and internal
states to all other segments, such that individual coupling can be adjusted in software (see below).
The microcontroller on each segment runs a CPG in software, controls the actuator, reads sensors and
communicates with other segments. Currently, all microcontrollers run an abstract mathematical CPG-based
algorithm, but we are planning to implement detailed biologically plausible CPGs in software. Currently the
coupling strength between segments decreases monotonically with distance; this too will change to an
arbitrarily complex, re-configurable coupling function. Sensor readings (light as a proxy for pressure on
the robot’s skin, temperature, etc.) can have direct influence on the CPGs, such that simple behavior (e.g.
obstacle avoidance or light following) will be possible.
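A compact abstraction of this scheme, with each segment's CPG reduced to a phase oscillator and a coupling strength that decays with distance, is sketched below (Python/NumPy; the oscillator model and all constants are illustrative, not the algorithm actually running on the segment microcontrollers):

import numpy as np

def wormbot_wave(n_segments=16, steps=2000, dt=0.01,
                 freq=1.0, phase_lag=2 * np.pi / 16, k0=2.0):
    """Chain of phase oscillators with distance-decaying coupling.

    Each segment tries to keep a fixed phase lag behind its neighbours, with coupling
    strength k0 / distance; the result is a travelling wave of joint-angle commands.
    Returns an array (steps, n_segments) of joint angles in degrees.
    """
    phases = np.random.default_rng(1).uniform(0, 2 * np.pi, n_segments)
    idx = np.arange(n_segments)
    dist = np.abs(idx[:, None] - idx[None, :])
    coupling = np.where(dist > 0, k0 / np.maximum(dist, 1), 0.0)   # decays with distance
    target = phase_lag * (idx[:, None] - idx[None, :])             # desired phase offsets
    angles = np.zeros((steps, n_segments))
    for t in range(steps):
        diff = phases[None, :] - phases[:, None]                   # pairwise phase errors
        dphase = 2 * np.pi * freq + (coupling * np.sin(diff - target)).sum(axis=1)
        phases += dt * dphase
        angles[t] = 30.0 * np.sin(phases)                          # joint angle commands
    return angles

wave = wormbot_wave()
# After settling, adjacent segments show roughly the commanded phase lag.
print(np.round(wave[-1, :4], 1))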
In the current setup, users can adjust all internal parameters (such as CPG phase offset, travelling speed,
etc.) with a GUI running on a remote PC, which communicates with the robot through a wireless link.
During the 2003 Telluride Workshop we used two independent robots of 16 segments each, assem-
bled in two different configurations: the first robot was configured with 0-deg angles between neighboring
segments, which resulted in a snake-like (2-D) motion. The other robot, in contrast, had a 90-deg rotation be-
tween neighboring segments, which allows all odd-numbered segments to provide lift and all even-numbered
segments to generate sideways motion.
With the robot in snake configuration, we spent most of the time tuning CPG parameters such that they
generate traveling waves along the whole body. We found several combinations in which the robot achieved
velocity either along the body line or almost perpendicular to its body, resulting in side-winding motion. We
were ultimately able to ’steer’ the robot in any desired direction using only parameters found by guessing;
however, we are now looking to find sufficient examples of parameters to generalize from these parameters
to resulting motion.
Generating motion with the robot in worm configuration (i.e. with every other segment turned by 90 deg)
is much more complicated. We had to change the software significantly to support two independent traveling
waves (one for sideways and one for upward motion). The travelling waves still had to be synchronized
such that e.g. whenever one wave pushes outwards the other lifts the robot. Ultimately, all segments again
produced smooth motion which provided a slight forward velocity.
Even though we achieved great progress during the workshop (e.g. the robot can now move in desired
directions), there are still many open research questions to investigate. Coupling strengths, for example, are cur-
rently implemented as a simple scalar decaying with distance, whereas it has been shown, e.g. in the lamprey, that
coupling consists of short-range and sparse long-range connections. Also, currently the individual CPGs are trig-
gered and influenced by the head and their neighbors only. In the long term, we would like to incorporate
sensory feedback (from the light sensors and the motor torques) to adapt the behavior.
8.7
Spatial representations from multiple sensory modalities
Participants: Reto Wyss
The aim of this project, which is a joint project between the roving robot and the multimodality work-
group, was to use the various sensors of a mobile Khepera robot in order to form a representation of the
environment, in which the robot was performing a random walk. There was a strong emphasis on actually
embedding the proposed system within the real world, rather than relying on simulation. Furthermore, the
goal was to use online learning algorithms to allow the system to adapt to new environments as well as being
flexible with repspect to potentially changing environments.
Please find the project’s full report within the multimodality workgroup.
8.8
Biomorphic Pendulum
Participants: Shana Mabari
The Biomorphic Pendulum project was conceived by Mark Tilden, Tobias Delbruck and Shana Mabari.
The concept was to experiment with a sensor-dominated space. We designed a system exploring the relation-
ship between infrared LEDs, infrared sensors, drive motors, circuit boards, and recycled Biobug materials.
Two motors rest on top of the tetrahedron, driven by a circuit board dictating the pendulum direction. An
LED hangs from the top center, emitting infrared signals to four infrared sensors placed strategically on each
side of the square base. When an observer reaches to grasp the pendulum, part of the sensor detection is
interrupted, pulling the pendulum 'away' from the observer.
This project gave me an opportunity to explore the theory of reactive environments while incorporating
unfamiliar materials and techniques. I consider this project a prototype for larger and more complex future
installations, for example a four-meter-high tetrahedron incorporating a similar sensor detection system.
Interaction with the 'environment' would require the viewer to walk around the pendulum, ultimately
influencing the sensors and the pendulum direction.
Figure 8.8: Photo of the Biomorphic Pendulum Setup
Chapter 9
The Multimodality Project Group
Project Leaders Barbara Webb and Richard Reeve
Biological systems typically combine many sensory systems for the control of behaviour. Our under-
standing of how this is accomplished — smoothly and robustly — is still very limited. The purpose of this
workgroup was to discuss these issues and to attempt to implement several multimodal systems, using var-
ious neuromorphic sensors and robot systems. There was a strong overlap with several other workgroups,
particularly with Roving Robots, Locomotion, and Online Learning.
The participants had a wide range of interests, from problems of combining simple but potentially con-
flicting sensory inputs on a robot, through issues of incorporating proprioceptive feedback and vision, to the
fusion of auditory and visual cues in speech perception. Nevertheless, there was much common ground in
the nature of the problems encountered.
Summary of projects:
1. Spatial representations from multiple sensory modalities
This project combined several different sensory modalities in order to overcome potential ambiguities
encountered when a robot autonomously learns a mapping from sensory space to position in its envi-
ronment. A hierarchical neural network was optimized such that the cells at the various levels of the
network become sparse encoders of different single or multi-modal features of the environment. A
Khepera robot equipped with proximity sensors, light sensors and a 3x3-pixel colour visual sensor performed
a random walk in an arena. The connections within the network were adapted online, under the con-
straint that cells within a group become maximally decorrelated, using standard gradient descent. After
learning, the position and orientation of the robot were tracked with a camera to determine the place
of maximum response for each cell in the highest layer. The results showed the development of selec-
tivity of these cells to certain positions and orientations of the robot in its environment, analogous to
the place cells found in rats.
2. Mapping auditory to visual space
3. Fusing of vision and proprioception
4. Balance and posture in biped robots (full report under locomotion).
Some general issues that emerged from the projects were the following:
• How can we deal with differential delays in different sensory systems? For example, can the resulting
tendency to oscillation be overcome by simple damping?
• Should some sensory inputs be abandoned if they seem to conflict too much with others? Is there a
general mechanism by which this evaluation may be made, rather than using rules-of-thumb?
• Complex, potentially unreliable, mechanical systems become very hard to predict.
• How can merging or correspondence be handled for sensory fields that only partially overlap (e.g. a
limited visual angle vs. 360-degree auditory field)?
• How is the spatial representation of an environment, based on merging sensory cues, biased by the
varying saliency of different stimuli in the environment? Is this potentially a feature for landmark
extraction?
One issue discussed in greater detail was mechanisms for prediction as a means of achieving multimodal-
ity. For example, the effects of one modality on another might be predictable by the nervous system, and
an internal ’efferent’ copy be used to cancel these effects. Although this idea has been around for many
years, it is not so straightforward to see how it is implemented in real nervous systems. One problem is the
implication that a complete internal model of the effectors, environment and sensors is needed for accurate
transformation of a motor output signal to a sensory input signal. Tony discussed how in many cases this
problem might be reduced to fewer dimensions, by using a higher level motor command transformed to a
higher level sensory representation. Avis noted that CPG controllers typically have pathways connecting the
outputs both directly to sensory pathways (modulating the input) and to the cerebellum, but that there was
little biological data on what interactions actually take place. Sven reported the use of a predictive loop in a
’soccer robot’ to overcome the difficulty of a 150ms delay in the sensory feedback. They used the known ge-
ometric mapping from motor to visual space, but also used adaptive mechanisms to refine the predictor. This
raises the issue of whether such predictors can be fully learnt from an arbitrary starting point. Steven gave
several examples from visual neuroscience, in which activation shifts can be seen in advance of saccades.
Matt raised the example of electric fish, where the predictive neural signals have been explicitly mapped. A
final point raised was that of learning and eligibility traces: more generally, how the time course of a
predictive signal also needs to be controlled appropriately.
A further discussion was held about the ISO-learning algorithm proposed by Porr and Worgotter. This
algorithm for learning temporal relations has a number of nice properties and may be one appropriate ap-
proach to the issues raised above. However it also seems very plausible that a variety of mechanisms, rather
than a single principle, will be needed to solve the issue of multimodal integration.
9.1
Spatial representations from multiple sensory modalities
Project Leader Reto Wyss
Aims of the project
Any autonomous mobile agent engaged in a navigation task needs to be able to acquire an appropriate rep-
resentation of its environment. This is considered a hard problem, because an autonomous system only has
access to local sensory information and cannot rely on any type of global positioning system. Often, however,
individual local sensors exhibit singularities, i.e. very similar sensations can be elicited from completely
Figure 9.1: The network. The Khepera robot provides two basic sensors, i.e. an RGB camera mounted on its top and
the eight IR sensors arranged around the robot. The five resulting sensory channels (contrast, red-green, blue-yellow,
proximity and ambient light) feed into a hierarchical network, within which sensory information is gradually combined
while moving up the hierarchy. The topmost cells combine inputs from all sensory channels.
different positions within an environment. The first aim of this project was to combine different kinds of
sensory modalities in order to overcome such sensory ambiguities, allowing for a well-defined mapping from
the sensory space to the position of the agent within its environment.
The second goal of this project was to implement a system which allows the autonomous and unsuper-
vised acquisition of such a representation of the environment. Furthermore, such a learning system should
incorporate as little a priori knowledge or assumptions about the type of environment it will be confronted
with as possible. This allows for high flexibility such that the system can cope with a variety of different
environments.
A hierarchical neural network was studied which receives its input from different sensors mounted on a
mobile robot performing a random walk within an environment. The network is optimized such that the cells
at the various levels of the network become sparse encoders of different single or multi-modal features of the
environment. Furthermore, the cells are trained such that they are minimally correlated with other cells. The
hypothesis was that, at the highest level of the hierarchy, cells would emerge which are selective for different
places, and therefore represent the robot's position within the environment.
Methods
For the experiments we used the mobile Khepera robot (K-Team, Lausanne, Switzerland; see Fig. 9.1, left),
which was equipped with a standard CCD camera mounted on its top. The RGB image provided by this
camera is downsampled to a resolution of 3 × 3 pixels and split into three different channels representing
the luminance, red-green and blue-yellow contrast. Further sensory modalities are provided by the eight IR
sensors located around the robot's body, which have two modes of operation. In the active mode, these sensors
emit infrared light and provide an estimate of the distance of nearby objects based on the intensity of
the reflected light. In the passive mode, the sensors measure the amount of ambient light, which may vary
depending on the robot's orientation with respect to different light sources.
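The camera preprocessing described above can be sketched as follows (Python/NumPy); the colour-opponency
weights are illustrative, since the exact weights used in the experiments are not given in this report.

import numpy as np

def preprocess(rgb):
    # Downsample an RGB frame to 3x3 pixels and split it into luminance (contrast),
    # red-green and blue-yellow opponency channels (illustrative weights).
    h, w, _ = rgb.shape
    crop = rgb[:h - h % 3, :w - w % 3]
    small = crop.reshape(3, crop.shape[0] // 3, 3, crop.shape[1] // 3, 3).mean(axis=(1, 3))
    r, g, b = small[..., 0], small[..., 1], small[..., 2]
    luminance = (r + g + b) / 3.0
    red_green = r - g
    blue_yellow = b - (r + g) / 2.0
    return luminance, red_green, blue_yellow

rgb = np.random.rand(96, 96, 3)     # stand-in for a camera frame
lum, rg, by = preprocess(rgb)       # three 3x3 channel maps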
The robot is placed in an environment which consists of an arena whose borders have a height of ap-
proximately 2cm. Thus, while not being able to cross these borders, the robot is able to perceive the world
outside of the arena. During the experiment, the robot is randomly exploring the environment while avoiding
obstacles, i.e. the borders of the arena. Around the arena, different objects in different colors have been
placed in order to make the robot’s environment more interesting in terms of visual stimulation. In addition,
a light-source is placed outside the lower-right corner of the arena (see Fig. 9.2).
The learning system consists of a hierarchical multi-layer network (Fig. 9.1). Starting at the lowest level,
cells receive input from a single sensory channel. Moving up the hierarchy, the receptive fields of the cells
extend to become multi-modal. At the top layer, all the cells combine information originating from all
five different sensory channels. The goal of the learning system is to adapt the weights of the connections
between the different layers such that the following criteria are fulfilled: 1) the activity of each individual
cell varies smoothly over time; 2) different cells within a group are maximally decorrelated. These criteria can
be described by the following objective function, given Ai, the activity of cell i within a group of cells:
O = -\sum_i \frac{\left\langle (dA_i/dt)^2 \right\rangle_t}{\mathrm{var}_t(A_i)} \; - \; \sum_{i \neq j} \mathrm{corr}_t(A_i, A_j)^2
(9.1)
where var_t(A_i) is the variance of cell i and corr_t(A_i, A_j) is the correlation between cells i and j, both
computed over time. The two components in this formula account for the two criteria listed above, respectively.
The learning algorithm uses standard gradient ascent to maximize eq. 9.1. Furthermore, the system learns
online, i.e. the different statistical measures used in eq. 9.1 are also computed online.
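For illustration, the following sketch evaluates eq. 9.1 for a group of cells and performs gradient ascent on
their weights using a numerical gradient. It is a simplified batch version of the procedure described above
(the actual system computes the statistics and the gradient online, and the cells sit in a hierarchical network
rather than being simple linear units); all sizes and rates are assumed values.

import numpy as np

def objective(A, dt=1.0):
    # Eq. 9.1: penalize fast temporal changes (normalized by each cell's variance)
    # and penalize pairwise correlations between cells.  A has shape (T, n_cells).
    dA = np.diff(A, axis=0) / dt
    smoothness = -np.sum(np.mean(dA ** 2, axis=0) / (np.var(A, axis=0) + 1e-9))
    C = np.corrcoef(A.T)                                  # n_cells x n_cells
    decorrelation = -(np.sum(C ** 2) - np.trace(C ** 2))  # exclude the i == j terms
    return smoothness + decorrelation

def gradient_ascent_step(W, X, lr=1e-3, eps=1e-5):
    # One step of numerical gradient ascent on O for linear cells A = X W^T.
    # X: (T, n_inputs) sensory data, W: (n_cells, n_inputs) weights.
    grad = np.zeros_like(W)
    base = objective(X @ W.T)
    for idx in np.ndindex(*W.shape):
        Wp = W.copy()
        Wp[idx] += eps
        grad[idx] = (objective(X @ Wp.T) - base) / eps
    return W + lr * grad

# Toy usage: 200 time steps of 5-channel sensory input, one group of 3 cells.
X = np.cumsum(np.random.randn(200, 5), axis=0)    # smooth-ish random-walk input
W = 0.1 * np.random.randn(3, 5)
for _ in range(20):
    W = gradient_ascent_step(W, X)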
Results
At the beginning of an experiment, all the weights of the network are initialized randomly. The robot is
placed in the environment, where it runs around and learns the receptive fields of the different cells. For
the sake of simplicity, learning is not engaged in all layers simultaneously, but rather follows a schedule
given by the hierarchical structure of the network. Thus, initially, cell groups within the first layer
learn their receptive fields on the sensory inputs. Only once the learning process has reached a stable state,
i.e. the objective function O no longer changes over time, is learning engaged in the subsequent
layer. The whole learning process lasted approximately 3 hours.
In order to locate the receptive fields of the cells in the highest layer of the network within the envi-
ronment, the position of the robot as well as its orientation are tracked with a camera. After learning, the
activity of the cells at the highest level of the network is recorded over time. Both the position and the
orientation at which the cells respond maximally are determined and plotted in Fig. 9.2. Most of the cells
are clustered around a region in the lower-right corner of the environment. The reason for this accumulation is
not clear, except that it is concordant with the position of the light source, which was placed outside the arena
at this corner. Furthermore, the rest of the cells tend to distribute along the borders of
the environment, while only a few are located in the center.
The typical selectivity of the cells with respect to the robot's position and orientation is shown in Fig. 9.3a
& b. The response of the cell shown in Fig. 9.3a is particularly orientation specific, while it is not very
selective for the position of the robot. This cell has a strong affinity for the large red object in the upper-left
corner of the environment (see Fig. 9.2), i.e. the cell only responds strongly if this object is within its visual
field. The other cell, however, is more specialized for a particular position of the robot while being more
tolerant with respect to its orientation (Fig. 9.3b).
Figure 9.2: Distribution of receptive fields within the environment. Each blue spot corresponds to the position at
which a cell from the highest level of the network responds maximally. The white lines attached to each spot specify
the preferred orientation of a cell, i.e. the orientation of the robot at which the cell has a maximal response.
Figure 9.3: Receptive fields of two sample cells for eight different orientations within the environment. The arrange-
ment of the single images within a circle is in accordance with the orientation of the robot. The dark regions correspond
to stronger responses, while the eight plots within a panel are normalized to the maximal response.
Discussion
In this project we have built an autonomous system that is capable of forming representations of its envi-
ronment based on purely local sensory information. The learning process is performed online, and only a
minimum of a priori assumptions about the environment are built in. This allows for generalization to
different types of environments as well as optimal flexibility with respect to dynamically changing
environments.
The cells at the top level of the network developed selectivity with respect to the position of the robot.
In addition, however, all of the cells are also to some degree orientation selective. Thus, these cells cannot
be directly compared to the position-selective cells found in rat hippocampus, called place cells, which are
omni-directional. Rats, however, have a very large field of view (≈ 320◦), as opposed to our robot, whose
camera has a field of view of only 60◦. This could be a potential explanation for this difference regarding
orientation selectivity.
The receptive fields of most of the cells developed in the proposed network tend to arrange along the
border of the arena, and even more prominently at the lower-right corner of the environment. The latter
location corresponds to the location of the light source placed in the environment. Thus, it appears that
the cells prefer locations at which most of the sensory modalities are strongly stimulated, i.e. the proximity
sensors at the borders or the ambient-light sensors near the light source. A detailed analysis, however, will
need to be performed in order to elucidate the underlying mechanisms responsible for this phenomenon.
9.2
The cricket’s ears on a barn owl’s head, can it still see?
Project Leader Kerstin Preuschoff
Participants: Kerstin Preuschoff
Barbara Webb
Motivation and Goal
Barn owls are known to have a highly evolved capacity for sound localization. Sound cues such as interaural
timing differences (ITD) and interaural level differences (ILD) are highly variable across individuals due to
differences in the shape and size of head. It is therefore not surprising that the auditory system uses other
sensory cues to calibrate itself. Inspired by the barn owls ability to not only form closely aligned visual and
auditory spatial maps we tried to get a Khepera robot form spatial maps of its environment in a similar way.
Experiments
The experimental setup consisted of a Khepera robot equipped with a sound localization system which
measured the interaural time difference (ITD). In addition a vision chip was added to allow visual target
localization. The visual system sent back the location of the target within the (limited) visual field. Based
on the auditory cues the robot would turn towards the sound location until a close to zero ITD indicated
that the robot was facing the sound source. A neural network received as input both the ITD and the
target location within the visual field. The mismatch between the auditory and visual locations was used as an
instruction signal to modify the weights within the neural network so as to improve sound localization.
Figure 9.4: Khepera robot with cricket ears and vision chip
In a second experiment we wanted to simulate the barn owl's ability to realign its visual and auditory
spatial maps after the visual field is shifted by a fixed angle.
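A minimal sketch of this visually instructed calibration is given below (Python); the population-code size,
ITD range and learning rate are invented for illustration, and the actual network used on the robot is not
specified at this level of detail in the report.

import numpy as np

N_ITD_CHANNELS = 16            # discretized ITD input channels (assumed)
LR = 0.05                      # learning rate (assumed)
w = np.zeros(N_ITD_CHANNELS)   # weights from the ITD code to a turn command (deg)

def itd_population(itd_us, itd_range=(-300.0, 300.0), sigma=40.0):
    # Gaussian population code over a hypothetical ITD range (microseconds).
    centers = np.linspace(itd_range[0], itd_range[1], N_ITD_CHANNELS)
    return np.exp(-0.5 * ((itd_us - centers) / sigma) ** 2)

def calibrate(itd_us, visual_error_deg):
    # Delta-rule update: the residual visual mismatch (degrees off-center in the
    # camera image after the auditory-driven turn) instructs the mapping.
    global w
    x = itd_population(itd_us)
    turn = w @ x                        # current auditory estimate of target azimuth
    w += LR * visual_error_deg * x      # shift the estimate toward the visual target
    return turn

# e.g. a sound produced an ITD of 80 us; after turning, vision still reports
# the target 5 degrees off-center:
calibrate(itd_us=80.0, visual_error_deg=5.0)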
Results
The Khepera robot could correctly localize a sound source in space based on its interaural cues. However,
the camera used to visually localize the target turned out to have too small a visual field to discover a target
that was mislocated by more than 10 deg. This range was too small to create an instructive signal proportional
to the absolute displacement. However, within that small range the neural network correctly detected the
direction of misalignment and adjusted its weights accordingly.
9.3
Fusion of Vision and Proprioception
Leader M. Tony Lewis
Participants: Avis Cohen
Barbara Webb
Chiara Bartolozzi
Elizabeth Felton
Karl Pauwels
Peter Asaro
Introduction
This project aimed to find a biologically plausible way to integrate vision and proprioception. This combined
information is useful in guiding robotic locomotion. As a specific application of this we developed a system
that teaches a robotic cat to retract its paw based on tactile and visual information.
Biological Motivation
The starting point of our project was the existence of multimodal cells that respond both to visual stimuli
and to tactile or proprioceptive signals. The chosen approach was inspired by the work of Graziano [1],
who showed evidence of the existence of parietal cells that respond to visual stimuli located in the proximity
of their tactile receptive field. The visual receptive field of these cells is anchored to a particular part of
the body. This implies that the brain somehow translates the retinotopic information, originating from the
primary visual cortex, into a "part-of-the-body"-centered response. In our case such cells, centered on
the foot, can be used to predict a collision with an obstacle and trigger a response of the Central Pattern
Generator. In parietal cortex there is also evidence [2] for cells that respond strongly to combinations of
visual perception and proprioception of the arm. Together these two types of cells suggest that the brain
constructs a coherent response to stimuli represented in different coordinate systems. The proprioceptive
signal, in our case represented by the joint angles (hip and knee), gives a reference for determining the actual
position of the foot in the world-centered space, and is used to link the foot position to objects in the visual
field.
Model
Pouget & Sejnowski [3] proposed a model based on a linear combination of basis functions for learning
head-centered receptive fields. The two-dimensional Gaussian visual receptive field is modulated by a sig-
moidal gain function that encodes the head position. They showed that the resulting function can act
as a basis function. Given a certain number of such basis functions, the network can learn all the possible
head-centered receptive fields. This approach is similar to radial basis function networks. An interesting
aspect of this architecture is that additional modalities can be integrated in the same manner.
We performed a simulation study on a version of this model, adapted to our purposes. In our case the
visual receptive fields are the same, but we have two proprioceptive signals: the hip and knee joint angles. The
goal of the model is to learn the response of a foot-centered neuron, with a Gaussian response depending on
the distance between an object and the foot. The network has to learn the mapping from joint angles to
Figure 9.5: Target response (desired output) versus network output for the training (A) and test (B) datasets.
foot position, which can be described geometrically as follows. Given the xF coordinate of the foot, which is
constant for a robot walking along a straight line, the position of the foot (yF, zF) can be derived from the
knee angle kF and hip angle hF:

y_F = -u \sin(h_F) + l \sin(h_F - k_F)
(9.2)
z_F = u \cos(h_F) - l \cos(h_F - k_F) .
(9.3)
Simulation Results
The following simulation was performed to demonstrate that the proposed architecture is able to perform the
above-mentioned mapping. We covered the visual space with two-dimensional Gaussian response neurons,
and both joint spaces with one-dimensional Gaussian response neurons. A training set and an independent
test set were generated by randomly populating the visual and joint-angle spaces and determining the tar-
get response using Eqns. (9.2) and (9.3). In accordance with [3], the basis functions are constructed by
taking all possible combinations of elementary functions and calculating their products. Since the activity
of these neurons is highly correlated, Principal Component Analysis (PCA) was performed first to project
the responses into a lower-dimensional space where the activations are uncorrelated. The network output is
then determined by linearly combining these basis-function responses. The weights were set to the linear
least-squares solution.
The network outputs on a training set of sample size 50 and an independent test set of size 50 are shown
in Fig. 9.5. It is clear that the proposed network adequately approximates the mapping. The mean squared
errors on the training and test sets are 3.38 × 10−4 and 1.14 × 10−3, respectively.
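The following sketch (Python/NumPy) reproduces this simulation pipeline in simplified form: Gaussian tuning
curves over the visual coordinates and the two joint angles, products of all combinations as basis functions,
PCA to decorrelate them, and a linear least-squares readout of the foot-centered response defined through
Eqns. (9.2) and (9.3). The limb lengths u and l, the numbers of tuning curves, the tuning widths and the
number of retained principal components are all assumed values.

import numpy as np

u, l = 1.0, 1.0                         # limb segment lengths (assumed)
rng = np.random.default_rng(0)

def foot_position(hip, knee):
    # Eqns. (9.2)-(9.3): foot coordinates from the hip and knee angles.
    yF = -u * np.sin(hip) + l * np.sin(hip - knee)
    zF =  u * np.cos(hip) - l * np.cos(hip - knee)
    return yF, zF

def gauss(x, centers, sigma):
    return np.exp(-0.5 * ((x[:, None] - centers[None, :]) / sigma) ** 2)

cy = cz = np.linspace(-2.0, 2.0, 6)     # visual tuning-curve centers (assumed)
ca = np.linspace(0.0, np.pi / 2, 6)     # joint-angle tuning-curve centers (assumed)

def features(obj_y, obj_z, hip, knee):
    # Products of all combinations of elementary responses = basis functions.
    F = (gauss(obj_y, cy, 0.5)[:, :, None, None, None]
         * gauss(obj_z, cz, 0.5)[:, None, :, None, None]
         * gauss(hip, ca, 0.3)[:, None, None, :, None]
         * gauss(knee, ca, 0.3)[:, None, None, None, :])
    return F.reshape(len(obj_y), -1)

def target(obj_y, obj_z, hip, knee, sigma=0.3):
    # Foot-centered cell: Gaussian of the object-to-foot distance.
    yF, zF = foot_position(hip, knee)
    return np.exp(-0.5 * ((obj_y - yF) ** 2 + (obj_z - zF) ** 2) / sigma ** 2)

def make_set(n):
    oy, oz = rng.uniform(-2, 2, n), rng.uniform(-2, 2, n)
    hip, knee = rng.uniform(0, np.pi / 2, n), rng.uniform(0, np.pi / 2, n)
    return features(oy, oz, hip, knee), target(oy, oz, hip, knee)

Xtr, ytr = make_set(50)
Xte, yte = make_set(50)

# PCA to a lower-dimensional, decorrelated space, then linear least squares.
mean = Xtr.mean(axis=0)
_, _, Vt = np.linalg.svd(Xtr - mean, full_matrices=False)
P = Vt[:20]                                        # keep 20 components (assumed)
w, *_ = np.linalg.lstsq((Xtr - mean) @ P.T, ytr, rcond=None)
pred = (Xte - mean) @ P.T @ w
print("test MSE:", np.mean((pred - yte) ** 2))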
Robot Experiment
In a realistic situation, an organism has to learn the integration between visual and proprioceptive signals
in another manner. We developed a realistic way to generate training data and applied this to a robotic
quadruped system. The robot is shown in Fig. 9.6. Due to the noise inherent in the flow fields and the time
constraints of the project, we opted for a simplified model which demonstrates that our approach is feasible
for learning a response from visual and tactile information. We restricted ourselves to a situation in which
the joint angles are fixed and the complete mapping does not have to be learned.
A reinforcement learning paradigm was chosen to couple visual motion information to the appropriate
response. The input to the system consists of the components of the optic flow field towards the paw. From
Figure 9.6: Quadruped robot with paw touch sensor
Figure 9.7: Second order eligibility trace
this motion field, a second-order eligibility trace (see Fig. 9.7) is constructed which enables the system to
discover the causal relationship between the visual information and the contact with the paw. In the training
phase, learning is initialized when an object makes contact with the paw. At this instant, the weights of an
array of neurons with receptive fields in the visual space are updated: each weight is increased in proportion
to the eligibility trace at the respective location. Once the system has learned, the retraction response is
automatically triggered when a moving object is close to hitting the paw. The response occurs when the sum
of the visual input, weighted by the learned weights, exceeds a threshold. To make the system less sensitive
to noise, the weights gradually decay after each learning instance. Furthermore, the weights are normalized
to trigger a winner-take-all mechanism. Fig. 9.8 shows a screenshot of the system in action.
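A simplified sketch of this learning scheme is given below (Python); the array sizes, time constants and
threshold are invented, and it illustrates the mechanism (a second-order eligibility trace of the motion input,
weight growth gated by paw contact, decay and normalization, and a thresholded retraction response) rather
than the exact implementation used on the robot.

import numpy as np

N = 64                    # visual receptive fields covering the visual space (assumed)
TAU = 20.0                # eligibility time constant, in frames (assumed)
weights = np.zeros(N)
e1 = np.zeros(N)          # first-order eligibility trace
e2 = np.zeros(N)          # second-order eligibility trace

def update(motion_toward_paw, paw_contact, lr=0.5, decay=0.999, threshold=1.0):
    # One frame of the learning/response loop.
    # motion_toward_paw: array (N,), optic-flow component toward the paw per field.
    # paw_contact: True on the frame an object touches the paw.
    # Returns True if the learned retraction response is triggered.
    global weights, e1, e2
    # Two cascaded leaky integrators give the second-order eligibility trace.
    e1 += (motion_toward_paw - e1) / TAU
    e2 += (e1 - e2) / TAU
    if paw_contact:
        # Contact gates learning: fields whose activity reliably preceded the
        # contact become predictive of it.
        weights += lr * e2
        if weights.sum() > 0:
            weights /= weights.sum()      # normalization (winner-take-all tendency)
    weights *= decay                       # slow forgetting for noise robustness
    # Once learned, strong weighted visual input triggers retraction before contact.
    return float(weights @ motion_toward_paw) > threshold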
Future Work
Future work consists of combining the adapted version of Pouget's model with the proposed reinforcement
learning paradigm. With the added ability to learn the foot position, the resulting system will be able
to avoid obstacles while walking. Additional issues need to be considered here, however, such as the
(structured) distortion of the motion field due to head movements and looming effects.
Figure 9.8: Screenshot of the system in action. The first column shows the camera input, the second column the
motion components towards the robot, the third column shows the first- and second-order eligibility traces and the
learned weights and the fourth column shows the optic flow field.
Bibliography
[1] Graziano, M.S. & Gross C.G. (1998) Spatial maps for the control of movement. Curr Opin Neurobiol.
8(2):195–201.
[2] Graziano, M.S., Cooke D.F. & Taylor C.S. (2000) Coding the location of the arm by sight. Science.
290:1782–1786.
[3] Pouget, A. & Sejnowski, T.J. (1997) Spatial transformations in the parietal cortex using basis functions.
Journal of Cognitive Neuroscience. 9(2):222–237.
Chapter 10
The Computing with liquids Project Group
Project Leader Robert Legenstein
This project deals with a novel model for cortical processing, called the Liquid State Machine (LSM).
The main idea is described below. In this model, a recurrent circuit of spiking neurons acts as a dynamical
system or nonlinear filter that produces diverse responses to the input function. A simple memoryless readout
neuron can be trained to map such responses (liquid states) onto target outputs.
In a discussion, Nici Schraudolph suggested the use of learning algorithms to facilitate the diversity of
responses within the circuit. A single neuron could be locally trained to have responses that are maximally
decorrelated from its inputs (other neurons in the recurrent circuit). Within the Multimodality workgroup,
the idea arose to use Liquid State Machines to integrate different sensory modalities. As future work, this
could be implemented on a mobile robot with various sensors.
Software simulation of recurrent neural circuits is time consuming. Using neural circuits implemented in
analog VLSI circumvents this difficulty, because such circuits perform complex operations in real time. The
aim of the project "Using an AER recurrent chip as a liquid medium" is therefore to check the applicability
of the concept on aVLSI circuits. Furthermore, the properties and information content of the chip output can
be explored, therefore serving both sides. For the chips used here, we tested the ability of the system to
extract non-linear information from a set of inputs. The results suggest that fast dynamical responses limit
the ability to extract integrated temporal information.
The Liquid State Machine Framework
The conceptual framework of a Liquid State Machine (LSM) facilitates the analysis of the real-time comput-
ing capability of neural microcircuit models. It does not require a task-dependent construction of a neural
circuit, and hence can be used to analyze computations on quite arbitrary “found” or constructed neural
microcircuit models. It also does not require any a priori decision regarding the "neural code" by which
information is represented within the circuit (see also [3], [4]).
Temporal Integration The basic idea is that a neural (recurrent) microcircuit may serve as an unbiased
analog (fading) memory (informally referred to as “liquid”) about current and preceding inputs to the
circuit.
Figure 10.1: The Liquid State Machine (LSM). The recurrent microcircuit (liquid) transforms the input into states
x(t), which are mapped by the memory-less readout functions f1, . . . , fn to the outputs f1(x(t)), . . . , fn(x(t)).
The “liquid state” We refer to the vector of contributions of all the neurons in the microcircuit to the mem-
brane potential at time t of a generic readout neuron as the liquid state x(t). Note that this is all the
information about the state of a microcircuit to which a readout neuron has access. In contrast to the
finite state of a finite state machine, the liquid state of an LSM need not be engineered for a particular
task. It is assumed to vary continuously over time and to be sufficiently sensitive and high-dimensional
that it contains all information that may be needed for specific tasks.
Memoryless readout map The liquid state x(t) of a neural microcircuit can be transformed at any time t by
a readout map f into some target output f(x(t)) (which is in general given with a specific representation
or neural code).
Offline training of a readout function It is possible to train a memory-less readout to produce the desired
output at time t. If one lets t vary, one can use the same principles to produce as output a desired time
series or function of time t with the same readout unit.
10.1
Using an AER recurrent chip as a liquid medium
Leader Robert Legenstein
Participants: Steven Kalik
Before this workshop, the concept of liquid state machines had only been used in software simulations.
In this workshop, we grasped the opportunity to test the model on an analog VLSI chip that models neural
Figure 10.2: A schematic of the setup. Moving gratings were presented to the silicon retina. The silicon retina projects
to the V1 chip (by AER). The output of this chip is sent to a logic analyzer (by AER). The data is analyzed on a PC
offline.
Figure 10.3: A grating of spatial frequency 1.5 (periods per image width).
circuitry. The use of hardware in this context has several advantages compared to software. First of all,
hardware operates in real time, which allows us to explore circuits with a large number of neurons. In this
case, the chip implemented 9216 neurons. Another important advantage is that one can easily use real-world
data represented by spike trains. Furthermore, hardware implementations of neural circuits are "dirty"
in their response and noise properties. Such behaviour is probably more closely related to the operational
mode of biological circuits than simulations are.
We used a vision chip by Paul A. Merolla (see [1]). A visual stimulus (drifting grating) was presented to
a silicon retina (Kwabena Boahen, see [2]). This chip in turn projected its output onto a vision chip which
models orientation selectivity in V1. The outputs of this cortical chip were then sent (via AER) to
a logic analyzer, which recorded all spikes that occurred. The internal dynamics of the chip were used as the
liquid medium from which information could be extracted. In particular, we were interested in the direction
of movement and the spatial frequency of the stimulus. Linear regression and Fisher's Linear Discriminant
analysis were applied to the data in order to learn the target function.
The setup
Figure 10.2 summarizes the setup we used.
As input stimuli we used moving sinusoidal gratings of different spatial frequencies, temporal frequencies,
and directions. One such stimulus is shown in Figure 10.3.
(a) Output of the linear neuron on training data (red). The target function is shown in blue. Note that
outputs greater than zero are classified as 1 and others are classified as −1. (b) Thresholded output of the
linear neuron on test data. The target function is shown in blue.
Figure 10.4: Results after learning on spike counts in 1 millisecond time bins.
Extracting direction information
The target was to distinguish between two different directions, specifically leftward and rightward movement
of the grating. All gratings had spatial frequency of 1 and temporal frequency of 2 Hz. Since the readout
element is memoryless and has access only to one temporal snapshot of the system’s state at a time, this task
requires that there is some temporal integration within the circuit.
A linear readout neuron was trained on this task using Fisher’s linear discriminant. For more information
on issues of learning, we refer to Section 3.3. The training data consisted of five presentations of the stimulus
in each direction. Each presentation lasted for about 700 msec. Spikes in one millisecond bins (also 10 ms
bins) were counted. This resulted in a total of about 7000 training examples (respectively about 700 for the
10 ms bins). Three hundred fifty neurons were chosen randomly from among the active ones to provide the
liquid state (see also Section 3.3).
The linear classifier was then tested on one leftward and one rightward stimulus of about the same length.
To successfully discriminate between these stimuli, it is crucial to generalize across different presentations.
For this purpose, the 5 training presentations available in each direction constitute a very small data set.
Training on ten-millisecond bins did not succeed. Although the training error was small, the classifier
showed practically no generalization ability. This result suggests that the temporal dynamics of the chip act
on a much faster timescale than biological circuits. We therefore proceeded with one-millisecond bins. For
different choices of the training and test set, the training error was between 8.3 and 9.8 percent. This means
that around 9 percent of all snapshots were classified incorrectly. For the test set, we could achieve an error
between 25 and 29 percent. Although this result is not overwhelming, it shows that there is some temporal
integration on a fast time scale within the recurrent circuit of the chip. Figures 10.4(a) and 10.4(b) illustrate
the performance on training and test data.
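The readout-training procedure can be summarized by the following sketch (Python/NumPy), which bins
spikes into liquid-state snapshots and computes a Fisher linear discriminant from the class means and the
pooled covariance. The random spike data here merely stands in for the recorded AER trials, and the
regularization constant is an assumption.

import numpy as np

def bin_spikes(spike_times, neuron_ids, n_neurons, t_max, bin_ms=1.0):
    # Spike counts per neuron in fixed time bins -> liquid-state snapshots.
    n_bins = int(np.ceil(t_max / bin_ms))
    counts = np.zeros((n_bins, n_neurons))
    for t, i in zip(spike_times, neuron_ids):
        counts[int(t // bin_ms), i] += 1
    return counts

def fisher_readout(X0, X1, reg=1e-3):
    # Fisher's linear discriminant separating snapshot classes X0 (-1) and X1 (+1).
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0.T) + np.cov(X1.T) + reg * np.eye(X0.shape[1])
    w = np.linalg.solve(Sw, m1 - m0)
    b = -0.5 * w @ (m0 + m1)
    return w, b

# Toy usage with random spikes standing in for the leftward/rightward recordings.
rng = np.random.default_rng(1)
n_neurons, t_max, n_spikes = 350, 700.0, 5000
left = bin_spikes(rng.uniform(0, t_max, n_spikes),
                  rng.integers(0, n_neurons, n_spikes), n_neurons, t_max)
right = bin_spikes(rng.uniform(0, t_max, n_spikes),
                   rng.integers(0, n_neurons, n_spikes), n_neurons, t_max)
w, b = fisher_readout(left, right)
labels = np.sign(right @ w + b)      # > 0 -> classified as the second class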
Another way of representing the spike train is to convolve it with some kernel function. For
this convolution, we used an exponentially decaying function with a time constant of τ = 30 milliseconds. This
distributes fractional amounts of each spike over several one-millisecond bins in a row. Such a distribution
(a) Output of the linear neuron on test data. The target function is shown in blue. (b) Weights of the linear
discriminator after training.
Figure 10.5: Results after learning on convolved spike trains in 1 millisecond time bins.
mimics the effect of a spike train on the membrane potential of a post-synaptic cortical neuron. Note that this
approach induces some “artificial” memory into the system, which is not a reflection of short term memory
in the chip.
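That preprocessing step can be sketched as follows (Python/NumPy), assuming the spikes have already been
binned at 1 ms as above:

import numpy as np

def convolve_exponential(binned_spikes, tau_ms=30.0):
    # Convolve per-neuron spike counts (time x neurons, 1 ms bins) with a causal,
    # exponentially decaying kernel, mimicking a post-synaptic membrane potential.
    decay = np.exp(-1.0 / tau_ms)
    traces = np.zeros_like(binned_spikes, dtype=float)
    for t in range(binned_spikes.shape[0]):
        prev = traces[t - 1] if t > 0 else 0.0
        traces[t] = prev * decay + binned_spikes[t]
    return traces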
This approach significantly improved the performance of the classifier. With this technique, we could
reduce test error down to 4.5 to 8 percent. The training error practically vanished. Figure 10.5(a) shows the
output of the linear classifier on test data. Figure 10.5(b) shows the weights of the readout element. Note that
no specific pattern can be recognized. The information is distributed across all 350 neurons that constitute
the liquid state.
Conclusions and future work
The work with silicon implementations of neural circuits was a new experience for us. The path to our results
was therefore not as direct as it might appear.
It turned out that the AER chip computes in a dynamic regime that is quite different from that of bio-
logical circuits. The main problem was that short-term temporal integration (short-term memory) was weak,
at least in the parameter regime we considered. A rigorous analysis of the temporal properties of the chip
under different parameter settings should be considered for future work.
Good performance on position-invariant discrimination of spatial frequencies suggests considerable non-
linear kernel properties of the chip. Unfortunately, this line of investigation could not be continued due to
time limitations.
One ambitious goal of this workgroup was to use the setup for the analysis of natural scenes. This
interesting project should also be considered in the future.
Another interesting project, suggested by Kwabena Boahen, was to apply a 40 Hz signal to the bias line
of the cortical chip. This signal loosely models an oscillatory local field potential in the cortex, and has been
hypothesized to facilitate synchronization of cortical activity. Such a project would also make interesting
future work.
Special thanks go to Paul Merolla for his indispensable support.
References
[1 ] http://www.neuroengineering.upenn.edu/boahen/people/paul
[2 ] http://www.neuroengineering.upenn.edu/boahen
[3 ] http://www.lsm.tugraz.at
[4 ] W. Maass, T. Natschläger, and H. Markram. Real-time computing without stable states: A new
framework for neural computation based on perturbations. Neural Computation, 14(11):2531–2560,
2002.


Chapter 11
The Bias Generators Project Group
Project Leader Tobi Delbruck
Participants: Guy Rachmuth
Ryan Kier
Bernabe Linares-Barranco
Andre van Schaik
This was a short but dense project that focused on advanced analog VLSI design techniques. The goal
of the project was to design a silicon compiler for automatically generating layout blocks that implement
bias generators.
Experimental analog VLSI chips depend on a set of parameters (biases) that determine the operating
points of the circuits. The correct values of these biases are sometimes not known before the chip is fabricated
and must be found by trial and error. This design style results in systems that sometimes are not very robust
in operation, because they depend on careful adjustment of these biases. This has hindered the development of
neuromorphic aVLSI technology for commercial application, because this fiddling with biases can result
in systems that are demonstrable but not manufacturable in large quantities. A manufacturable chip must
be essentially self-contained, like an opamp or microcontroller, and must not rely on a set of externally
applied bias voltages that must be critically set for its operation.
In our experience, a system with practical application will not be developed by a commercial partner
until it can not only be demonstrated to work robustly but is also shown to be manufacturable.
Development of manufacturable versions of neuromorphic aVLSI chips has been hampered by the lack of
knowledge and the uncertainty about how to make reference generators that can be readily applied in chips. The
aim of this project was to make a design kit that makes it much easier and quicker to build standardized bias
generators.
The bias generator workgroup put together a set of layout cells and a layout compiler to build bias current
generators. We have made this package generally available at http://www.ini.unizh.ch/~tobi/biasgen and
placed its contents under CVS for its continued evolution.
These bias current generators generate a set of reference currents. These known currents are used to
power amplifiers, set time constants, pulse widths, etc. The generated currents are independent of process
variations and power supply voltage over a fairly wide range.
To access the detailed notes on this design kit, please see the web page given above.
Chapter 12
The Swarm Behavior Project Group
Project Leader Mark W. Tilden
This workgroup assessed various "in vivo" swarm experiments with a two-hundred-unit colony of au-
tonomous BIObug commercial robots. The purpose was to take time-lapse video in controlled conditions
to assess general emergent behaviors. Swarm tests were done in fifty-unit uniform groups to show standard
dynamics (dispersion, territoriality, "food"-signal phototaxis, and remote recognition); then responses were
tested where robots had to defend their infrared "food" source against predation from other robot "species".
Large mixed-group colonies were also filmed, and finally all robots were put through a grueling "Lemming"
test in which they were marched down the schoolhouse steps to show resilience (and for fun).
Reviewing the video at ten-times normal speed implied that, despite uniform programming, small changes
in robot morphology can elicit large-scale behavioral differences. For example, territorial behaviors were
greatly affected by the shapes of the legs, herding abilities were significantly affected by the angle of the op-
tical receivers, even variations in the whisker lengths affected how and why the robots assembled in clusters.
Results primarily showed just how well these robots can match their biological counterparts despite the
lack of advanced programming. The implication is that physical characteristics have a non-trivial influence,
over and above software, on long-term autonomous survivability.
And for real swarm dynamics, nothing beats a group of neuromorphic scientists during a free hundred-
bug giveaway.
Details available on request.
Figure 12.1: A swarm of fifty BIObugs under dispersion test
Figure 12.2: Multi-species colony interaction run
Figure 12.3: Lemmings down the stairs
Chapter 13
Discussion Group Reports
13.1
The Present and Future of the Telluride Workshop: What are we doing
here?

Moderator: Jörg Conradt
Participants: Most of this year’s organizers, staff and participants
This discussion group offered a chance for participants to suggest changes and improvements, or simply
to criticize issues with this year's workshop. The feedback will help the staff and organizers to improve the
workshop in upcoming years. The session mainly emerged out of keywords and brainstorming by
participants, and this report reflects that.
Lectures
• Too long: most presentations used 1.5 h for presenting and expected additional time for questions.
Ideally, presentations should be scheduled for 1 h to 1:15 h at most, leaving the remaining time for questions
• It’d be good to have short abstracts of lectures available
• Suggestion: Fewer talks and more time for project work
• This year was especially problematic because the "Computational Neuroscience Workshop" ran in parallel
during the first week: many people missed the introductory lectures given before the more detailed senior
lectures. However, having that workshop run in parallel to ours was highly appreciated
Workgroups
• Too many workgroups were offered (they took time away from other events such as discussion groups)
• Ideally, fewer workgroups initially, with spare time left for creating workgroups spontaneously
• Initially there was no clear guideline or criterion for which workgroups to attend
• Projects were regarded as extremely important for deeply understanding the material from the lectures
• More space for workgroups (e.g. an additional room, or meeting in the hallways)
• Interaction between workgroups mentioned positively, but further interaction (joint projects, etc) is
needed
Discussion groups
• It is great that the discussion groups are informal
• Announcements should be placed on walls (not just online)
Preparation before the workshop
• No tutorials before the workshop
• More explicitly announce what equipment is available and that space requirements have to be re-
quested beforehand
• Travel information is useful, but more elaborate information about Telluride and its character (both
town and workshop) would be appreciated
• Mentor program: one staff member or returning participant should be assigned as a personal mentor
to each new participant, so people have someone they can informally ask questions that seem too
trivial to email or post to everyone
• Possibly assign the first two or three days as intense background training
Time and Place
• Three weeks seems the perfect length. Shorter would not allow participants to explore all facets of the
workshop; more time seems too much
• The location is recognized as an extremely important factor for the success of the workshop. Places that
offer 'distractions' will not support the community in staying in close interaction
• Telluride is perfect as it offers an intense environment with very little extra work needed (eg no car
rentals required, etc)
• Workshop in the US or Europe? The current funding situation only allows holding the workshop in the
US, but there is no general reason against Europe
Topics of the Workshop
• The individual workgroups/projects are getting more advanced and thus more complicated; more
expert knowledge is required to participate
• Extreme diversity of areas: participants get exposed to a lot of different topics
• A stronger focus on special topics (e.g. aVLSI or robotics) is not desirable
Size of the Workshop
• The total number of participants seems right
• Lecturers are encouraged to stay for a week at least, preferably for the whole workshop
• Maintain a high percentage of applicants (typically students)
• Currently there is rather a high number of regular attendees
Tutorials / Scientific advise during the Workshop
• Individual tutorials on specialized topics will require too much time
• Suggestion: A list of experts (one to two topics per person) should be announced, such that people
searching for advice can talk to someone
• A general tutorial on "How to interface to the world" was requested by many participants. Topics
include basic electronics, microcontrollers, and input/output options for interfacing self-made systems
(chips, robots, etc.) to computers
Most participants mentioned that what they appreciated most was the hands-on character of the workshop,
but there should be more time devoted to hands-on work.
13.2
Trade-offs Between Detail and Abstraction in Neuromorphic Engineer-
ing

Project Leader R. Jacob Vogelstein
A roundtable discussion was organized around the topic of the trade-offs between detail and abstraction
in neuromorphic engineering. The discussion centered around a “panel of experts” selected from the distin-
guished faculty, and was coordinated by a moderator. At the start of the discussion, the moderator instructed
panel members to briefly describe their positions on the subject, after which a lively debate ensued. Students
and faculty in the audience participated throughout the discussion, although the panel had more than enough
material to continue the debate on their own for hours. A description of some of the highlights appears in the
following section, along with a thorough analysis of the proceedings from the perspective of one audience
member and a detailed summary of one panel member’s position.
Highlights
Moderator: Andre van Schaik
Panelists: Rodney Douglas
Avis Cohen
Barbara Webb
Kwabena Boahen
The discussion opened with Barbara Webb (BW) describing four types of trade-offs that are made in
modeling: abstraction vs. detail, simple vs. complex, high-level vs. low-level, and approximate vs. accurate.
She proposed that all models fall somewhere on these spectra, with the exact location depending on one’s
purpose for creating the model. To illustrate and expand on this point, Kwabena Boahen (KB) suggested that
a for a neuromorphic model, if one believes one knows how the brain works, one can move toward a more
abstract model; on the other hand, if the brain is indeed a mystery, concentrating on biological details may
be preferable.
With his response, KB also opened up a new line of discussion. Personally, he said, he does not believe
we know how the brain functions. One way in which this manifests itself in computational models is that
we tend to model the brain using computers that are based on precise calculations, whereas the brain uses a
large number of imprecise calculations to perform its computations. A natural conclusion is that the models
we make are inherently flawed, and it is possible that we would never achieve a true understanding of neural
function if we always pursue it in this way. In contrast, if we construct models that are true to the biology,
with all of its imprecision, we may have a better chance at understanding the brain.
Taking the opposite perspective, Avis Cohen (AC) pointed out that attention to detail can be a distraction,
depending on the purpose of the model. For example, in models of a Central Pattern Generator, it is probably
capturing the properties of the oscillations that is essential, not the specific biophysical properties of the
neurons. Furthermore, in many cases the details are either unavailable or deceiving — we may think we
know the details but neurons are so complicated that we never know the whole story — and including these
in a model can make things much more difficult without adding much functionality.
Rodney Douglas (RD) also initially disagreed with KB, but for a different reason. He suggested that
his primary concern is having a method. His methods include visualizing systems as a set of nodes and
connections, so regarding the axis of detail versus abstraction, he doesn't necessarily care what is inside
each node of the system, but rather what states the nodes represent and how the system evolves from state
to state. He called this perspective the “pragmatist” view, and with this statement initiated a new point for
debate.
The first person to respond to RD was BW, who countered with an assertion that it is already impossible
to eliminate an experimenter’s bias, and trying to model a system starting from a preconceived notion of
how it should be organized further limits one’s vision and predictive capabilities. Replying, RD proposed
that without an inferential bias, we may not be able to predict outcomes from our models or understand our
modeling “results”. For example, if a model is founded on structural accuracy, how does an experimenter
evaluate success? In his pragmatist view, RD knows his models are successful when they achieve the desired
behavior, regardless of what's inside the nodes. Of course, this argument only holds if one believes
that there is nothing “sacred” about neurons per se, that is, if it is possible to build a brain out of a variety
of computational primitives as long as the computations themselves remain the same. If there is some-
thing fundamental about computational neural networks that prohibit them from exhibiting neural functions,
then structural accuracy and attention to biological detail is arguably essential. But why should neurons be
special?
As a possible answer to RD, KB suggested that it’s not necessarily the fact that neurons are somehow
privileged computationally, but if we try to constrain our models by the same constraints facing biology, we
may be more successful at understanding the brain. Moreover, the brain developed its structure and function
in tandem, and the direct costs of computation were important forces in that development — it may be
impossible to understand why the brain does what it does if we place it in a different framework, such as that
of a computational neural network. So in some framework, a Turing machine may be a universal computer,
but in another, the brain might be a universal computer. Looking at modeling from this perspective, it makes
sense to include the details and maintain the essence of computations in the brain. Finally, neuromorphic
engineers who use aVLSI are uniquely suited to this task... but this is the subject of another discussion.
The debate continued along these lines for some time, but eventually there were semantic issues to be
discussed: What is “detail”? What does it mean to “understand” something? When does a model “explain” a
behavior? These questions arose from comments made by audience members, and our panelists agreed that
replicating the input-output behavior of some system is not necessarily an adequate explanation of how the
system works, nor is being able to build an analog of the system. These issues are deep and complex, and
the group reached no real consensus on them.
In the end, each panelist was asked to make a closing statement, and they mostly agreed to disagree.
RD finished with the question of why we cannot currently explain most of what we want to understand in
the brain using a simple integrate-and-fire model of neurons and simple bandpass filters. Is there something
wrong with our computational primitives? And if not, then why bother with a Hodgkin-Huxley model when
we can’t even get the simple models to work? AC closed with an example: neuromodulation is an important
principle in biology, but the details of how we implement neuromodulation may not affect our physical
model. If we leave out certain details and the model doesn’t work, we can always go back and include them
later, but leaving out details can sometimes make the intractable, tractable. KB asserted that if one does not
know how the brain works, a good way to explore it is to build a very detailed model and then try to figure
out how it works. Finally, BW concluded that an “explanation” must be a machine that maps parts of the
model onto the underlying biological structure.
The moderator, Andre van Schaik, closed the discussion with some final thoughts: regardless of the
approach, we all strive to build stable dynamical systems and whatever details we choose to include, they
should not matter enough to significantly affect the model for small changes in the input or parameter space.
Furthermore, it is healthy and productive for our community to work at all levels of this debate — as neuro-
morphic engineers, we must discover what advantages are unique to our approach, and we can only achieve
this goal by exploring and experimenting.
A Dynamical Systems Interpretation
Written by: Marshall M. Cohen
On Saturday, July 12, a very interesting panel discussion was held on “Reduction” in modeling. The
purpose of this note is to give a dynamical systems view of the modeling of neuromorphic phenomena, a
view which I think encompasses much of what the panelists were saying from different viewpoints.
Nothing I say here is new; and I am neither an engineer nor a biologist, but rather a mathematician. But
I was struck that this dynamical systems view was not explicitly brought up in the discussion, so I hope this
exposition will be useful.
The parts of this discussion are
1. The State Space
2. Judging a Model
3. The Role of Detail
The view given below is meant to apply whether the system being modeled is (for example) a single neu-
ron, the cortex, or a walking human being, and whether the model is mechanical (e.g., a robot), computational
(a computer program), or mathematical.
The State Space
We view the system being modeled as having a state space: n parameters are chosen to characterize the
system and, when each of these parameters is given a numerical value, a state of the system is prescribed.
Thus a state is a point (x1, x2, ... xn) in n-dimensional space and the state space — the set of all possible
such points for assignable values of the parameters — is a subspace of n-dimensional space. This state space
might be viewed as a continuous region of n-space (when the tools of calculus and differential equations are
to be used) or a discrete subset of n-space (when e.g., recursive methods are to be used and these points are
inputs to computer programs).
Note that the model itself has a state space, either with the same parameters (choosing which parameters
of the biological system are to be considered is key to constructing the model of that system) or with a larger
set of parameters from which the parameters of the biological system can be reconstructed. For simplicity,
the following discussion will assume, unless otherwise noted, that the model has the same parameters as the
biological system.
Note also that one of the characteristics of a state space is often how, starting at one point of the space,
the system will evolve. Evolution of the system is seen as a path or curve in n-space, starting at the given
point and moving through other points — other states. For a state space to carry the information which
tells how any single state will evolve, the points have to carry in some of their coordinates the information
giving a “direction for the flow vector” or information for a discrete stepwise algorithm. We will not use this
technically in the discussion that follows, but it will inform some of the intuition.
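To fix notation (this formalization is mine, added for this note, not something proposed by the panelists), a state is a point

    x = (x_1, x_2, \ldots, x_n) \in S \subseteq \mathbb{R}^n,

and the evolution from an initial state x_0 is either a flow governed by a differential equation or, in the discrete case, a stepwise map:

    \dot{x} = F(x), \quad x(0) = x_0 \qquad \text{or} \qquad x_{k+1} = f(x_k), \quad k = 0, 1, 2, \ldots

In this notation, the "direction for the flow vector" carried by a state is the value F(x) (or the increment f(x) - x), and an orbit is the curve x(t) (or the sequence x_0, x_1, x_2, ...) traced out from the starting state.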
Judging a Model
In this setup, I propose that a model is usually judged by the following two criteria:
A) How well does the state space of the model match the state space of the biological system (or map
onto the state space of the biological system if the model has more parameters)?
B) Does the structure of the model — the way in which it is constructed to give a state space matching
the state space of the biological system — recognizably capture the most important elements which one
understands (or might conjecture) make the biological system work?
For example, in A), one might hope that the allowable parameter values for the two state spaces are
roughly the same. In the panel discussion, Kwabena said that he would reject a beautiful working model of
the cortex if it used orders of magnitude too much power. Here a value of one parameter would be so much
out of line that he would reject the model. (Other members of the audience indicated that they would forgive
that parameter, if not sell their souls, for such a model.)
Further, with respect to A), one would hope that the dynamics of the state spaces of the biological system
and the model are close; the evolution from a given state should be roughly the same. For example, if the
biological system behaves periodically when it starts from any point in a certain region of the state space
(say, for example that each point moves in a round orbit) then one would hope that the model would also
evolve periodically from points in this region, say with each point moving in a closed curve (even if that curve
is not round). One would not want the model to start periodically from a state in this region but eventually
get more and more “unstable” and eventually go off into some totally unpredictable or catastrophic behavior.
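To make this criterion concrete, the following toy sketch (entirely my own example; the equations and numbers are illustrative and do not model any real system discussed at the workshop) integrates two candidate "models" of a periodic behavior from the same initial state. The first produces a closed, if elliptical, orbit; the second differs only by a small positive growth term, yet its orbit spirals outward and eventually leaves the periodic regime — exactly the kind of dynamical mismatch criterion A) is meant to catch.

    # Toy illustration only: compare a model whose orbits stay on closed
    # curves with one whose orbits slowly spiral outward.
    import math

    def max_amplitude(a, b, eps, x, y, t_end=200.0, dt=0.001):
        """Euler-integrate dx/dt = eps*x - a*y, dy/dt = b*x + eps*y and
        return the largest distance from the origin reached by the orbit."""
        r_max = math.hypot(x, y)
        for _ in range(int(t_end / dt)):
            x, y = x + dt * (eps * x - a * y), y + dt * (b * x + eps * y)
            r_max = max(r_max, math.hypot(x, y))
        return r_max

    # Both orbits start at the same state (0.5, 0).
    print("model A (closed elliptical orbit):", round(max_amplitude(2.0, 0.5, 0.00, 0.5, 0.0), 2))
    print("model B (slow instability):       ", round(max_amplitude(1.0, 1.0, 0.02, 0.5, 0.0), 2))

Model A stays at roughly its initial amplitude forever, while model B, though almost identical in form, grows by more than an order of magnitude over the same interval.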
In B) the philosophy underlying the phrase “neuromorphic engineering” is brought to bear. The neuro-
biological system is modeled with a system of the same rough (physical or logical) form (“morph”). It is not
good enough to come up with a clever engineering solution which reproduces the state space. (A walking
person should not be modeled by a 4-wheeled vehicle.) The solution should, as much as possible, represent
the current understanding of the essential elements which make the biological system work. (Rodney spoke
of how his schematic models should map onto the logical or computational structure of the biological sys-
tem.) Then, one hopes, on the one hand, that the model can be used to test the understanding of the biological
system. And on the other hand, one hopes that the model — informed by the results of eons of evolution —
can work better than a model uninformed by the biology. The scientist feels that his model is really a model
of what is going on.
The Role of Detail
How detailed then should a model be?
As most everyone at the panel discussion said, “... well that depends”.
Let’s look into what constitutes “detail” and how this relates to the criteria for judgment of models given
above.
It appears to me that the primary measure of detail is the number of relevant parameters one chooses
to consider (this is "the dimension of the state space" if all the parameters are relevant). One wants enough
parameters so that criterion B) — the recognizability of the essential characteristics of the model — is
satisfied, but not so many that structural clarity is lost. [For example, one can model a running person by
two parameters, distance traveled and time elapsed: not enough parameters to help build a robot that
runs like a man. On the other hand, if one adds a number for the wavelength of the runner's hair color, one
can build a robot that looks more like the runner, but this doesn't help if what is relevant is the process of
running.]
Detail occurs in another way (beyond the central consideration of the number of relevant parameters
mentioned above): how detailed should the data one considers be? Here a relevant factor (this was my comment
as a member of the audience at the panel) is that, if one is choosing data points from a region of stability
of the state space — where nearby data points give close orbits and close outcomes after a period of time
or a certain number of recursive steps — then the extra detail is not of much use. One knows “generically”
what will happen. However, if the state space is not well understood then a dense search of states might find
points extremely close to each other with vastly divergent orbits. Here one might have discovered a bifurca-
tion hypersurface in the state space, something which reflects a crucial underlying biological phenomenon.
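A one-dimensional toy example (again my own, purely illustrative) shows how such a dense search can expose this: in the bistable system dx/dt = x - x^3, two initial states that differ by only 2e-6 settle at opposite stable fixed points, because they lie on either side of a separatrix at x = 0.

    # Toy illustration only: nearby states with vastly divergent orbits.
    def settle(x, t_end=30.0, dt=0.01):
        """Euler-integrate dx/dt = x - x**3 and return the final state."""
        for _ in range(int(t_end / dt)):
            x += dt * (x - x ** 3)
        return x

    print("orbit from x0 = +1e-6 settles near", round(settle(+1e-6), 3))
    print("orbit from x0 = -1e-6 settles near", round(settle(-1e-6), 3))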
A common procedure is curve-fitting. The outcome from the biological experiment is a curve on a
screen, or a family of curves for different situations. This family of curves represents the biological behavior
being modeled. It’s a mathematical fact that, for any desired degree of precision, one can (under appropri-
ate mathematical hypotheses) find polynomials of high enough degree to approximate the given family of
curves. The coefficients in these polynomials then become parameters, coordinates in a model state space,
from which one can retrieve with high accuracy the experimental curves. Unfortunately this process, while
succeeding on criterion A) above, usually totally sacrifices criterion B).
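A minimal sketch of this (the "experimental" curve below is an arbitrary stand-in for real data, not measurements from any system discussed at the workshop): a degree-9 polynomial fit reproduces the curve with a small residual error, satisfying criterion A), but its ten coefficients are not parameters of any biological mechanism, so criterion B) is lost.

    # Toy illustration only: curve-fitting satisfies criterion A) but not B).
    import numpy as np

    t = np.linspace(0.0, 1.0, 200)
    response = 1.0 - np.exp(-3.0 * t) * np.cos(10.0 * t)   # stand-in for a measured curve

    coeffs = np.polyfit(t, response, deg=9)                # the model "parameters"
    fit = np.polyval(coeffs, t)

    print("max fit error:", float(np.max(np.abs(fit - response))))
    print("coefficients: ", np.round(coeffs, 2))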
In the end we come to a matter of taste:
Does one want the smallest number of parameters which will give a state space of the model matching the
state space of the biological entity and which picks out the crucial elements that "really make the process
being studied work"?
Or does one want the largest number of parameters for which A) and B) are both satisfied, so that the model
maximizes B) and gives the closest match to the biological entity (even at the risk, proponents of the first option would
say, of muddying the understanding of what is crucial)?
I believe the choice between these two options was the essential topic of discussion of the panel. In each
modeling situation, the neuromorphic scientist will have to decide. So we are back where we started. A good
place to stop.
Position Statement
Written by: Barbara Webb
Abstraction concerns the number and complexity of mechanisms included in the model; a more detailed
model is less abstract. ‘Abstraction’ is not just a measure of the simplicity/complexity of the model however
but is relative to the complexity of the target. Thus a simple target might be represented by a simple, but
not abstract, model, and a complex model might still be an abstraction of a very complex target. Some degree of
abstraction is bound to occur in most model building. How much abstraction is considered appropriate seems
to largely reflect the ‘tastes’ of the modeler: some think we should aim for simple, elegant models; others
for closely detailed system descriptions. What are some of the pros and cons?
Complex models tend to be harder to implement, understand, replicate or communicate. Simpler models
are usually easier to falsify and reduce the risk of merely data-fitting, by having fewer free parameters. Their
assumptions are more likely to be transparent. Another common argument for building a more abstract
model is to make the possibility of an analytical solution more likely. However abstraction carries risks. The
existence of an attractive formalism might end up imposing its structure on the problem so that alternative,
possibly better, interpretations are missed. Sometimes we need to build complex detailed models to discover
what are the appropriate simplifications. Details abstracted away might turn out to actually be critical to
understanding the system. It is important for the users of a model to be aware of what has been left out,
and why. This is particularly important if the model-builder comes from a background (e.g. engineering)
which may lack long and deep exposure to the subject matter of the system (e.g. neurobiology) they intend
to model.
It is important to note that abstraction is not directly related to the level of modeling: a model of a cog-
nitive process is not, of its nature, more or less abstract than a model of channel properties. The amount of
abstraction depends on how many processes are included, not what kind of processes are modeled. Further-
more, the fact that some models — such as neuromorphic aVLSI — are implemented as hardware does not
make them necessarily less abstract than computer simulations. A simple pendulum might be used as an ab-
stract physical model for a leg, whereas a symbolic model of the leg may include any amount of anatomical
detail. To make some further distinctions: detailed models are not necessarily more accurate as you could
have the details wrong; and abstract models are not necessarily more general as what they describe may
fail to be a good representation of any real system. Finally, the term ‘realistic’, though used by some as a
synonym for ‘detailed’, is used by others to refer to the biological ‘relevance’ of a model, i.e. is it meant to
represent some real biological question or is it merely biologically-inspired in some way? Both abstract and
detailed models can be relevant.
13.3
Bioethics Discussion Group
Leaders: Rodney Douglas and Elizabeth Felton
Introduction
The Bioethics Discussion Group met twice during the workshop. The purpose of this group was to explore
some of the ethical dilemmas that we as scientists and engineers will face. Descriptions of the conversations
at each meeting are given below. Please note that a consensus was rarely reached on any topic, and so the
notes below do not necessarily reflect the opinions of all members of the Group.
First Meeting - Monday, July 7, 2003
Participants: Rodney Douglas, Elizabeth Felton, Jorg Conradt, Avis Cohen, Christof Koch, Shana Mabari,
Christy Rogers, Robert Legenstein
The first meeting began by coming to an agreement on what the word ”ethics” means. We decided that ethics
is a science for dealing with issues of morality. Morality was defined as the evaluation of human actions on
the dimension of right to wrong, or good to bad. Morality often rests on the notion of responsible action and
the question is to whom the responsible action is owed.
Three main topics to discuss were decided upon: Brain Machine Interfaces, Military Funding of Research
and Artificial Intelligence. A sub-topic under AI was: How Much Control Should Machines Get? Only the
first two topics were touched upon in this first session.
We started by referring to the June 19, 2003 Nature News Feature article, ”Remote Control,” which was
mentioned in one of the talks during the first week of the workshop and had been posted in the break room for
participants to read. The article discusses neuroengineering research and issues surrounding the acceptance
of military funding.
It was proposed that all of the technologies discussed in the article (such as Brain Machine Interfaces)
are ethically neutral, in so far as they are (so far) unable to take autonomous sentient action, and so cannot be
considered morally accountable. However, the development and application of that technology by humans
is subject to morality and ethics. The question ”What is the responsibility of scientists?” was posed and
examples such as The Manhattan Project were discussed.
The meeting concluded that ethical issues cannot be absolutely decided. However, it is very important
for the safe progress of humanity, that interest and debate of these ethical issues should be promoted. The
meeting accepted the task of promoting such discussions amongst their colleagues, and with the public, in
the context of their home institutes.
Second Meeting - Monday, July 14, 2003
Participants: Rodney Douglas, Elizabeth Felton, Avis Cohen, Marshall Cohen, Shana Mabari, Christy
Rogers, Robert Legenstein, Steve Kalik, Jorg Conradt, Sven Behnke, Xiuxia Du, Nima Mesgarani,
Ralph Etienne-Cummings
The second meeting began with reviewing the definitions of ethics and morality that were agreed upon
in the previous discussion. It was again posed that the development of the technology in and of itself is
probably neutral, but since it is performed by an agent, that agent has to take some responsibility for what is done
with it.
The focus of this discussion was on Artificial Intelligence. Some questions posed were: ”What are the
implications in relation to ethics and morality? And, Is there a concern at all?” One question concerned the
notion of individual sovereignty. We accepted that such ’sovereignty’ is an individual right, and the degree
of infringement of this right was one possible moral test of an action. Whether any artificial intelligence should
respect such sovereignty, the factors supporting sovereignty, and the question of who or what may express sovereignty
were briefly explored. For example, we discussed such issues as: "What do we think we are as intelligent
humans? What has brought this about? Why do we think we have this special right?"
We considered the question of whether it is possible in principle to create a form of AI that can control
our lives. We noted that simple forms of such AI already exist. For example, GPS based controllers land
airplanes, and their appropriate actions affect the lives of millions of humans per day. That these devices
are not morally accountable, seems to turn on their reactive natures, and their lack of autonomous decision
making (or intention).
Another test case of morally relevant technology is sensor implants, which are already used to directly
monitor the physiological states of soldiers. Although many individuals (such as athletes and diabetics)
would be happy to accept these implanted sensors because they contribute to the subject's quality of life,
the question arises whether such detailed data could be used for direct control of the subject, and so constitute a
violation of sovereignty. For example, the military plans to use such sensors to evaluate the performance of
soldiers during combat. It is a simple step to extend the functionality of these implants to include effectors
such as micro-infusion pumps for the delivery of drugs, and stimulating electrodes that could be located in
reward/punishment structures. If the sensor-effector loop is beyond the control of the subject, such implants
have a profound potential for abuse. Thus, implant technology appears to be the start of a dangerous
road toward the misuse of innovation by one group to control the behavior of another. However, we noted that
television already provides a technology that is used to affect what people believe, and so a precursor of the
threat raised by implants is already being experienced in society. A counterargument to this discussion was
that society at large still maintains sovereignty, in that current technologies are unable to directly cause our
harm or death by their own (autonomous) decisions.
We discussed whether or not forms of AI can have their own morality and ethics. Some questions posed
were: "Can ethics be discovered? That is, is ethics a natural kind, and so something that can be learned by observation? If
so, can an AI device learn ethics and abide by it? Alternatively, are ethics given to us by God? Or are they
arbitrary?" In the latter cases, ethics must be imposed on AI devices, and the question arises whether this is
possible, and if possible, whether such implantation is reliable. We recognized that if ethics is not given by God, and
so absolute, then ethics is a consequence of group behavior, and so what is ethical in one society may not
be in another. Presumably, such moral relativity would also pertain to AI.
The discussion concluded with members giving suggestions about the best ways to evoke discussion and
education amongst the public about our research and its potential implications. Proposals, such as forming
discussion groups with our collaborators, talking to student groups and getting involved with activities such
as Brain Awareness Week were discussed.
Third Meeting - Thursday, July 17, 2003
Participants: Elizabeth Felton, Brian Scassellati, Shana Mabari, Jorg Conradt, Kerstin Preuschoff, Barbara
Webb, Jochen Braun, Tim Pearce, Chuck Higgins, Mitra Hartmann
The group met informally over dinner to discuss some of the ethical issues that Brian Scassellati has
encountered in his robotics research.
13.4
The Future of Neuromorphic VLSI
Leader: Chuck Higgins
The purpose of this discussion group, which met only once for less than an hour due to scheduling
constraints, was to explore the future of Neuromorphic VLSI, both as a medium for biological modeling and
as a commercial product technology. The discussion group leader took the position of advocatus diaboli in
order to generate debate.
Due to the composition of the group, the discussion mainly centered on neuromorphic VLSI as a
biological model. After much discussion, we arrived at the following areas in which neuromorphic VLSI
is extremely valuable for biological modeling:
Exploration of computation with imprecise elements. Analog VLSI is an ideal medium in which to ex-
plore the ideas of collective computation among many imprecisely matched elements; there was gen-
eral agreement that there is a valid analogy to neural computation.
Recurrency. In the case of a spike-based recurrent system, digital computer simulations can become dif-
ficult due to the complex timing issues involved. Neuromorphic (not necessarily analog) VLSI is an
excellent medium in which to experiment with such systems in real time.
Complex systems. While neuromorphic systems in the past have not been terribly complex, it is when these
systems reach a high level of complexity that the greatest benefits are attained.
Real-world interaction. Whenever a system has to interact with the real world in real-time, computer-based
implementations can be insufficient.
Continuous-time systems. The implementation of a continuous-time biological algorithm on a continuous-
time VLSI medium is clearly superior to a discrete-time simulation with its potential of temporal
aliasing and other artifacts.
13.5
Practical Advice on Testbed Design
The goal of this discussion group was to generate some practical advice on the design of testbeds used to test chips.
The discussion centered on a PCB implementation, issues in selecting components to decrease noise, and
ways to efficiently acquire and generate analog input/output signals. Some comments raised
in the discussion were:
• Some of the better PCB fabrication companies require a netlist in addition to the Gerber files generated
from the layout.
• It was highly recommended to use Xilinx FPGAs to control digital circuitry.
• An easy way to get signals into and out of the chip efficiently is ribbon cable. Its cross-talk is
minimal, and it simplifies layout of the PCB. For analog input signals, A/D converters can be used
when interfacing to a computer; for analog output, D/A converters are best.
• A cheap source of important boards is a company called CBM. The best way to interface with these
boards is to write code in C++ and run it under DOS, because this helps with real-time operation.
• One suggestion was to standardize the board design for all chips, so that you always use a certain pin
for power and ground, certain pins for analog signals, and others for digital signals. It is always possible to create a
PCB that interfaces to the standard one if the design requires flexibility, and standardization eliminates
having to build a big general-purpose board de novo every time a chip needs testing (a hypothetical sketch of such a standard pin map is given below).
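A purely hypothetical sketch of such a standard pin map follows; the connector size, pin numbers and signal names are invented for illustration and were not agreed upon by the group.

    # Hypothetical standard test-board pin assignment (illustrative only).
    STANDARD_HEADER = {
        1: "VDD",          # power always on pin 1
        2: "GND",          # ground always on pin 2
        3: "analog_in_0",  # analog inputs on low-numbered pins
        4: "analog_in_1",
        5: "analog_out_0", # analog outputs next
        6: "analog_out_1",
        7: "digital_io_0", # digital control / handshake lines last
        8: "digital_io_1",
    }

    def pin_for(signal):
        """Return the header pin that carries a given signal name."""
        for pin, name in STANDARD_HEADER.items():
            if name == signal:
                return pin
        raise KeyError(signal)

    print("ground is on pin", pin_for("GND"))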
Overall, the discussion was extremely dynamic and was continued in the lab, with people describing their
chip, its needs and constraints, and the board they designed to accomplish this goal. I believe that this
discussion group should be carried out every year to help orient new students to the demands of aVLSI
design other than the chip itself.
13.6
Teaching Neuroscience With Neuromorphic Devices
This discussion group took place on July 11, 2003. The following are the meeting minutes:
Tony opens the conversation by listing four important problems:
1. How can a neuron generate behavior?
2. Economics – when you use an artificial system in the school, it may be cheaper to use a synthetic
device than to sacrifice animals or to use some other technique
3. Importance of neuroscience to the public – most high-school kids can name a bone in their body, but
probably can't name a brain region.
• People who finish high school are the same ones who vote, and they won't be able to make
smart decisions about funding for neuroscience research.
• This also influences the nature vs. nurture argument and related laws – when the government can intervene
in a child's life.

4. Learning techniques – in the traditional school system, you have to be the kind of person who can
acquire info through hearing and writing. Other people are kinesthetic learners, and they tend to fall
through the cracks and think they are “not good students”
Steve - Take advantage of the ’gee whiz’ factor - get them hooked in high school, or even earlier than
high school. The key point is to generate interest and excitement towards the sciences. The goal is to awaken
a passion for discovery and understanding. Whenever you expose kids to neuroscience - even if they don’t
learn anything, they’ll be excited, and it may influence major and career options later.
Avis - There’s a potential danger in discussing neurons to behavior because we don’t necessarily know
the relationship. Things are more complicated than just neuron to behavior, there’s also the environment and
the mechanics. A neuron doesn’t generate behavior, there’s other things too.
Barbara - Braitenberg’s vehicles book is a very useful approach to teaching neuroscience because you’re
immediately seeing the relation between neurons and the behavior of the creature. Can see that they work in
simulation.
Tony - Simon’s “Sciences of the Artificial” – Simon talks about ants. What generates the nervous system
that generates behavior could be very simple. But when you put the ant in a complex environment, seems to
exhibit complex behavior. The apparent complexity of its behavior may actually be a result of the complexity
of the environment.
Guy - I was a tutor at a very good high school in Boston, and there was only one page in their biology
textbook that listed parts of the brain. The medulla, cortex, cerebellum, midbrain each got one sentence.
Giacomo - We should distinguish between two separate issues: (1) We should teach neuroscience by
making it appealing to young students (e.g. it is a mystery, one of the few unexplored territories, how do we
compute, what is consciousness, etc.) (2) We should get people excited using robotics and hands on demos
(less neuroscience, more neuromorphic engineering).
Barbara - Braitenberg addresses this very well - he sneaks the relation between neurons and
behavior up on you.
Steve - There are other ways of teaching kids about neuroscience - there are outreach programs that last
about a week. For example, the Society for Neuroscience runs Brain Awareness Week, where neuroscientists
teach a science class about sensory, motor, and neural systems, and more generally about the structure and
function of the brain. They begin by touching a real brain, followed by 30-40 minutes of discussion - the "gross"
factor really gets kids hooked at the beginning. If you have a robot, you somehow want to link it with the
”gross” factor. This ”gross” factor has two benefits: (1) it creates a memory ”hook” that facilitates long term
recall, and (2) it links the world of technology engineering and designed systems to the world of biology.
This will be important in the long term, particularly as society starts taking on additional responsibilities
through biological and biomedical engineering.
Tobi - Introduces his chip the ”physiologist’s friend.” - he has had a hard time getting this accepted by the
neuroscience community for reasons that are unclear to him. Three big issues in neuroscience: Reduction,
refinement, replacement.
Giacomo - In Zurich there's the Brain Fair outreach event every year, a two-day exhibition
showing advances in neuroscience, chips, robots, physiological measurements, etc. There are also "open
days” where students from high-school come to visit our labs.
Barbara - in the UK, whatever you do, it’s important to offer teachers a lesson plan. Show them - ”here
are the experiments you do, etc.” You can’t just hand teachers a box and expect them to know how to use it.
Avis - Have to ”break-in” (in a good way) kids who might be intimidated by touching. One problem is
that kids are often afraid to touch things.
Steve - The touching factor is very interesting, because the kids who were most grossed out by touching
the brain were the ones who were the most engaged later on. In particular, I’ve made presentations where
kids dissected a (freshly) preserved cow’s eye. Before the dissection starts I compare the structure of the eye
to the parts of a camera. Although the kids are often concerned about touching the eye, it’s amazing to watch
their transition from ginger dissection of the eye to excited interest in reading text that’s enlarged by viewing
through the lens they’ve just extracted. Suddenly things go from ”Yuck” to ”Cool!” This strong emotional
experience results in strong engagement. The next step after examining this ”front-end” could be to engage
students in thoughts about the ”film” of the eye, the retina, which produces different signals sent to the brain.
Tobi (www.ini.unizh.ch/˜tobi/friend/) - describes the cells that are implemented on the Physiologist's Friend:
horizontal, bipolar, ganglion, simple, etc. The chip is a fairly accurate simulation of retinal cells and
cortical simple cells. It's very nice because it's very simple - it's stand-alone, and has only one control (for
volume). It has 800 transistors and 7 phototransducers, and is meant to be a replacement animal. So far they've
sold 8 at $500-$700 each; it costs them $400 to produce, because the chips in it are expensive - it
uses a $200 chip. They have also developed a teaching guide to use with the chip, which is important for
inexperienced users. However, there have been a number of hassles because you have to deal
with returns, repairs, etc.
Tobi - shows something for people who don’t want a chip - it’s a Java Web Start version of the Physiol-
ogist's Friend, with automated updates. Daniel Kiper and Harvey Karten are using it as homework to train students
how to plot receptive fields and how to recognize different cell properties. Software is so easy to
maintain and distribute, especially this way.
Steve - Kids all know how to use computers so this is a great tool.
Tobi - One exercise is to give the kids mystery cells — it’s very hard to determine what stimulus they
respond to. Any hardware is a pain.
Barbara - The trouble is that software doesn’t seem as real as hardware.
Tobi - Kids are into computer games. So you should tap into that.
Steve - Software is something the kids can handle themselves.
Tobi - How about hardware for teachers, software for students.
Barbara - Interaction with the environment is critical. When teaching perception, kids are given inter-
active exercises and they have to figure out for themselves what is going on. They enjoy that, because it also
shows them something about themselves, not just visual illusions. Second point – schools are strapped for
cash.
Giacomo - Hardware for teachers, software for students partially solves the money problem.
Tony now asks: What is the single most important thing about neuroscience that students
should learn before leaving high school or university?

Avis - CPGs are important to understand integration of sensory and motor behavior.
Guy - When I learned about CPGs I found it fascinating because I could make direct comparisons
between sports and playing the piano, etc.
Giacomo - The brain is not an isolated entity, it depends on the environment and on the body. You must
think of the entire system - the brain is embodied. For example, show demonstrations with robots that use
the same rules in different environments that generate different types of emergent behavior. Or show/teach
demos that use the same algorithm on different robotic platforms that generate different behaviors.
Steve - Show how you take many small elements, put them together, and they will demonstrate emergent
properties.
Guy - I thought something that’s particularly important is signal transduction and the ideas of conver-
gence and divergence in the brain. And I second the importance of the fact that neurons are embodied.
Barbara - Getting across the problem of going from perception to action is very important. This prob-
lem is only appreciated when you try to build something. Have to get across to students that there’s no
homunculus - how do you build something if you don’t have a homunculus?
Avis - It’s important to get across the concept that development of the nervous system requires input, and
that normal development can be disturbed by distorted input. An organism is not specified by its DNA alone.
Nature vs. nurture
Tobi - Context of perception within space and time. How the brain fills in information, for example,
visual illusions. People see and feel what they expect.
Avis - This is culturally dependent.
Barbara - Drawings and paintings - in order to do good drawings and painting you have to turn off
recognition - one way to do this is to turn a picture upside down. You can also train yourself to turn off color
constancy, so that you should be able to see blue in shadows outside. Edge detection is a property of the
visual system - you can see edges in a painting where they don’t really exist in nature.
Peter - Learning is a really important concept to get across - Hebbian learning structure of the brain.
Rodney - Collective computing is an important concept. You can imagine a game (of course it could be
put in a kit so it could be sold) in which each child represents an element in a distributed system. You'd have
students acting as sensors and as actuators, and they could pass information between them via tokens. There
would be constraints on spatial processing - i.e., each student could pass tokens only to a limited number of other students. At the end of
the game the system would come to a decision about something. One can get inspiration by looking at business
games.
Chapter 14
Personal Reports and Comments
The following is a list of individual reports and comments that we received from the workshop participants.
In past years participants' feedback has proven to be extremely useful for improving various aspects of
the workshop. This year too there are many constructive comments that we plan to take into account in
the organization of future workshops.
Sven Behnke
I am aware that although this was the first time to the Telluride workshop for me, it is not quite the first time that this workshop
has been held. In order to release my writer’s block with regard to this piece of prose to be conserved in the workshop’s report
for posterity, I diverted some of the scarce time the workshop leaves to its strained participants to browse diagonally through what
the poor souls that had faced this task before had come up with. What I found is that in all likelihood there is nothing original
left to be said about the workshop. The uniqueness of the workshop’s concept, the prime opportunities it offers for establishing
collaborations and all the other things that may possibly be said have been uttered already in all shades of eloquence and enthusiasm
that the human tongue and soul are capable of. Moreover, all the well-considered suggestions that I felt compelled to come up with
in order to make this excellent workshop even better (e.g., the plenary talks were too long or the chairs too uncomfortable, depending
on the perspective you would like to adopt) have been made before and seemingly went unheeded. Of course, I fully understand
that the sponsors of this workshop require these statements so that they can be really sure that their money was spent well (it
certainly was). They even sent a representative this year to check what was done. I understand that, and here I am bowing to this
necessity for the greater glory of neuromorphic engineering. And in this very moment it dawns on me that indeed Rolf Mueller in
the Telluride Report 2001 made a suggestion, which was quite probably unique and by virtue of that (and only that) worthy to be
made: There should be a work-group that takes the cumulative personal reports of all the previous reports and designs an ELIZA-
like system (implemented in sub-threshold analog hardware, if you must) which can produce sincere, convincing, one of a kind,
turn-key personal reports by cunningly rearranging text fragments from this documentary of the vast ocean of human experience that
the narrow valley of Telluride encompasses. I understand perfectly well that not any reader, or hardly any reader, will have as much
fun reading this piece as I had adapting it and I sincerely apologize for having given in to this whim instead of performing my duty
without complaining. Take it as a rare manifestation of German humor, not necessarily worth archiving in the annals of the 2003
Telluride workshop of neuromorphic engineering.
Katsuyoshi TSUJITA
I was very encouraged to meet many researchers from various backgrounds. This workshop was a good chance to
communicate with each other.
At the workshop, some researchers have a theoretical control background, and I had a chance to discuss our
study with them. However, they could not really grasp it. One reason is my poor English. But the other, which is more serious, is that it is not
clear 'why neuromorphic' or 'why biologically inspired robotics.'
The idea common to the various research fields is that there are three items in the system: body, controller and environment.
From the point of view of conventional robotics or control theory, the environment around the system is a disturbance. They (or
we?) have considered that the system is given and that the preferred motion, that is, the reference, is also given. In order to control the body,
the controller is designed to follow the reference against the disturbance. It is 'a closed system.' However, this type of controller
cannot adapt, because the environment itself changes dynamically, and it also changes as a result of the behavior of the body.
Imagine that a robot moves around on rough terrain: the environment around the robot is no longer the same
as the initial one. Furthermore, its physical properties essentially cannot be identified. But creatures can adapt to variations of the
environment and can generate their own references. So the environment is not a disturbance for them, but something with which they strongly
interact and from which they generate their own references. In this sense, for creatures, body, controller and environment
are all inside the loop and form a system; the environment is not outside but inside. This type of system is called an open system, and
the dynamic interactions among its parts generate the creature's behavior and its capability of adaptation. That must be a principle.
For a robotics researcher, the details of the neuronal system are not so fascinating; the principle, which many biologists and
physiologists have made clear, is more and more important. Therefore, I feel it would be better to make the lectures an introduction
to the principles. Of course, the results are important, but the next step toward collaboration across fields, or a breakthrough, will come
only from the principles.
Matthias Oster
Coming from an institute that takes a large part in organizing the workshop, the year is divided into a pre- and a post-Telluride section,
with Telluride as the highlight in between. This raises high expectations for the workshop when one gets the chance to attend for
the first time. Looking back after the workshop, I have to say that it has exceeded my expectations: inside an
elementary school, a crowded place of equipment, chips and computers, crawling biobots, state-of-the-art technology and discussing
scientists with their coffee mugs is created. Everything is held together with quick-and-dirty hacks, hot glue and duct tape. As an
old-fashioned German engineer, one would expect this to be the most ineffective place to work. As a neuromorphic engineer, you
find that creativity is born exactly here. It comes from the interaction of excellent people who come together to play with their
toys and tools. From one side it might look as if they use the opportunity to spend three weeks in a beautiful mountain
environment and build fancy robots. And this is exactly what the workshop is about. But the tools played with here are
state-of-the-art developments, the principles discovered and used are at the edge of current science, and the children are professionals.
Recurrent, time-continuous, biologically inspired networks in hardware are among the most complex systems to understand (and we
are not even close to understanding their biological counterparts), a nightmare to every conventional designer. The environment chosen
for this workshop seems to be the right approach to solving such complex problems, following a tradition that might have started
once at Caltech and has inspired a neuromorphic community for over a decade. For me this inspiration is the main output of the
workshop: motivation to research, to explore things even if they look simple at first view; additionally, cooperations and contacts
that I hope will continue to evolve in the future, and some new items in the 'neuromorphic engineer's toolbox' of techniques,
chips to tweak, transistors to control. This year put more emphasis on biological research, with talks and discussions between
the 'heads' of the field. Maybe that cut into the time for the workgroups and for making the projects 'run'. On the
other side, it strengthened the connection with neuronal research and laid down the biological background. It showed that the
'build-and-play' approach explores current questions and that interaction takes place in both directions. Let's continue this tradition.
Peter Asaro
I found the Telluride Neuromorphic Workshop to be an incredibly stimulating and educational experience. I found myself working
on interesting projects and developing new skills completely outside of my previous experiences. My own background is in philos-
ophy of mind, history of cybernetics, and computer science. I had no previous experience in electronics, mechanical engineering
or microcontrollers. Yet, I found myself debugging circuit boards with an oscilloscope, building linear actuators out of Legos, and
programming different microcontrollers.
My main complaint is that the first week was completely overwhelming. While it was certainly nice to have the exceptional
Computational Neuroscience talks, they resulted in a completely exhausting schedule. On the other hand, having these in the first
week left more time for working on projects later on.
I was involved in four principal projects: Hero's Robot, Navigation by Motion Parallax, Sensory Fusion of Vision and
Proprioception, and Four-legged Gaits for Locomotion.
The first project I was involved in was the construction of a replica of Hero of Alexandria’s robot, using Legos and a weight.
As a fan of Hero, I found this to be very interesting and rewarding from both a historical and technological perspective. I was glad
to see that the organizers and fellow participants had a genuine interest and respect for the work that has preceded their own.
The most involved project for me was one using motion parallax from a scanning retina to gather depth information and drive
a Koala robot towards a target (Vision Chips Workgroup). My own contributions included a Lego linear actuator for the eye (not
used) and coding and wiring an ATMEL microcontroller to drive a scan motor and control the aVLSI parallax chip, and send data
to the Koala. I also programmed the Koala to navigate based on the data collected.
The most intellectually challenging project involved fusing vision data with proprioceptive data in order to keep a walking robot
from tripping over obstacles. This was a much more daunting technological goal, but we had a very interesting series of discussions
about various mathematical models for doing this. Ultimately, we did implement a very interesting, if highly simplified, robot model
which worked quite well using visual flow and reinforcement learning with eligibility traces.
Finally, we took the same quadruped robot as in the previous experiment, and explored various functions for controlling its gait.
We were able to make some nice analytic insights, but it would probably have been better to figure out how to implement a Central
Pattern Generator of some sort to control the gait instead.
I also greatly enjoyed the various discussion groups. I really liked the idea of the aVLSI Tutorial, though I found that it was
too time-consuming in relation to the other projects in which I was involved. I think if I were able to return next year, I would
definitely pursue that tutorial.
Karl Pauwels
The Telluride workshop has been an amazing experience in many different ways. As a young researcher with limited experience,
the informal contacts with experienced people enabled me to learn to approach problems in a much more practical manner. My
experience so far was restricted to theoretical modelling. The availability of a large amount of advanced equipment allowed me to
build on this and to learn how to apply this knowledge to solve real-world problems. It is clear that the workshop has reached a
very advanced state over the years and most of the lectures are very high-level. Luckily there are already plenty of tutorial sessions
available but, in my opinion, these should be expanded in the future. Another small criticism is the limited number of workstations
for the large number of participants. I would advise future participants to bring their own laptop for smoother working. Finally,
it has been said many times before but it cannot be said enough: Telluride is an incredibly beautiful place, the nature is amazing,
the people are friendly, ... It is a wonderful place to spend three weeks and to indulge yourself completely in the neuromorphic
experience.
Chiara Bartolozzi
I found the workshop a really useful experience; the thing I appreciated most was the possibility to meet and communicate with
highly qualified scientists, who usually are not so accessible to students, even during conferences and other meetings.
Working on projects that are related to my field of interest with people from other labs leads to a wider understanding of the
issues, and it sometimes brings new ideas for developing my research.
The only criticism I could make of the workshop, if it can be called a criticism, is that it offers too many interesting possibilities, so it
is difficult to concentrate on only one or two topics. But I also like this aspect of the workshop, because it gave me the chance to get
an overview of most of the topics studied in neuroscience.
Christy Rogers
The workshop exceeded my expectations. I feel I have learned so much about the neuromorphic field and more importantly I
have seen how the different areas and disciplines all tie together. Even though the CNS lectures fell in the first week of the
workshop, postponing the start of the workgroup projects, I was still able to learn a lot. It is this hands-on component that is key.
Applying knowledge is the best way to really understand. Working with people from other labs provided an opportunity to learn
new techniques and methods. This element makes the workshop unique and definitely needs to be preserved. I have a lot of great
material to share with my lab when I return.
I have one suggestion that might be beneficial to future workshops. It is hard to absorb all of the information when a
presentation exceeds an hour. An hour of presentation with half an hour slotted for questions might be a good balance. I know it can
be difficult and it takes more time to make a shorter talk, but a short talk full of the best material will be much better remembered.
Anyone interested in more details can have an offline discussion. I do realize, however, that if a talk stimulates a lot of good discussion,
the time limit should not be adhered to so tightly that the discussion has to be cut short. Flexibility is good
when it is not an excuse for the speaker not to worry about making a concise presentation.
Guy Rachmuth
I have thoroughly enjoyed the neuromorphic workshop. The range of projects and topics covered in three weeks was very
impressive. The strength of the workshop is the quality of the people leading the workgroups, and their willingness to work with
novices to bring them up to speed quickly. Examples of implemented models of biological systems such as silicon cochleas, retinas,
motion chips, and I&F neuronal networks were very impressive. I especially enjoyed learning about AER from its inventors, and
realizing that it is crucial for my own research. An equally important aspect of the workshop was the ability to meet and get to
know the community of Neuromorphic Engineering. The contacts I have made in the workshop will hopefully help facilitate closer
interactions and collaborations.
I think that having a central fund to provide electrical components and the like would have been helpful. It wasn't always
clear when a DIGIKEY order was going to be placed, and by the time you actually needed a part late in the third week, it was too
late. I would also try to get the workgroups streamlined, meeting every day during the first week so that people are up and running on
the project by the second week. Overall, I would highly recommend that people in my lab come and experience the atmosphere of
Telluride and the great interactions that occur between the people.
R. Jacob Vogelstein
The 2003 Telluride Neuromorphic Engineering Workshop was the most incredible educational/professional experience I have ever
had. Almost all of the lectures were interesting, relevant, and designed for a student audience, which is not usually the case when
I attend seminars held at my university. Furthermore, the Telluride environment encouraged open discussion between students and
faculty, so I always felt comfortable asking questions both on-line (during the talks) and off.
Even more important than the lectures, however, are the social and professional relationships that are cultivated at Telluride.
Spending three weeks together in a small town forces everyone to get to know each other, both students and faculty alike, and I have
made friends with a number of students who will soon become my professional peers. Additionally, I have gotten to know most
of the preeminent faculty in the field of neuromorphic engineering, something inimitable at any shorter or larger conference.
The workgroup projects that were developed at Telluride were extremely important for me—not necessarily because of their
scientific merit per se, but rather because they afforded the opportunity to work alongside most of the leaders of this field and
their top students. Even working on my own chip in that environment was more productive than it would have been at home because
I was constantly engaging in discussions and debates about the relative merit of various design decisions and research directions.
This year in particular I was told that we had less time for projects than usual, due to the scheduling of Terry’s computational
neuroscience group, and while this certainly affected our project’s output, it did not seem to adversely affect the benefits of working
on projects with other people who bring new perspectives and new ideas.
I will definitely recommend the Telluride Neuromorphic Workshop to all of my colleagues back home, and I have only good
things to say about my time here. I am honored to have been selected for this year’s Workshop and I hope I am invited to attend
again, sometime in the not-so-distant future. Thank you very much for creating and maintaining this incredible experience.
Teresa Serrano Gotarredona
I think the workshop is nice because of the interaction between neuromorphic hardware people and neuroscientists. But, as a result,
the level of the specialized talks in both hardware and biology is kept low to make things understandable to both kinds of people. These
generic talks are very useful: the engineers may find inspiration for practical problems by emulating biology, and biologists may
use hardware as a model to understand how biology works. However, as an electrical engineer I missed more
specialized talks on neuro-inspired hardware in this workshop. I would like talks explaining the circuits in more detail, their potential problems, the
circuit design techniques, the tolerance to mismatch, etc. I guess that neuroscientists may miss more specialized talks on their
subjects as well. To sum up, I think that the low level of specialization of the talks is practical and should be maintained, but at some point
you might organize parallel tracks with highly specialized talks in hardware and in biology. The atmosphere of the workshop is very nice,
which helps people to interact and collaborate with each other. This is very positive and enriching. Finally, I would like to
thank the organizers for their great effort and for the great opportunity they give to the students to learn fascinating things.
Appendix A
Workshop participants
Telluride Participants
------------------------------------------------------------------------
Organizers:
* Avis Cohen <http://www.life.umd.edu/biology/cohenlab/>, University
of Maryland -
* Christof Koch, Caltech -
* Giacomo Indiveri <http://www.ini.unizh.ch/˜giacomo>, Institute of
Neuroinformatics -
* Ralph Etienne-Cummings <http://bach.ece.jhu.edu/˜etienne>, Johns
Hopkins University -
* Rodney Douglas <http://www.ini.unizh.ch>, Institute of
Neuroinformatics -
* Shihab Shamma, University of Maryland -
* Terrence Sejnowski <http://www.cnl.salk.edu/CNL>, Salk Institute -
CNL -
* Timmer Horiuchi <http://www.isr.umd.edu/˜timmer>, University of
Maryland -
Technical Personnel:
* Alice Mobaidin, INE -
* David Lawrence <http://www.ini.unizh.ch>, Institute of
Neuroinformatics -
* Elisabetta Chicca, Institute of Neuroinformatics -

* Jorg Conradt <http://www.ini.unizh.ch/˜conradt/>, Institute of
Neuroinformatics -
* Kathrin Aguilar Ruiz-Hofacker, Institute of Neuroinformatics -

* Matt Cheely <http://www.glue.umd.edu/˜mcheely>, University of
Maryland -
* Pam White, Maryland -
* Richard Blum, Georgia Institute of Technology -
* Richard Reeve, Institute of Neuroinformatics -

Guest Speakers:
* Andre van Schaik <http://www.eelab.usyd.edu.au/andre/>, University
of Sydney -
* Ania Mitros <http://www.klab.caltech.edu/˜ania>, Caltech -

* Barbara Webb, -
* Bernabe Linares-Barranco <http://www.imse.cnm.es/˜bernabe>,
Instituto Microelectronica Sevilla -
* Bert Shi <http://www.ee.ust.hk/˜eebert>, Hong Kong University of
Science and Technology -
* Brian Scassellati <http://www.cs.yale.edu/˜scaz/>, Yale University
-
* Chuck Higgins <http://www.ece.arizona.edu/˜higgins>, University of
Arizona -
* David Anderson <http://www.ece.gatech.edu/research/labs/cadsp/>,
Georgia Tech -
* Hiroshi Kimura <http://www.kimura.is.uec.ac.jp>, The University of
Electro-Communications -
* Jochen Braun <http://www.pion.ac.uk/members/braun/braun.htm >,
Institute of Neuroscience, University of Plymouth -
* Kevan Martin, Institute of Neuroinformatics -
* Kwabena Boahen <http://www.neuroengineering.upenn.edu/>, UPenn -

* Laurent Itti <http://iLab.usc.edu>, USC -
* Malcolm Slaney <http://www.almaden.ibm.com/cs/people/malcolm>, IBM
Almaden Research Center -
* Mark W. Tilden <http://www.solarbotics.net www.wowwee.com>,
Institute for Physical Sciences/Hasbro Toys R&D -

* Mitra Hartmann, Caltech -
* Nici Schraudolph <http://n.schraudolph.org>, ETH Zurich -

* Orly Yadid-Pecht <http://www.ee.bgu.ac.il/˜Orly_lab>, Ben-Gurion
University -
* Robert Legenstein <http://www.igi.tugraz.at/legi>, Technische
Universitaet Graz -
* Steven Greenberg <http://www.icsi.berkeley.edu/˜steveng>,
International Computer Science Institute -
* Tim Pearce <http://www.le.ac.uk/eg/tcp1>, University of Leicester
-
* Tobi Delbruck <http://www.ini.unizh.ch/˜tobi>, Institute of
Neuroinformatics -
* Tony Lewis
<http://www.iguana-robotics.com/people/tlewis/tlewis.html>, Iguana
Robotics, Inc. -
Computational Neuroscience Workshop:
* Barry Richmond, Laboratory of Neuropsychology, NIMH/NIH -

* Bob Desimone, NIH -
* Bruce McNaughton, -
* Harvey Karten <http://www-cajal.ucsd.edu>, Dept. of Neurosciences,
University of California at San Diego -
* John Allman, Caltech -
* Michale Fee, MIT -
* Steve Zucker, Yale -
* Wolfram Schultz, -
Applicants:
* Chiara Bartolozzi, -
* Christy Rogers
<http://plaza.ufl.edu/clrogers/HomePage/_private/index.htm>,
University of Florida -
* Edgar Brown, Georgia Institute of Technology -
* Elizabeth Felton, University of Wisconsin - Madison -

* Guy Rachmuth, Harvard University, Div. of Engin. and applied
Science -
* Jacob Vogelstein, -
* Karl Pauwels, -
* Katsuyoshi Tsujita <http://space.kuaero.kyoto-u.ac.jp>, Kyoto
University -
* Kerstin Preuschoff, -
* Matthias Oster, Institute of Neuroinformatics, Zurich -

* Meihua Tai, Polytechnic University -
* Milutin Stanacevic, Johns Hopkins University -
* Nima Mesgarani, -
* Ning Qian <http://brahms.cpmc.columbia.edu>, Columbia University -

* Paschalis Veskos, -
* Paul Merolla, Upenn -
* Peter Asaro, -
* Ralf M. Philipp, Johns Hopkins Univ. -
* Reto Wyss <http://www.ini.unizh.ch/˜rwyss>, Institute of
Neuroinformatics, University/ETH Zuerich -
* Ryan Kier, University of Utah -
* Shane Migliore, -
* Sourabh Ravindran, Georgia Institute of Technology -

* Steven Kalik, Cornell University Weill Grad School Lab for
Visually Guided Behavior -
* Sven Behnke <http://www.icsi.berkeley.edu/˜behnke>, International
Computer Science Institute (ICSI) Berkeley -
* Teresa Serrano-Gotarredona, -
* Xiuxia Du, Washington University in St. Louis -

Appendix B
Equipment and hardware facilities
Category | Description | Parts
avlsi | Agilent E3620A dual power supply | 4
avlsi | BNC cables | 30
avlsi | Class chip reference guide | 2
avlsi | Class chips | 5
avlsi | Fluke multimeters (ebrown, gatech) | 4
avlsi | Function generators | 2
avlsi | Keithley 6485 picoammeter | 4
avlsi | MATLAB interface with GUIs | 4
avlsi | Potboxes | 5
avlsi | Triax cables with 3-lug connectors | 4
computer | 1.5 GHz Dell Pentium 4 computer (256 MB, 20 GB), with IO cards | 4
computer | Computer (1.1 GHz / 512 MB / 60 GB) | 1
computer | Computer (1.2 GHz / 1 GB / 40 GB) | 1
computer | Computer (1.2 GHz / 1 GB / 40 GB) | 1
computer | Computer (1.2 GHz / 512 MB / 60 GB) | 1
computer | Computer (2 GHz / 256 MB / 40 GB) | 2
computer | Computer (900 MHz / 256 MB / 8 GB) | 1
computer | Wireless (802.11b) hub | 1
computer | HP 4500 color printer | 1
computer | 30 GB unformatted hard disks | 5
computer | 8-port Ethernet hubs | 7
computer | Ethernet cables (14 ft.) | 10
computer | Ethernet cables (twisted pair) | 50
computer | Null modems (25-pin) | 2
computer | Serial cables (25-pin plug to 25-pin plug) | 2
computer | Serial cables (9-pin socket to 25-pin plug) | 7
computer | Serial cables (9-pin socket to 9-pin plug) | 2
computer | Serial cables (9-pin socket to 9-pin plug, at least 4.5 m) | 2
demo | Oscilloscope TDS 3054B | 1
demo | 1D tracking chip | 4
demo | Silicon array of I&F neurons (alavlsi1 chip) | 4
demo | Silicon retina / silicon neuron board / CAM board | 1
demo | AER board | 1
demo | Batmobile: Microchipotera on wheels! | 1
demo | Fluke 70 Series multimeter | 3
demo | Function generator HP33120A | 2
demo | Tektronix portable oscilloscope | 2
demo | HP3600 Series DC power supplies | 3
demo | Logic analyzer TLA 604 w/ 68-channel module | 1
demo | Oscilloscope Fluke PM3394 | 1
demo | Oscilloscope TDS420 | 1
demo | Oscilloscope and two PCMCIA cards, property of JHU | 1
demo | Oscilloscopes w/ GPIB interface | 2
demo | PCB for alavlsi1 | 2
demo | PCI-AER board | 1
demo | Power supply BK Precision 1635A | 1
demo | Power supply BK Precision 1711 | 1
demo | Power supply PS282 | 1
demo | Rechargeable batteries | 8
demo | Running legs using CPG chips | 2
demo | U.S. power cables for 110VAC-compatible non-U.S. electronic devices | 10
demo | Variable power supplies | 3
demo | Various vision chips | 1
demo | Serial cable | 1
demo | Adapters (2 banana plugs to BNC socket) | 16
demo | Adapters (2 banana plugs to BNC) | 2
demo | Adapters (RCA socket to BNC plug) for Khepera video cameras | 3
demo | BNC T-junctions (1 plug & 2 sockets) | 8
demo | BNC cables | 5
demo | Clamp | 3
demo | GPIB cables | 12
misc | Bench multimeter | 1
misc | Drill bit set | 1
misc | Electric drill | 1
misc | Glue gun | 1
misc | Measuring tape | 1
misc | NI data acquisition card NI-DAQ6036E | 1
misc | PIC programmer w/ UV eraser and 16F877 PICs | 1
misc | Power strip (Swiss) | 2
misc | Power strips | 30
misc | Precision screwdrivers | 2
misc | Rechargeable batteries (9V blocks) | 7
misc | Rechargeable batteries (AA Mignon) | 30
misc | Screwdrivers | 5
misc | Soldering irons with stands | 3
misc | Toolbox (hammer, pliers, etc.) | 1
misc | Trimmers (pot tweakers) | 10
misc | UV EPROM eraser | 1
misc | VCRs | 3
misc | Video/VGA projectors | 2
misc | Vise | 2
misc | Wire wrap tools | 4
misc | Wire cutters | 8
robots | White Koala from K-Team | 1
robots | Silver Koala | 1
robots | Kheperas from K-Team | 3
robots | Gen I/O turrets from K-Team | 5
robots | Gripper turret for Khepera | 1
robots | K213 - linear vision | 1
robots | Hauppauge WinTV framegrabbers for Robot Group | 4
robots | Koala battery + charger + external supply | 1
robots | 12V 3.2Ah rechargeable batteries for Koala pan-tilt unit | 2
robots | Direct power supply for silver Koala | 1
robots | Direct power supply for white Koala | 1
robots | Koala battery chargers (110V) | 2
robots | Koala battery chargers (110V) + spare battery | 1
robots | Koala battery for silver Koala | 1
robots | Koala battery for white Koala | 1
robots | Lego MindStorms kits | 5
robots | Lego VisionCommand camera modules | 2
robots | Turret with camera for Kheperas from K-Team | 1
robots | Rotating contact for Khepera | 2
robots | Whisker demo setup | 1
software | Chipmunk tools | 1
software | Koala-gcc cross compiler | 1
software | Mathworks Matlab 6.5 (Linux) | 20
software | Microsoft Office 2000 | 2
software | Tanner Tools Design Pro dongles | 4
software | USB dongle for ICC AVR C compiler (Atmel uC) | 1
software | Win98 | 2
supply | Battery charger | 1
supply | Solder wire rolls | 3
supply | Stereo pan-tilt system for Koala robot | 1
supply | Wire wrap wire roll | 1
supply | Various electronic components: resistors, capacitors, cables, test PCBs, etc. | 1
supply | 110VAC -> DC variable voltage converters (500 mA) | 2
supply | Voltage converter 110V to 220V with 220V extension cord | 1
supply | Battery charger (AA Mignon & 9V block) | 2
supply | Battery charger NiMH/NiCd (AA / AAA / 9V E-block) | 1
supply | Extension cords | 20

Appendix C
Workshop Announcement
This announcement was posted on 1/12/2001 to various mailing lists and to our dedicated Web-Site.
------------------------------------------------------------------------
NEUROMORPHIC ENGINEERING WORKSHOP
Sunday, JUNE 29 - Saturday, JULY 19, 2003
TELLURIDE, COLORADO
http://www.ini.unizh.ch/telluride/
------------------------------------------------------------------------
Avis COHEN (University of Maryland)
Rodney DOUGLAS (Institute of Neuroinformatics,
UNI/ETH Zurich, Switzerland)
Ralph ETIENNE-CUMMINGS (University of Maryland)
Timmer HORIUCHI (University of Maryland)
Giacomo INDIVERI (Institute of Neuroinformatics,
UNI/ETH Zurich, Switzerland)
Christof KOCH (California Institute of Technology)
Terrence SEJNOWSKI (Salk Institute and UCSD)
Shihab SHAMMA (University of Maryland)
------------------------------------------------------------------------
We invite applications for the annual three week "Telluride Workshop
and Summer School on Neuromorphic Engineering" that will be held in
Telluride, Colorado from Sunday, June 29 to Saturday, July 19, 2003.
The application deadline is FRIDAY, MARCH 14, and application
instructions are described at the bottom of this document.
Like each of these workshops that have taken place since 1994, the
2002 Workshop and Summer School on Neuromorphic Engineering, sponsored
by the National Science Foundation, the Whitaker Foundation, the
Office of Naval Research, the Defense Advanced Research Projects
Agency, and by the Center for Neuromorphic Systems Engineering at the
California Institute of Technology, was an exciting event and a great
success.
We strongly encourage interested parties to browse through the
previous workshop web pages located at:
http://www.ini.unizh.ch/telluride
For a discussion of the underlying science and technology and a report
on the 2001 workshop, see the September 20, 2001 issue of "The
Economist":
http://www.economist.com/science/tq/displayStory.cfm?Story_ID=779503
GOALS:
Carver Mead introduced the term "Neuromorphic Engineering" for a new
field based on the design and fabrication of artificial neural
systems, such as vision systems, head-eye systems, and roving robots,
whose architecture and design principles are based on those of
biological nervous systems.
The goal of this workshop is to bring
together young investigators and more established researchers from
academia with their counterparts in industry and national
laboratories, working on both neurobiological as well as engineering
aspects of sensory systems and sensory-motor integration.
The focus
of the workshop will be on active participation, with demonstration
systems and hands on experience for all participants.
Neuromorphic
engineering has a wide range of applications from nonlinear adaptive
control of complex systems to the design of smart sensors, vision,
speech understanding and robotics.
Many of the fundamental principles
in this field, such as the use of learning methods and the design of
parallel hardware (with an emphasis on analog and asynchronous digital
VLSI), are inspired by biological systems.
However, existing
applications are modest and the challenge of scaling up from small
artificial neural networks and designing completely autonomous systems
at the levels achieved by biological systems lies ahead.
The
assumption underlying this three week workshop is that the next
generation of neuromorphic systems would benefit from closer attention
to the principles found through experimental and theoretical studies
of real biological nervous systems as whole systems.
FORMAT:
The three week summer school will include background lectures on
systems neuroscience (in particular learning, oculo-motor and other
motor systems and attention), practical tutorials on analog VLSI
design, small mobile robots (Koalas, Kheperas, LEGO robots, and
biobugs), hands-on projects, and special interest groups.
Participants are required to take part and possibly complete at least
one of the projects proposed.
They are furthermore encouraged to
become involved in as many of the other activities proposed as
interest and time allow.
There will be two lectures in the morning
that cover issues that are important to the community in general.
Because of the diverse range of backgrounds among the participants,
the majority of these lectures will be tutorials, rather than detailed
reports of current research.
These lectures will be given by invited
speakers. Participants will be free to explore and play with whatever
they choose in the afternoon.
Projects and interest groups meet in
the late afternoons, and after dinner.
In the early afternoon there
will be tutorials on a wide spectrum of topics, including analog VLSI,
mobile robotics, auditory systems, central-pattern-generators,
selective attention mechanisms, etc.
Projects that are carried out during the workshop will be centered in a
number of working groups, including:
* active vision
* audition
* motor control
* central pattern generator
* robotics
* swarm robotics
* multichip communication
* analog VLSI
* learning
The active perception project group will emphasize vision and human
sensory-motor coordination.
Issues to be covered will include spatial
localization and constancy, attention, motor planning, eye movements,
and the use of visual motion information for motor control.
The central pattern generator group will focus on small walking and
undulating robots.
It will look at characteristics and sources of
parts for building robots, play with working examples of legged and
segmented robots, and discuss CPGs and theories of nonlinear
oscillators for locomotion.
It will also explore the use of simple
analog VLSI sensors for autonomous robots.
The robotics group will use rovers and working digital vision boards
as well as other possible sensors to investigate issues of
sensorimotor integration, navigation and learning.
The audition group aims to develop biologically plausible algorithms
and aVLSI implementations of specific auditory tasks such as source
localization and tracking, and sound pattern recognition. Projects
will be integrated with visual and motor tasks in the context of a
robot platform.
The multichip communication project group will use existing interchip
communication interfaces to program small networks of artificial
neurons to exhibit particular behaviors such as amplification,
oscillation, and associative memory.
Issues in multichip
communication will be discussed.
This year we will also have *200* biobugs, kindly donated by the
WowWee Toys division of Hasbro in Hong Kong.
B.I.O.-Bugs, short for
Bio-mechanical Integrated Organisms, are autonomous creatures, each
measuring about one foot and weighing about one pound
(www.wowwee.com/biobugs/biointerface.html).
This will permit us to
carry out experiments in collective/swarm robotics.
LOCATION AND ARRANGEMENTS:
The summer school will take place in the small town of Telluride, 9000
feet high in Southwest Colorado, about 6 hours drive away from Denver
(350 miles).
Great Lakes Aviation and America West Express airlines
provide daily flights directly into Telluride.
All facilities within
the beautifully renovated public school building are fully accessible
to participants with disabilities.
Participants will be housed in ski
condominiums, within walking distance of the school. Participants are
expected to share condominiums.
The workshop is intended to be very informal and hands-on.
Participants are not required to have had previous experience in
analog VLSI circuit design, computational or machine vision, systems
level neurophysiology or modeling the brain at the systems level.
However, we strongly encourage active researchers with relevant
backgrounds from academia, industry and national laboratories to
apply, in particular if they are prepared to work on specific
projects, talk about their own work or bring demonstrations to
Telluride (e.g.
robots, chips, software).
Internet access will be
provided. Technical staff present throughout the workshops will assist
with software and hardware issues.
We will have a network of PCs
running LINUX and Microsoft Windows for the workshop projects. We also
plan to provide wireless internet access and encourage participants to
bring along their personal laptop.
No cars are required.
Given the small size of the town, we recommend
that you do NOT rent a car.
Bring hiking boots, warm clothes, rain
gear and a backpack, since Telluride is surrounded by beautiful
mountains. Unless otherwise arranged with one of the organizers, we
expect participants to stay for the entire duration of this three week
workshop.
FINANCIAL ARRANGEMENT:
Notification of acceptances will be mailed out around mid April 2003.
Participants are expected to pay a $275.00 workshop fee at that time
in order to reserve a place in the workshop.
The cost of a shared
condominium will be covered for all academic participants but upgrades
to a private room will cost extra.
Participants from National
Laboratories and Industry are expected to pay for these condominiums.
Travel reimbursement of up to $500 for US domestic travel and up to
$800 for overseas travel will be possible if financial help is needed
(Please specify on the application).
HOW TO APPLY:
Applicants should be at the level of graduate students or above (i.e.,
postdoctoral fellows, faculty, research and engineering staff and the
equivalent positions in industry and national laboratories).
We
actively encourage qualified women and minority candidates to apply.
Application should include:
* First name, Last name, Affiliation, valid e-mail address.
* Curriculum Vitae.
* One page summary of background and interests relevant to the
workshop.
* Description of demonstrations that could be brought to the
workshop.
* Two letters of recommendation
Complete applications should be sent to:
Terrence Sejnowski
The Salk Institute
10010 North Torrey Pines Road
San Diego, CA 92037
e-mail:
FAX:
APPLICATION DEADLINE: MARCH 14, 2003
Appendix D
GNU Free Documentation License
Version 1.1, March 2000
Copyright © 2000 Free Software Foundation, Inc.
59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
Everyone is permitted to copy and distribute verbatim copies of this license document,
but changing it is not allowed.
Preamble
The purpose of this License is to make a manual, textbook, or other written document
‘‘free’’ in the sense of freedom:
to assure everyone the effective freedom to copy and
redistribute it, with or without modifying it, either commercially or noncommercially.
Secondarily, this License preserves for the author and publisher a way to get credit
for their work, while not being considered responsible for modifications made by
others.
This License is a kind of ‘‘copyleft’’, which means that derivative works of the
document must themselves be free in the same sense.
It complements the GNU General
Public License, which is a copyleft license designed for free software.
We have designed this License in order to use it for manuals for free software, because
free software needs free documentation:
a free program should come with manuals
providing the same freedoms that the software does.
But this License is not limited to
software manuals; it can be used for any textual work, regardless of subject matter or
whether it is published as a printed book.
We recommend this License principally for
works whose purpose is instruction or reference.
Applicability and Definitions
This License applies to any manual or other work that contains a notice placed by the
copyright holder saying it can be distributed under the terms of this License.
The
‘‘Document’’, below, refers to any such manual or work.
Any member of the public is a
licensee, and is addressed as ‘‘you’’.
A ‘‘Modified Version’’ of the Document means any work containing the Document or a
portion of it, either copied verbatim, or with modifications and/or translated into
another language.
A ‘‘Secondary Section’’ is a named appendix or a front-matter section of the Document
that deals exclusively with the relationship of the publishers or authors of the
Document to the Document’s overall subject (or to related matters) and contains nothing
that could fall directly within that overall subject.
(For example, if the Document is
in part a textbook of mathematics, a Secondary Section may not explain any
mathematics.)
The relationship could be a matter of historical connection with the
subject or with related matters, or of legal, commercial, philosophical, ethical or
political position regarding them.
The ‘‘Invariant Sections’’ are certain Secondary Sections whose titles are designated,
as being those of Invariant Sections, in the notice that says that the Document is
released under this License.
The ‘‘Cover Texts’’ are certain short passages of text that are listed, as Front-Cover
Texts or Back-Cover Texts, in the notice that says that the Document is released under
this License.
A ‘‘Transparent’’ copy of the Document means a machine-readable copy, represented in a
format whose specification is available to the general public, whose contents can be
viewed and edited directly and straightforwardly with generic text editors or (for
images composed of pixels) generic paint programs or (for drawings) some widely
available drawing editor, and that is suitable for input to text formatters or for
automatic translation to a variety of formats suitable for input to text formatters.
A
copy made in an otherwise Transparent file format whose markup has been designed to
thwart or discourage subsequent modification by readers is not Transparent.
A copy
that is not ‘‘Transparent’’ is called ‘‘Opaque’’.
Examples of suitable formats for Transparent copies include plain ASCII without markup,
Texinfo input format, LaTeX input format, SGML or XML using a publicly available DTD, and
standard-conforming simple HTML designed for human modification.
Opaque formats
include PostScript, PDF, proprietary formats that can be read and edited only by
proprietary word processors, SGML or XML for which the DTD and/or processing tools are
not generally available, and the machine-generated HTML produced by some word
processors for output purposes only.
The ‘‘Title Page’’ means, for a printed book, the title page itself, plus such
following pages as are needed to hold, legibly, the material this License requires to
appear in the title page.
For works in formats which do not have any title page as
such, ‘‘Title Page’’ means the text near the most prominent appearance of the work’s
title, preceding the beginning of the body of the text.
Verbatim Copying
You may copy and distribute the Document in any medium, either commercially or
noncommercially, provided that this License, the copyright notices, and the license
notice saying this License applies to the Document are reproduced in all copies, and
that you add no other conditions whatsoever to those of this License.
You may not use
technical measures to obstruct or control the reading or further copying of the copies
you make or distribute.
However, you may accept compensation in exchange for copies.
If you distribute a large enough number of copies you must also follow the conditions
in section 3.
You may also lend copies, under the same conditions stated above, and you may publicly
display copies.
Copying in Quantity
If you publish printed copies of the Document numbering more than 100, and the
Document’s license notice requires Cover Texts, you must enclose the copies in covers
that carry, clearly and legibly, all these Cover Texts:
Front-Cover Texts on the front
cover, and Back-Cover Texts on the back cover.
Both covers must also clearly and
legibly identify you as the publisher of these copies.
The front cover must present
the full title with all words of the title equally prominent and visible.
You may add
other material on the covers in addition.
Copying with changes limited to the covers,
as long as they preserve the title of the Document and satisfy these conditions, can be
treated as verbatim copying in other respects.
If the required texts for either cover are too voluminous to fit legibly, you should
put the first ones listed (as many as fit reasonably) on the actual cover, and continue
the rest onto adjacent pages.
If you publish or distribute Opaque copies of the Document numbering more than 100, you
must either include a machine-readable Transparent copy along with each Opaque copy, or
state in or with each Opaque copy a publicly-accessible computer-network location
containing a complete Transparent copy of the Document, free of added material, which
the general network-using public has access to download anonymously at no charge using
public-standard network protocols.
If you use the latter option, you must take
reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to
ensure that this Transparent copy will remain thus accessible at the stated location
until at least one year after the last time you distribute an Opaque copy (directly or
through your agents or retailers) of that edition to the public.
It is requested, but not required, that you contact the authors of the Document well
before redistributing any large number of copies, to give them a chance to provide you
with an updated version of the Document.
Modifications
You may copy and distribute a Modified Version of the Document under the conditions of
sections 2 and 3 above, provided that you release the Modified Version under precisely
this License, with the Modified Version filling the role of the Document, thus
licensing distribution and modification of the Modified Version to whoever possesses a
copy of it.
In addition, you must do these things in the Modified Version:
• Use in the Title Page (and on the covers, if any) a title distinct from that of
the Document, and from those of previous versions (which should, if there were
any, be listed in the History section of the Document).
You may use the same
title as a previous version if the original publisher of that version gives
permission.
• List on the Title Page, as authors, one or more persons or entities responsible
for authorship of the modifications in the Modified Version, together with at
least five of the principal authors of the Document (all of its principal authors,
if it has less than five).
• State on the Title page the name of the publisher of the Modified Version, as the
publisher.
• Preserve all the copyright notices of the Document.
• Add an appropriate copyright notice for your modifications adjacent to the other
copyright notices.
• Include, immediately after the copyright notices, a license notice giving the
public permission to use the Modified Version under the terms of this License, in
the form shown in the Addendum below.
• Preserve in that license notice the full lists of Invariant Sections and required
Cover Texts given in the Document’s license notice.
• Include an unaltered copy of this License.
• Preserve the section entitled ‘‘History’’, and its title, and add to it an item
stating at least the title, year, new authors, and publisher of the Modified
Version as given on the Title Page.
If there is no section entitled ‘‘History’’
in the Document, create one stating the title, year, authors, and publisher of the
Document as given on its Title Page, then add an item describing the Modified
Version as stated in the previous sentence.
• Preserve the network location, if any, given in the Document for public access to
a Transparent copy of the Document, and likewise the network locations given in
the Document for previous versions it was based on.
These may be placed in the
‘‘History’’ section.
You may omit a network location for a work that was
published at least four years before the Document itself, or if the original
publisher of the version it refers to gives permission.
• In any section entitled ‘‘Acknowledgements’’ or ‘‘Dedications’’, preserve the
section’s title, and preserve in the section all the substance and tone of each of
the contributor acknowledgements and/or dedications given therein.
• Preserve all the Invariant Sections of the Document, unaltered in their text and
in their titles.
Section numbers or the equivalent are not considered part of the
section titles.
• Delete any section entitled ‘‘Endorsements’’.
Such a section may not be included
in the Modified Version.
• Do not retitle any existing section as ‘‘Endorsements’’ or to conflict in title
with any Invariant Section.
If the Modified Version includes new front-matter sections or appendices that qualify
as Secondary Sections and contain no material copied from the Document, you may at your
option designate some or all of these sections as invariant.
To do this, add their
titles to the list of Invariant Sections in the Modified Version’s license notice.
These titles must be distinct from any other section titles.
You may add a section entitled ‘‘Endorsements’’, provided it contains nothing but
endorsements of your Modified Version by various parties -- for example, statements of
peer review or that the text has been approved by an organization as the authoritative
definition of a standard.
You may add a passage of up to five words as a Front-Cover Text, and a passage of up to
25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified
Version.
Only one passage of Front-Cover Text and one of Back-Cover Text may be added
by (or through arrangements made by) any one entity.
If the Document already includes
a cover text for the same cover, previously added by you or by arrangement made by the
same entity you are acting on behalf of, you may not add another; but you may replace
the old one, on explicit permission from the previous publisher that added the old one.
The author(s) and publisher(s) of the Document do not by this License give permission
to use their names for publicity for or to assert or imply endorsement of any Modified
Version.
Combining Documents
You may combine the Document with other documents released under this License, under
the terms defined in section 4 above for modified versions, provided that you include
in the combination all of the Invariant Sections of all of the original documents,
unmodified, and list them all as Invariant Sections of your combined work in its
license notice.
The combined work need only contain one copy of this License, and multiple identical
Invariant Sections may be replaced with a single copy.
If there are multiple Invariant
Sections with the same name but different contents, make the title of each such section
unique by adding at the end of it, in parentheses, the name of the original author or
publisher of that section if known, or else a unique number.
Make the same adjustment
to the section titles in the list of Invariant Sections in the license notice of the
combined work.
In the combination, you must combine any sections entitled ‘‘History’’ in the various
original documents, forming one section entitled ‘‘History’’; likewise combine any
sections entitled ‘‘Acknowledgements’’, and any sections entitled ‘‘Dedications’’.
You
must delete all sections entitled ‘‘Endorsements.’’
Collections of Documents
You may make a collection consisting of the Document and other documents released under
this License, and replace the individual copies of this License in the various
documents with a single copy that is included in the collection, provided that you
follow the rules of this License for verbatim copying of each of the documents in all
other respects.
You may extract a single document from such a collection, and distribute it
individually under this License, provided you insert a copy of this License into the
extracted document, and follow this License in all other respects regarding verbatim
copying of that document.
Aggregation With Independent Works
A compilation of the Document or its derivatives with other separate and independent
documents or works, in or on a volume of a storage or distribution medium, does not as
a whole count as a Modified Version of the Document, provided no compilation copyright
is claimed for the compilation.
Such a compilation is called an ‘‘aggregate’’, and
this License does not apply to the other self-contained works thus compiled with the
Document, on account of their being thus compiled, if they are not themselves
derivative works of the Document.
If the Cover Text requirement of section 3 is applicable to these copies of the
Document, then if the Document is less than one quarter of the entire aggregate, the
Document’s Cover Texts may be placed on covers that surround only the Document within
the aggregate.
Otherwise they must appear on covers around the whole aggregate.
Translation
Translation is considered a kind of modification, so you may distribute translations of
the Document under the terms of section 4.
Replacing Invariant Sections with
translations requires special permission from their copyright holders, but you may
include translations of some or all Invariant Sections in addition to the original
versions of these Invariant Sections.
You may include a translation of this License
provided that you also include the original English version of this License.
In case
of a disagreement between the translation and the original English version of this
License, the original English version will prevail.
Termination
You may not copy, modify, sublicense, or distribute the Document except as expressly
provided for under this License.
Any other attempt to copy, modify, sublicense or
distribute the Document is void, and will automatically terminate your rights under
this License.
However, parties who have received copies, or rights, from you under
this License will not have their licenses terminated so long as such parties remain in
full compliance.
Future Revisions of This License
The Free Software Foundation may publish new, revised versions of the GNU Free
Documentation License from time to time.
Such new versions will be similar in spirit
to the present version, but may differ in detail to address new problems or concerns.
See http://www.gnu.org/copyleft/.
Each version of the License is given a distinguishing version number.
If the Document
specifies that a particular numbered version of this License "or any later version"
applies to it, you have the option of following the terms and conditions either of that
specified version or of any later version that has been published (not as a draft) by
the Free Software Foundation.
If the Document does not specify a version number of
this License, you may choose any version ever published (not as a draft) by the Free
Software Foundation.
ADDENDUM: How to use this License for your documents
To use this License in a document you have written, include a copy of the License in
the document and put the following copyright and license notices just after the title
page:
Copyright © YEAR YOUR NAME. Permission is granted to copy, distribute and/or
modify this document under the terms of the GNU Free Documentation License,
Version 1.1 or any later version published by the Free Software Foundation;
with the Invariant Sections being LIST THEIR TITLES, with the Front-Cover
Texts being LIST, and with the Back-Cover Texts being LIST. A copy of the
license is included in the section entitled ‘‘GNU Free Documentation
License’’.
If you have no Invariant Sections, write ‘‘with no Invariant Sections’’ instead of
saying which ones are invariant.
If you have no Front-Cover Texts, write ‘‘no
Front-Cover Texts’’ instead of ‘‘Front-Cover Texts being LIST’’; likewise for
Back-Cover Texts.
If your document contains nontrivial examples of program code, we recommend releasing
these examples in parallel under your choice of free software license, such as the GNU
General Public License, to permit their use in free software.
Bibliography
[1] R. Etienne-Cummings, J. Van der Spiegel, and P. Mueller, “Hardware implementation of a
visual-motion pixel using oriented spatiotemporal neural filters,” IEEE Transactions on
Circuits and Systems II, vol. 46, pp. 1121–1136, September 1999.
[2] E. Culurciello, R. Etienne-Cummings, and K. A. Boahen, “A biomorphic digital image sensor,”
IEEE Journal of Solid-State Circuits, vol. 38, pp. 281–294, February 2003.
[3] D. H. Goldberg, G. Cauwenberghs, and A. G. Andreou, “Probabilistic synaptic weighting in a
reconfigurable network of VLSI integrate-and-fire neurons,” Neural Networks, vol. 14,
pp. 781–793, July 2001.
[4] M. Mahowald, An Analog VLSI System for Stereoscopic Vision. Boston: Kluwer Academic
Publishers, 1994.