Report to the National Science Foundation:
WORKSHOP ON NEUROMORPHIC ENGINEERING
Telluride, CO
Monday, June 29 to Sunday, July 19, 1998
Shihab Shamma
Avis Cohen
Tim Horiuchi
Giacomo Indiveri
with
R. Douglas
C. Koch
T. Sejnowski

Contents

1 Summary
  1.1 Introduction
  1.2 Progress
  1.3 Future Aims

2 Telluride 1998: the details
  2.1 Applications to Workshops
  2.2 Funding and Commercial Support
  2.3 Local Organization
  2.4 Setup and Computer Laboratory
  2.5 Workshop Schedule

3 Tutorials and Project Workgroups
  3.1 VLSI Basics
  3.2 Floating Gates
  3.3 Interchip Communication
  3.4 Project Workgroups

4 Neuromorphic Robots
  4.1 Introduction
    4.1.1 Infrastructure
    4.1.2 Results
  4.2 Learning: on-line learning using the Khepera robot
    4.2.1 The Network Design
    4.2.2 Work done in Telluride
    4.2.3 Conclusion and future work
  4.3 Olfaction: modeling and experimenting with an artificial nose
    4.3.1 Biological Relevance
    4.3.2 Measurement Set-up
    4.3.3 Combining TNose and Xmorph
    4.3.4 Olfactory Bulb Modeling
    4.3.5 Cell types
    4.3.6 Synapse types
    4.3.7 Results
    4.3.8 Conclusion and Future Work
  4.4 Audition: auditory localization using the Koala robot
  4.5 Vision: view-based navigation using the Khepera robot
    4.5.1 Input Preprocessing
    4.5.2 Neural Network
    4.5.3 Results to Date
    4.5.4 References
  4.6 Vision: interfacing a Silicon Retina to a Koala Robot
  4.7 Neuromorphic Flying Robots
  4.8 Optomotor response with an aerodynamic actuator
  4.9 Visual tracking using a silicon retina on a pan-tilt system
    4.9.1 Experimental setup
    4.9.2 Algorithm
    4.9.3 Results
  4.10 Optomotor Response of a Koala Robot with an aVLSI Motion Chip
  4.11 Locomotion of segmented lamprey-like robots

5 Auditory Processing
  5.1 Introduction
  5.2 Peripheral Auditory Processing
    5.2.1 Analysis of informative features in natural acoustic signals
    5.2.2 Auditory processing with electronic cochlea chips
    5.2.3 Hardware realization of signal normalization, noise reduction, and feature enhancement on the output of a cochlear chip
  5.3 Auditory Localization
    5.3.1 Computation of sound-source lateral angle by a binaural cross-correlation network
    5.3.2 Computation of sound-source lateral angle by a stereausis network
  5.4 Acoustic Pattern Recognition
    5.4.1 Identification of speech in real noisy environments using a model of auditory cortical processing
    5.4.2 Review of current prospects and limitations in speech recognition systems
  5.5 Collaborative Efforts
    5.5.1 Production of spectro-temporal receptive fields using projective-field mapping of address-events from a 1-D sender array
    5.5.2 Directing a binaural robot towards a sound-emitting target
    5.5.3 An auditory complement to visual saliency
    5.5.4 Making Pinna Casts
  5.6 Retro- and Pro-spectives

6 Address Event Representation
  6.1 Introduction
  6.2 AER-based 1-D Stereo Work Group
  6.3 Line-Following Robot Using an Address-Event Optical Sensor
  6.4 2D Address-Event Senders and Receivers: Implementing Direction-Selectivity and Orientation-Tuning
  6.5 One-Dimensional AER-based Remapping Project
  6.6 Simulating AER-Cochlear Inputs With the 1-D AER Vision Chip
  6.7 Serial Address-Event Representation
    6.7.1 SAER Basics
    6.7.2 Pros and Cons of SAER
    6.7.3 SAER cabling standard
    6.7.4 Project 1: The SAER computer interface board
    6.7.5 Project 2: The SAER universal routing block
    6.7.6 Project 3: The SAER-to-AER converter blocks
  6.8 Serial AER Merger/Splitter
  6.9 FPGA Implementation of a Spike-based Displacement Integrator
    6.9.1 The Displacement Integrator
    6.9.2 Digital Implementation
    6.9.3 Testing
    6.9.4 Conclusions

7 Discussion Groups
  7.1 The "What is computation?" Discussion Group
  7.2 Neuromorphic Systems for Prosthetics
    7.2.1 Motivation
    7.2.2 Sensing
    7.2.3 Prosthetics for sensorineural and sensorimotor applications
    7.2.4 Classifying prostheses techniques
    7.2.5 What can neuromorphic systems offer
    7.2.6 References

8 Personal Reports and Comments

A Participants of the 1998 Workshop
B Hardware Facilities of the 1998 Workshop
C Workshop Announcement

Chapter 1
Summary
1.1
Introduction
Neuromorphic engineering is a young field that is based on the design and fabrication of artificial
neural systems, such as vision and hearing chips, head-eye systems, and autonomous robots, whose
architecture and design principles are based on those of biological nervous systems. The goal of
our annual workshop is to bring together young investigators and more established researchers from
academia with their counterparts in industry and national laboratories, including individuals work-
ing on both neurobiological as well as engineering aspects of sensory systems and sensorimotor
integration.
During three weeks in June and July of 1998, the fifth “Neuromorphic Engineering” workshop
was held at the Telluride Summer Research Center (TSRC) in Telluride, Colorado. The work-
shop was directed by the founders of the workshop, Profs. T. Sejnowski from the Salk Institute
and UCSD, C. Koch from Caltech, and R. Douglas from the University of Zürich and the ETH in Zürich, Switzerland, as well as the new co-directors, S. Shamma and A. Cohen, both of the University of Maryland. G. Indiveri, University of Zürich, and T. Horiuchi, Johns Hopkins University, also served as major coordinators, as the new generation of staff begins to be phased into leadership roles.
There were several additional staff and technical assistants, drawn from the various laboratories in-
volved. The workshop hosted a total of 67 participants from academia, government laboratories, and
industry, whose backgrounds spanned physics, robotics, computer science, neurophysiology, psy-
chophysics, electrical engineering, and computational neuroscience (see Appendix A for a complete
listing).
1.2
Progress
As in previous years, the three-week workshop combined tutorials, lectures, and projects on a wide
range of topics. The workshop, however, is evolving with significantly added emphasis on initiating
long-term projects, while using the annual workshops to coordinate interdisciplinary and international
cooperation. Projects have focused on multichip neuromorphic systems that provide basic sensori-
motor reflexes, and simple adaptive and learning behaviours for small neuromorphic robots. While
this new emphasis might have taken time away from the tutorials, it did provide a hands-on experi-
ence that many of the participants found fruitful and stimulating.
The total number of participants was kept down, as this has been found to be important for
good working groups. This year, as last year, we emphasized non-visual sensors, placing a focus on
audition. This shift encouraged the integration of auditory projects with visual guidance projects. A
number of potentially exciting collaborations for future work have sprung up from this integration.
This year we made significant progress on the development of serial address-event representa-
tion (SAER), a technological advance which will facilitate efficient and expanded interchip communication as well as communication between different devices. This groundbreaking work is critical
for building multichip systems and will continue to be stressed.
We also continued to have considerable success with the small mobile Koala robots, equipped with a Motorola 68C311 processor, as a common platform for neuromorphic engineers. The robots, produced and marketed by K-Team of Lausanne, Switzerland, were interfaced with visual and auditory sensors and used for sound tracking and obstacle avoidance, operating either through tethers or autonomously via cross-compiled programs downloaded over a serial port. Other robots implemented
lamprey-like creatures that could generate traveling wave motions, and were used to explore the be-
havior of systems of coupled oscillators. The lamprey-bots will eventually be equipped with vision
and perhaps olfactory sensors. Some of these robots emerged from collaborations established at
previous workshops, and some of the others will continue to be the focus of new collaborations.
As a consequence of the Telluride workshop experience, the community of neuromorphic engi-
neers has now grown to encompass a web of researchers at many universities and companies, both
in the US as well as in Europe. Participants who shared the intense non-stop experience of former
workshops now send students to the workshop, and several have requested an opportunity to return themselves. The community is growing, moving to new institutions, and spawning new collaborations.
1.3
Future Aims
Our specific goals for next year's workshop can be summarized as follows:
1) Expansion of projects on auditory and olfactory processing, and more sophisticated visual perceptual work.
2) Expansion of multi-modality systems that involve attentional mechanisms, sensorimotor reflexes, and adaptive behaviors in small neuromorphic robots.
3) Selection and enhancement of a few long-term collaborative projects that contribute to the current central research questions of neuromorphic engineering, such as applications of floating-gate technology and the fabrication and testing of multichip neuromorphic systems. This is an important step towards making the workshop an annual meeting ground for exploring new and innovative research directions.

Chapter 2
Telluride 1998: the details
Much of the basic organization for Telluride 1998 was identical to the organization we had in place
in previous years, except that everything was more streamlined.
2.1
Applications to Workshops
We announced the workshop via our existing Telluride home-page on the World Wide Web, via
email to previous workshop participants as well as to various mailing lists on January 22, 1998.
Examples of mailing lists to which we announced the workshop are:
* A list that includes the Southern California neural network and neuromorphic engineering community.
* An international mailing list with at least 1,000 subscribers in the neural network/connectionist area.
* A mailing list for computational neuroscience, primarily the group that attends the annual CNS meetings in July.
The text of the announcement is listed in Appendix C.
We received 76 applications, of which we selected 32 as participants for the workshop. We
also invited a few key speakers from academia, government facilities and industry to contribute
presentations as well as participate in the workshop.
The number of well-qualified applicants was high and many of the applicants that were not
accepted would have made good participants. The selection of participants for the workshop was
made by the three main organizers of previous workshops (R. Douglas, C. Koch and T. Sejnowski),
all of whom received copies of the full applications.
We selected participants who had demonstrated a practical interest in neuromorphic engineering;
had some background in psychophysics and/or neurophysiology; could contribute to the teaching
and the practicals at the workshop; could bring demonstrations to the meeting; were in a position
to influence the course of neuromorphic engineering at their institutes; or were very promising
beginners. Finally, we were very interested in increasing the participation of women and under-represented minorities in the workshop, and we actively encouraged applicants from companies. Travel
expenses for participants from industry were paid by the companies.
The final profile of the workshop was satisfactory (see Appendix A). The majority of participants
were advanced graduate students, post-doctoral fellows or young faculty; six participants came from
non-academic research institutions (D.D. Lee, M. Lades, M. Slaney, N. Srour, M. Tilden, A. Wuen-
sche) and one from industry (M. Nomura). Of the 55 selected participants (not counting organizers
and staff), 31 were from US institutions, with the remainder coming from Switzerland (9), United
Kingdom (4), Australia (2), Austria (2), France (2), Canada (1), Germany (1), Israel (1), Japan (1)
and Spain (1). Nine of the participants were women, and one was African-American.
2.2
Funding and Commercial Support
This year we asked participants to pay a registration fee of $250 in order to reduce the workshop
costs. The registration fee accounted mainly for the running expenses of the workshop. The main
workshop costs (including student travel reimbursements, equipment shipping reimbursements, ac-
commodation expenses, etc.) were funded by the following sources:
* The U.S. National Science Foundation (40% of the total funding)
* The Gatsby Foundation, London (23% of the total funding)
* The Engineering Research Center, Caltech (12% of the total funding)
* NASA (11% of the total funding)
* The Office of Naval Research (8% of the total funding)
We would also like to thank the following companies for their support:
* Tanner Research, for providing the VLSI layout tools L-Edit, T-Spice and LVS.
* K-Team, for providing and supporting the Khepera and Koala robots.
* Altera Corp., for providing a UP1 student development board.
* The MathWorks, Inc., for providing the software package MATLAB.
2.3
Local Organization
Much of the workshop organization was handled by an interactive webpage that allowed participants
to “log-in” and select their accommodations (e.g., room type, roommates), enroll in various work-
shops, inform the organizers of any hardware they planned to bring, and request software packages
that they needed. This webpage also provided full accounting details (actual and estimated ex-
penses) to organizers at all times. Information about each participant's expenses and what fraction of these expenses the workshop would reimburse was included.
The exact amount of funding was not expected to be available until after the workshop, at which time a quick, accurate assessment of our expenses would be necessary to speed up the reimbursement process.
The information we gathered through the webpage before the beginning of the workshop al-
lowed us to better organize the lectures and tutorials on the one hand, and to improve the housing arrangements on the other (for example, participants had a chance to choose their condo-mates and
condominium locations even before arriving in Telluride). All of the housing arrangements were
carried out in collaboration with the Telluride Summer Research Center (TSRC), using the work-
shop’s interactive web-pages. By obtaining longer-term contracts with local condominiums, we
were able to provide adequate housing at reasonable rates.

As in previous years, the workshop itself took place in an old, but beautifully renovated, public
school near the center of town. We had four large rooms available for our workshop:
1. for the talks and tutorials,
2. for the aVLSI CAD workstations, personal laptops, an olfactory system and related equip-
ment,
3. for the circuit testing beds, auditory processing equipment and walking robots building/testing
space,
4. for the pan-tilt/silicon retina tracking setup and for building, programming and experimenting
with the Khepera and Koala robots.
The TSRC also rented the school and provided us with a very able local assistant for photocopying, buying supplies, and local public relations.
We interacted with the local community by giving three public talks and a special presentation for the local children's summer program (ages 8 to 14), and by marching under the banner of "Neuromorphic Engineering" in the local 4th-of-July parade. For the second consecutive year we won second place for our imaginative rendering of "Neuromorphic Engineers", losing the first-place honor (again) to the local "Cows of Telluride".
2.4
Setup and Computer Laboratory
The software/hardware setup lasted from Wednesday, June 24 to Saturday, June 27. Appendix B contains the list of all the hardware facilities that were present at the workshop. With the support of two system administrators, the setup of 20 computers went relatively smoothly. The computers
were fully networked and provided various internet services such as remote logins, file transfers,
printing, electronic mail, and a world wide web server.
The computers were divided into three usage areas. The first was a general computer lab for running simulations, designing circuits, running demos, writing papers, and general internet access such as web browsing. Another set of computers was used to control, collect, and process data from robots. The third set of computers was used as VLSI test stations.
Each participant at the workshop was given an account from which they could read and send electronic mail, transfer demonstration programs, and operate them from their home computers through the network. Standard software was also available, including various simulation and design packages which were specifically requested before the beginning of the workshop, such as NEURON, GENESIS, ANALOG, L-Edit, and MATLAB.
Throughout the entire course, we supported workstations, robots, oscilloscopes and other test
devices brought by all participants.
A World Wide Web site describing this year's workshop, including the schedule, the list of participants, its aims, and funding sources, can be accessed at http://www.ini.unizh.ch/telluride98. We strongly recommend that the interested reader browse this site; it contains many photos from the workshop, reports, a list of all participants, etc.
A Web site with information about the 1994, 1995, 1996, and 1997 workshops can be accessed at http://www.klab.caltech.edu/~timmer/telluride.html.
The computer lab proved to be very constructive since it not only allowed participants to demon-
strate and instruct others about their software, but also offered the opportunity for others to make
suggestions and improvements.

A large truck was rented in Pasadena, CA, loaded with computers and chip-testing equipment
from various CNS laboratories at Caltech and was driven to Telluride. In Telluride, heavy-duty
extension cords were strung from six other rooms to provide enough power for all of the computers.
At the end of the course, these computers were returned by truck to Pasadena. Robots and some computers were also shipped from the Institute of Neuroinformatics in Zürich, from the University
of Maryland and from Johns Hopkins University.
2.5
Workshop Schedule
The activities in the workshop were divided into three categories: formal lectures, tutorials and
project workgroups.
The lectures were attended by all of the participants. Lectures were presented at a sufficiently
elementary level to be understood by everyone, but were, nevertheless, comprehensive. The first
series of lectures covered general and introductory topics, whereas the later lectures covered topics
on state-of-the-art research in the field of neuromorphic engineering. It was found that two hour-and-a-half lectures, rather than three one-hour lectures, in the morning session were better for covering a topic in depth while allowing adequate time for questions and discussions.
The afternoon sessions consisted mainly of tutorials and workgroup projects, whereas the evenings
were used for the discussion group meetings (which would often continue late into the night).
Sundays were left free for participants to enjoy the Telluride scenery. Typically, participants
would go hiking. This was a valuable opportunity for people to discuss science in a more informal
atmosphere and catch up on the various projects being carried out by the other participants.
The schedule of the workshop activities was as follows:
Sunday 28 June
  Arrive in Telluride; condo check-in
  Evening:
  17:00 Welcome cocktail party (to be held at Christof Koch’s place! - details to follow)
  19:00 Tour of the workshop facilities (@ the schoolhouse)

Monday 29 June
  Morning lectures:
  Welcome and Workshop Introduction (Christof Koch & staff)
  Coffee and donut break (as well as tea, bagels and croissants!)
  Basic Biophysics and Neuron Models (Christof Koch)
  Go eat lunch!
  Afternoon:
  14:00 - 16:00 Workgroup descriptions:
  * Basic aVLSI tutorial overview (G. Indiveri, J. Kramer)
  * Floating-gate workgroup overview (C. Diorio)
  * AER workgroup overview (T. Horiuchi)
  16:00 - 18:00 Discussion group proposals/organization
  Evening:
  19:30 Introductions by participants

Tuesday 30 June
  Morning lectures:
  Circuits in Neocortex (Rodney Douglas)
  Neuromorphic aVLSI Systems (Giacomo Indiveri)
  Afternoon:
  14:00 Lecture: Neuromorphic Behaving Systems (P. Verschure)
  Workgroup descriptions:
  * aVLSI vision sensors for behaving robots (G. Indiveri)
  * Silicon Hearing Chips (A. van Schaik)
  * Locomotion/Robots workgroup overview (M. Tilden, A. Cohen)
  Evening:
  19:30 Joint aVLSI & Floating Gate Tutorial Lecture (Chris Diorio)

Wednesday 1 July
  Morning lectures:
  Computation in the Auditory System (Leslie Smith)
  Computational Vision for Sensorimotor Control and Navigation (Jim Clark)
  Afternoon workgroups:
  1:30 pm - 2 pm: Robot tools introduction
  2 pm - 4 pm: Floating Gate Tutorial
  4 pm - 6 pm: aVLSI Tutorial
  Evening:
  19:30 On-chip Learning Discussion Group
  Individual work on projects

Thursday 2 July
  Morning lectures:
  Towards Design Principles for Locomotion: lessons from biology (Avis Cohen)
  Acoustic Sensor Technology for Army Applications (Nino Srour)
  Afternoon workgroups:
  12 pm - 2 pm: First Locomotion Project Group meeting
  2 pm - 4 pm: Floating Gate Tutorial (lecture room); Auditory Discussion Group
  4 pm: First AER Project Group meeting/lecture (VLSI room)
  4 pm - 5 pm: aVLSI Tutorial
  Evening:
  17:00 BBQ at Telluride Lodge
  20:00 - 21:00 Public lecture (Rodney Douglas)
  Individual work on projects

Friday 3 July
  Morning lectures:
  10:30 am - 12:00 pm: Design, Evolution and Analysis of Biologically-Inspired Control Systems for Walking (Randy Beer)
  12:30 pm - 2:00 pm: About Cochlear Models, the Psychophysical Scale for Speech Perception and Speech Recognition (Andreas Andreou)
  Afternoon workgroups:
  2 pm - 4 pm: Floating Gate Tutorial
  3:30 pm: Auditory Project Group
  4 pm - 6 pm: aVLSI Tutorial
  Evening:
  6:00 pm TNose Project Group
  7:30 pm What is Computation? Discussion Group
  9:30 pm Neuromorphic Engineering for Prosthetics Discussion Group
  Preparations for Independence Day Parade!
  Individual work on projects

Saturday 4 July
  Fun:
  9:00 am Meet at schoolhouse to prepare for parade
  9:30 am Line up at the parade start (Colorado and Willow)
  10:00 am Independence Day Parade! followed by BBQ lunch (Town Park)
  Evening:
  After dark: Independence Day fireworks
  Individual work on projects

Sunday 5 July
  Free

Monday 6 July
  Morning lectures:
  The AER Communication Protocol (Kwabena Boahen)
  Spike-based Computation (Wolfgang Maass)
  Afternoon workshops:
  2 pm - 4 pm: Auditory Group meeting; Floating Gate Tutorial
  4 pm: AER Project Group meeting
  4 pm - 6 pm: aVLSI Tutorial
  Evening:
  7:30 pm On-Chip Learning Discussion Group
  7:30 pm Genetic Encoding Discussion Group
  9:30 pm Neuromorphic Engineering for Prosthetics Discussion Group
  11:00 pm Neuron modeling tutorial
  Individual work on projects

Tuesday 7 July
  Morning lectures:
  Analysis of the Lamprey Locomotion System (Avis Cohen)
  Modeling the Swimming of Eel-like Creatures (Thelma Williams)
  Afternoon workshops:
  12:00 - 2:00 pm: Locomotion Project Group
  2 pm - 4 pm: Floating Gate Tutorial
  4 pm: AER Project Group
  4 pm - 6 pm: aVLSI Tutorial
  Evening:
  7:30 pm Visual Motion Discussion Group (Alan Stocker: “Computation of Optical Flow in a Cooperative Manner - an aVLSI Implementation”)
  7:30 pm On-Chip Learning Discussion Group
  Individual work on projects

Wednesday 8 July
  Morning lectures:
  Recurrent Neuronal Circuits in Cortex (Rodney Douglas)
  VLSI Implementations of Pattern Generators and Intersegmental Coordination (Stephen DeWeerth)
  Afternoon:
  1:00 pm River rafting trip (be sure to sign up both in the robot room and at Telluride Sports!)
  Evening:
  7:30 pm What is Computation? Discussion Group
  9:30 pm Neuromorphic Engineering for Prosthetics Discussion Group
  Individual work on projects

Thursday 9 July
  Morning lectures:
  Silicon Retinas and CMOS Imagers (Tobi Delbruck)
  Silicon Motion Sensors (Chuck Higgins, Reid Harrison)
  Afternoon workshops:
  12:00 - 2:00 pm: Locomotion Project Group
  2 pm - 4 pm: Floating Gate Tutorial
  4 pm - 6 pm: aVLSI Tutorial
  Evening:
  17:00 BBQ at Telluride Lodge
  20:00 - 21:00 Public lecture (Mark Tilden)
  Individual work on projects

Friday 10 July
  Morning lectures:
  Modeling Area MT Cell Response (MT-MST models) (Masahide Nomura)
  To be announced (VLSI neuMOS) (Brad Minch)
  12:00 - 1:00 pm: Machine Olfaction (Tim Pearce)
  Afternoon workshops:
  2 pm - 4 pm: Floating Gate Tutorial
  4 pm - 6 pm: aVLSI Tutorial
  5:00 pm - 6:30 pm: AER Project Group Lecture - Arbiters I (lecture room)
  6:30 pm - 7:00 pm: AER Project Group progress meeting (lecture room)
  Evening:
  7:00 pm Helicopter competition video!
  7:30 pm Flying Robot Discussion Group (lecture room)
  Individual work on projects

Saturday 11 July
  Morning lectures:
  Bottom-Up and Top-Down Models of Visual Attention (Christof Koch)
  Premotor Theory of Attention (Jim Clark)
  Robot competition! (Mark Tilden)
  Afternoon workshops:
  2 pm - 4 pm: Floating Gate Tutorial
  Evening:
  Optional overnight mountain hike
  Individual work on projects

Sunday 12 July
  Free day
  Evening:
  8:00 pm Review of progress

Monday 13 July
  Morning lectures:
  Attention and Intention in Parietal Cortex (Terry Sejnowski)
  Oculomotor Control Systems (Terry Sejnowski)
  Afternoon workshops:
  2 pm - 4 pm: Floating Gate Tutorial
  4 pm - 6 pm: aVLSI Tutorial
  5:00 pm - 6:30 pm: AER Project Group Lecture - Arbiters II
  Evening:
  7:30 pm Attention and Motor Control Discussion Group (Timmer Horiuchi: “An aVLSI, Spike-Based Attentional System”)
  9:30 pm On-Chip Learning Discussion Group
  Individual work on projects

Tuesday 14 July
  Morning lectures:
  The Auditory vs. the Visual Pathway (Shihab Shamma)
  Energetics of Information Processing (Andreas Andreou)
  Afternoon workshops:
  12:00 - 2:00 pm: Locomotion Project Group
  2 pm - 4 pm: Floating Gate Tutorial
  4 pm - 6 pm: aVLSI Tutorial
  5:00 pm: AER Project Group research presentations (1. Chuck Higgins, 2. Steve DeWeerth)
  Evening:
  7:30 pm Visual Motion Discussion Group (Chuck Higgins: “Location of Optical Flow Singular Points using an aVLSI sensor”)
  Individual work on projects

Wednesday 15 July
  Morning lectures:
  Analog VLSI Building Blocks for an Electronic Auditory Pathway (Andre van Schaik)
  Computational Models of Audition (Malcolm Slaney)
  12:30 - 2 pm: Attention and Motor Control Discussion Group
  Afternoon workshops:
  2 pm - 4 pm: Floating Gate Tutorial
  4 pm - 6 pm: aVLSI Tutorial
  4:00 pm AER Project Group - Arbiters III & project progress discussion
  Evening:
  7:30 pm What is Computation? Discussion Group
  9:30 pm On-chip Learning Discussion Group
  Individual work on projects

Thursday 16 July
  Morning lectures:
  8:30 am Assessing Observability of Visual Tracking Techniques (Nicola Ferrier)
  10:30 am AER Project Group meeting - Arbiters IV
  Afternoon workshops:
  12:00 - 1:00 pm: Locomotion Project Group
  1:00 pm - 2:00 pm: “The Dynamics of Discrete Networks: Implications on Self-Organization and Memory” (Andy Wuensche)
  2 pm - 4 pm: Floating Gate Tutorial
  4 pm - 6 pm: aVLSI Tutorial
  Evening:
  17:00 BBQ at Telluride Lodge
  20:00 - 21:00 Public lecture
  Individual work on projects

Friday 17 July
  Morning:
  Group presentations/demos
  Afternoon:
  Work on project group and personal reports
  18:00 Some robot-room computers are shut down!
  Evening:
  20:00 Dinner at Leimgruber’s with award ceremony

Saturday 18 July
  Morning & afternoon:
  12:01 am ALL COMPUTERS ARE SHUT DOWN!!!
  Pack up and load the truck

Sunday 19 July
  Check-out and departure (remember: no default housing on this evening!)

Chapter 3
Tutorials and Project Workgroups
The three tutorials were an opportunity for participants to acquire hands-on experience with the
dominant technology upon which Neuromorphic Engineering is based. These tutorials were crucial
for disseminating practical knowledge amongst the participants, especially for newcomers to learn
the basics and quickly come up to speed with the rest of the group. For many biologists and com-
putational modelers, this was their first friendly opportunity to learn about analog VLSI technology
with an eye towards neural computation.
In the following, the major contributors to the tutorials and projects are listed in parentheses.
3.1
VLSI Basics
(Liu, Indiveri and Kramer)
In this practical, we covered topics ranging from transistor physics and characteristics to simple
circuits and circuit technology. We also demonstrated the use of software tools to simulate circuits,
to do circuit layout, and to use layout-vs-schematic (LVS) verification tools.
Participants attended daily lectures on these topics and either did hands-on measurements on
transistor characteristics with chips that were prefabricated for this tutorial or they learned to use
various software tools on the workstations provided. In the hands-on labs, participants also learned
to use different test equipment and software for collecting data from the chips.
Most participants found that the three weeks allocated for this practical were insufficient for
them to fully grasp the material, although they did report that they had sufficient information to
pursue this exercise on their own. We provided the participants with documentation of the lab
exercises, lecture notes, and public domain circuit simulation software for future use at their own
university/research institution.
3.2
Floating Gates
(Diorio, Minch and Hasler)
This practical consisted of extensive lectures on the physics of hot electron injection, tunnel-
ing, and high-voltage circuits for the control of floating gates. An excellent collection of notes was
provided. Experiments were done with floating gate chips, fabricated for the workshop tutorial,
containing silicon-synapse circuits that adapted over a time period measured in days. In the labora-
tory sessions the use of pulse-modulation techniques for incremental control of floating gate voltage
12

was investigated. Participants of this tutorial already had a basic knowledge of circuit design, but
found the topic of (analog) floating gate storage new and exciting.
3.3
Interchip communication
(Boahen and Horiuchi)
As in previous years, Kwabena Boahen gave a comprehensive series of lectures on pulse-
generation and integration, formalism for self-timed communication protocols, and intra-chip ar-
bitration. One-dimensional retina sender-receiver chips were available for hands-on investigation,
as well as two-dimensional systems for demonstration.
This year there was increased participation in this tutorial and in the projects involving this tech-
nology (see also Section 6.2). This reflects the fact (which also emerged last year) that there is a
growing need for the development of interchip communication techniques to design more elaborate
multichip systems and to interface these new sensors to robots and other digital systems.
3.4
Project Workgroups
The workgroup meetings gave people with common interests a chance to get together and discuss
their area in detail, to establish the most pressing questions in that area, to determine the state-of-
the-art, and to make plans for future developments. Most importantly, project workgroups gave
many of the participants the opportunity to investigate their research topics practically, using the
infrastructure and the tools offered by the workshop. In particular, the workgroups typically con-
sisted of participants of different experience levels, different scientific backgrounds, and different
institutions.
The project groups covered topics that ranged from artificial olfactory systems to flying robots.
In order to provide a comprehensive summary, we divided the projects into three main categories: neuromorphic robots, auditory processing, and address-event representation. Some of the groups spent
many nights agonizing over the various technical points, only briefly summarized in the following
section.

Chapter 4
Neuromorphic Robots
4.1
Introduction
(Paul F.M.J. Verschure)
This year the Neuromorphic robotics working group turned out to attract many of the regis-
tered participants in the workshop, continuing a trend from the previous year. During
the 1997 workshop a concentrated and coherent effort was made to introduce mobile robots in the
workshop program. The collaborations established during 1997 have led to a number of publica-
tions by different workshop participants on experiments which originated at the workshop and the
organization of a workshop on its central theme at the conference Neural Information Processing
Systems 98. This demonstrates that the combination of mobile robots and neuromorphic engineering can provide an ideal vehicle to address issues central to the field, such as the real-world evaluation of neuromorphic devices and the importance of addressing system-level issues involving hybrid
(digital-analog) solutions. Most importantly, however, this approach facilitates collaborative efforts
between different subgroups active during the workshop. Hence, one goal of this year's working
group was to strengthen the collaborative components of the workshop using mobile robots as its
medium. Based on the discussions and projects developed during 1997 the practical aim of the
workshop was to organize subprojects in such a way that they could converge on a common goal:
“a complete system”. This plan is illustrated in Figure 4.1.
The aim was to realize a system which would combine visual, auditory, and olfactory neuromor-
phic systems in the control of a mobile robot. This goal was a natural step building on the results of
the previous workshop, where these sensory systems were investigated in isolation. Roughly, the task would consist of making Roswell (Mark Tilden), a six-foot-tall walking robot, move towards sound
sources, associate the basic orienting responses with particular visual features, and subsequently
redisplay these orienting responses in the absence of the auditory cues. These learned responses
would be triggered by the detection of a specific smell by the olfactory system. This ambitious goal
could motivate individual participants of the workshop, facilitate activities in different subgroups
by providing goals, and allow collaborations across their boundaries.
4.1.1
Infrastructure
In order to facilitate the realization of the overall goal of the working group the following steps were
taken before the workshop:
* Previous events have shown that the major problems in realizing projects at a more mature scale are found at the interfaces between the different technologies involved. In anticipation of this problem, discussions were initiated before the workshop on the basic interface properties of both visual and auditory neuromorphic sensors.

* In order to facilitate the use of mobile robots, different software packages were put together, tested, and documented using HTML (with the assistance of Regina Mudra, a participant of the 1998 workshop, and Mark Blanchard, who participated in 1997 and is presently working at INI-Zürich), based on well-tested environments used in our own research:
  - IKhep: a small-scale C-based package with a MATLAB graphical user interface that provided an easily accessible environment for experiments with the microrobot Khepera and the mobile robot Koala (with Regina Mudra).
  - KhepDac: a C-based application implementing neural learning systems which are interfaced to a mobile robot.
  - RetinaMove: a C program with a MATLAB GUI for integrating an aVLSI retina into the control of a Koala robot. This program could be cross-compiled to the local CPU of the robot to support autonomous operation, and it helped people get started using aVLSI devices on the Koala (with Giacomo Indiveri and Mark Blanchard).
  - IQR421: a distributed simulation environment for the construction of large-scale neural systems which can be interfaced with external devices (robots, cameras, aVLSI retinae, etc.). IQR421 was used to realize the larger-scale projects (with Mark Blanchard).

* In order to support the activities in the working group, a number of lectures were prepared dealing with the pragmatics of this type of research, its basic methodology, and a number of issues relating to neural forms of behavioral control.

* K-Team (Lausanne, Switzerland), the producer of the mobile platforms used, again provided a large number of Koala and Khepera robots in support of the workshop.

Figure 4.1: Neuromorphic robotics working group plan. The subgroups were: Learning (Paul Verschure), Visual Saliency (Paul Verschure, David Klein, Andreas Andreou), Audition: Cochlea (Andre van Schaik), Vision: Retinae (Giacomo Indiveri, Tobi Delbruck), Olfaction: TNose (Tim Pearce), Locomotion (Avis Cohen, Mark Tilden) and Neurobots: Roswell (Mark Tilden); the mission: win the 4th of July parade and build a "complete" system.
4.1.2
Results
Overall the working group has been very effective in creating a large number of constructive activ-
ities which are discussed in the different individual reports. Many of these activities had an impact
on future research pursued by the participants. Central subprojects aimed at the realization of the
“complete” system were:
1: A software based system which allowed Koala to orient towards sound sources (see Sec-
tion 4.4).
2: Orienting responses to auditory cues implemented on Koala using aVLSI cochleas and a
software based interface (see Section 5.1).
3: Interfacing Koala to existing vision chips (see Sections 4.6, 4.10, 6.3).
4: The development of a tracking system using an active pan-tilt unit and a silicon retina (see
Section 4.9).
5: The development of a model of the olfactory bulb used in the classification of olfactory
cues derived from an artificial nose (see Section 4.3).
6: The development of an autonomous mobile robot based on models of lamprey locomotion,
using central pattern generators (see Section 4.11).
In conclusion, the overall goal of a “complete” system was not achieved, for very instructive reasons, although tremendous progress was made in the subprojects. The main problems were of a practical nature. For instance, project 1 prepared the ground for the inclusion of aVLSI cochleas in the orienting task, which were initially tested in project 2. Project 1 not only realized all the control necessary but also provided the basic software interfaces for the cochleas using IQR421 (this involved a large amount of on-the-spot software development in an infrastructure which was not set up for that purpose). The aVLSI cochlea system, which performed very well in isolation, was, however, not easily accessible for further experiments, given the risk of damaging the setup. This excluded further integration. A similar problem occurred with the 2D aVLSI retina, with which very interesting experiments were performed in isolation, but which unfortunately did not survive the last week of the workshop. A similar fate befell Roswell, which burned out a number of its motors during a test run. Hence, our ambitious goal of a “complete” system provided
a very important reality check for the technology we attempt to develop. It showed that at a small
scale a lot can be achieved, a confirmation of last year's experience, and as such it facilitated these
activities tremendously. But, a large part of our future research is at the level of systems, and exactly
at this level a number of serious limitations of our present capabilities were revealed. Each failure,
however, provides an important lesson. One observation is that the failures were in the domain of
the technology employed, and not in that of our concepts for sensory processing, or behavior con-
trol. Our observations on, and especially experience of, the above technical limitations are driving
ongoing research. The above sub-projects have extended beyond the duration of the workshop and
are pursued in a further collaboration between Zahn, Klein, Verschure and others.
The secondary goal of winning the 4th of July parade was also not achieved, despite the inclusion of a “dancing” Koala couple (with Andre van Schaik). Again, second place was reached.
Further inquiry revealed that the jury had anticipated a better choreography of the human partici-
pants. The different demos prepared for the annual open-house for local school children proved to
be very effective and received a strongly positive response.
The working group on Neuromorphic Robotics has provided an important platform to evaluate
our present technology. For instance, about 15 participants were introduced to the package IQR421, and with several of these participants further collaborations have been established based on this software environment. Presently, at the computer science department in Graz, Austria (Maass), the package KhepDac is used in a course on models of learning applied to mobile robots.
4.2
Learning: on-line learning using the Khepera robot
(Ranit Aharonov-Barki, Yuri López de Meneses, Nicol Schraudolph)
We designed an artificial neural network which used Reinforcement Learning (RL) to learn to
control a Khepera minirobot. The task of the robot was to find a target area while avoiding obstacles
scattered around the arena. The robot had 8 IR sensors and a one-dimensional retina outputting an
array of light intensities. A light source is placed on the goal area, so the robot can use the retinal
input to track its position. The robot receives a reinforcement signal from its environment. Bumping
into obstacles (by saturating the IR proximity sensors) is penalized, and reaching the goal is highly
rewarded, but most of the time no reinforcement signal is received.
Most previous work in RL used Q-learning, a paradigm whereby the neural network learns
to match state-action pairs in order to maximize the reward. In our case, however, the number of actions is infinite, because the motor commands are essentially continuous. Therefore we use a neural network
which issues stochastic motor commands as a function of a sensory state and, in parallel, estimates
the expected reward from the sensory state and the current action (motor command). By learning
the parameters of the stochastic motor process, the network can produce appropriate behavior for a
given situation, ranging from deterministic (exploitation) to stochastic (exploration).
4.2.1
The Network Design
The network we designed is composed of two sub-networks that implement two different functions.
The action network maps the sensory state (input) to motor commands (i.e motor velocities), while
the prediction network maps sensory input and planned action to a scalar called expected reward or
utility, U (see Figure 4.2).

Figure 4.2: Structure of the neural network. Sensory inputs project to four neurons coding the mean and sigma of the left and right motor speeds; inputs and motor commands also feed a unit computing the expected reward.

Both networks are trained using backpropagation (BP) but with a different
propagated error. The prediction network’s goal is to predict the future reward associated with
taking an action at a certain state, and is thus trained using a standard RL prediction error. The action
network’s goal is to maximize U and is therefore trained by back propagating a constant positive
error (1 by default). So there are two processes acting in parallel, one working to find a policy to
maximize U and the other trying to estimate the U of that policy. We expect that the combination of
the two processes will lead the first network to estimate the real reward of the optimum policy (U*),
as computed by the second network.
The network consists of 23 input nodes - the 8 readings of the IR sensors and 15 retinal inputs
which are low-pass filtered readings sub-sampled every tenth value. The sensory input layer is
propagated to 4 neurons that code the mean and standard deviation of the motor velocities. The
actual velocities are then calculated by:

L_{motor} = \mathrm{MOTOR\_GAIN} \, (\mu_L + \sigma_L \, \mathrm{triangle}())    (4.1)

R_{motor} = \mathrm{MOTOR\_GAIN} \, (\mu_R + \sigma_R \, \mathrm{triangle}())    (4.2)

where triangle() is a random process with a triangular probability distribution, \mu can have a value between -1 and 1, and \sigma is between 0 and 1. This is equivalent to the motor neurons having a synaptic weight of MOTOR_GAIN from the \mu neurons and a weight of c = \mathrm{MOTOR\_GAIN} \cdot \mathrm{triangle}() from the \sigma neurons. The value of the connection c is randomly chosen at each iteration and stored for the error backpropagation step. The second network propagates the sensory input as well as the actual motor commands through a single layer of weights, onto a linear-output neuron that computes the estimated utility U.
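As a concrete illustration of Eqs. 4.1 and 4.2, the following Python sketch samples stochastic motor speeds. The gain value and the exact shape of the triangle() noise process are assumptions made for illustration; the report does not specify them.

```python
import numpy as np

MOTOR_GAIN = 20.0  # illustrative gain; the actual value is not given in the report

def triangle(rng):
    """Zero-mean noise with a triangular density on [-1, 1].

    Implemented as the sum of two uniforms; the report only states that
    triangle() has a triangular probability distribution.
    """
    return rng.uniform(-0.5, 0.5) + rng.uniform(-0.5, 0.5)

def motor_commands(mu, sigma, rng):
    """Sample left/right motor speeds per Eqs. 4.1-4.2.

    mu    : array of two means in [-1, 1] (left, right)
    sigma : array of two spreads in [0, 1] (left, right)
    Returns the speeds and the sampled noise values c, which must be
    stored for the error backpropagation step.
    """
    c = np.array([triangle(rng), triangle(rng)])
    speeds = MOTOR_GAIN * (mu + sigma * c)
    return speeds, c

rng = np.random.default_rng(0)
speeds, c = motor_commands(np.array([0.5, 0.2]), np.array([0.1, 0.8]), rng)
```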
The error in the utility prediction is computed as \delta(t) = U(t) - U(t-1) + r(t), where r(t) is the current reinforcement signal or reward. This estimation error is then back-propagated to the sensor input layer and to the motor-command neurons. The weight update equation on the sensor-to-utility network is a standard, one-layer gradient descent. The same error is used to modify the weights in the motor-command-to-utility layer (the dashed arrows in Figure 4.2). The update on the second network is a little trickier, since the error is not the same (we are trying to maximize the utility) and some of the weights are not modifiable. We back-propagate a constant, positive error \epsilon = 1, which is equivalent to a desired target value of U + \epsilon. This error is backpropagated through the network, but it only modifies the weights in the sensor-to-\mu and sensor-to-\sigma layers.
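The sketch below is one minimal, single-layer reading of these update rules. The learning rate, the layer shapes, and the exact path along which the constant error is backpropagated are illustrative assumptions, not the group's actual code.

```python
import numpy as np

class ActorCritic:
    """Toy version of the two sub-networks: a prediction (utility) network
    trained with delta(t) = U(t) - U(t-1) + r(t), and an action network
    trained by backpropagating a constant positive error (epsilon = 1)."""

    def __init__(self, n_in=23, lr=0.01, rng=None):
        self.rng = rng if rng is not None else np.random.default_rng(0)
        self.W_mu = self.rng.normal(0, 0.1, (2, n_in))   # sensors -> mu
        self.W_sig = self.rng.normal(0, 0.1, (2, n_in))  # sensors -> sigma
        self.W_u = self.rng.normal(0, 0.1, n_in + 2)     # sensors+motors -> U
        self.lr = lr

    def utility(self, x, motors):
        """Linear estimate of expected reward from sensors and motor commands."""
        return self.W_u @ np.concatenate([x, motors])

    def update(self, x, motors, c, U_t, U_prev, r_t, motor_gain=20.0):
        z = np.concatenate([x, motors])
        # prediction network: standard one-layer gradient step on delta(t)
        delta = U_t - U_prev + r_t
        self.W_u += self.lr * delta * z
        # action network: backpropagate a constant error eps = 1 through the
        # utility layer to the motor units, then to the mu and sigma weights
        eps = 1.0
        err_m = eps * self.W_u[-2:]              # error reaching each motor unit
        self.W_mu += self.lr * np.outer(err_m * motor_gain, x)
        self.W_sig += self.lr * np.outer(err_m * motor_gain * c, x)
        return delta
```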
4.2.2
Work done in Telluride
Presently we have coded the neural network and update algorithm in C, using KrOS, the Khepera
Operating System cross-compiler. The robot thus learns in a fully autonomous way, the serial
connection being used only to display on a terminal the messages sent by the robot. The robot tries
to learn to reach the goal area, marked by the light source and a dark paper that can be detected
by an additional light sensor pointing to the ground. The robot has 10 trials of 100 steps each to
learn. In between trials, the robot repositions itself randomly and autonomously, by switching to a
Braitenberg obstacle avoidance behavior, also coded in a neural network.
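A Braitenberg-style avoidance rule of the kind used for repositioning can be sketched in a few lines; the sensor-to-motor weights below are illustrative placeholders, not the values used on the Khepera.

```python
import numpy as np

def braitenberg_step(ir, base_speed=0.3):
    """Map the Khepera's 8 IR proximity readings (scaled to 0..1, where
    high means close) to left/right wheel speeds that steer away from
    obstacles. The weight vectors are illustrative placeholders."""
    w_left  = np.array([-0.6, -0.8, -1.0,  1.0,  0.8,  0.6, 0.2, 0.2])
    w_right = np.array([ 0.6,  0.8,  1.0, -1.0, -0.8, -0.6, 0.2, 0.2])
    return base_speed + w_left @ ir, base_speed + w_right @ ir
```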
Figure 4.3: The Khepera robot, equipped with CSEM’s EDI retina, uses a light detector in
the front to detect the goal area (dark ground).

4.2.3
Conclusion and future work
At present the robot has not yet learned to navigate in a test area. Much time had to be spent
developing the network and the update algorithm, as well as compiling and debugging. We intend, however, to pursue this project after the Neuromorphic Workshop ends. Yuri López de Meneses will fine-tune the robot and conduct the real-robot experiments, and Nicol Schraudolph will integrate his ELK1 weight-update algorithm to improve learning. Ranit Aharonov-Barki will
conduct a similar experiment on the Webots Khepera simulator using genetic algorithms (GA) to
solve the same problem. The Webots software compatibility with the real Khepera should allow
us to test the GA-evolved solution on the real robot too and establish a comparison between both
approaches.
4.3
Olfaction: modeling and experimenting with an artificial nose
This report describes the progress made on the artificial nose project (which became affectionately
known as ’TNose’) during the 1998 Telluride Neuromorphic Workshop. The purpose of attending
the workshop was to combine the artificial nose measurement and instrumentation set-up developed at Tufts University Medical School, Boston, USA by Tim Pearce and others (based upon an earlier system developed by John S. Kauer and Joel White, Department of Neuroscience, and using sensors provided by Todd Dickinson and David Walt, Department of Chemistry; described in Section 4.3.2) with the neuronal modeling package Xmorph, developed at the Institute of Neuroinformatics,
ETH, Switzerland by Paul Verschure. For the first time, this would provide an opportunity both
to collect biologically realistic sensory data (i.e. from large numbers of chemical sensors that are
broadly tuned), as well as permit signal processing using biologically plausible neuronal models.
While existing electronic nose systems typically make use of small arrays of non-specific,
broadly-tuned chemical sensors, these tend to only comprise one sensor of each type or class in
order to maximize the diversity of the array as a whole. The biological system, however, not only
possesses a large repertoire of olfactory receptor protein classes (estimated to be between 300–1,000
in mammals, these are deployed in large numbers (approximately 107 in humans). The replication
of olfactory receptor proteins across a large population probably plays a number of different rˆoles in
the olfactory pathway, but it is clear that one emergent property of this arrangement is a sensitivity
enhancement, brought about by an increase in certainty in the sensory signal. Practical artificial
nose systems have yet to take advantage of an analogous implementation of this mechanism in the
biology, which is the purpose of this study. By having just one sensor of each class, artificial noses
are generally deprived of the opportunity to perform any statistical estimates of the signal, and so
demonstrate limited sensitivity.
In the optically-based measurement set-up used in this study, we deploy large numbers of os-
tensibly identical dyed silica microspheres which display sensitivity to a wide range of chemicals.
This enables us, for the first time, to exploit the statistics of the data obtained from an artificial nose
in order to investigate strategies for sensitivity enhancement. While standard statistical analyses of
these data have already been made (outlined in Section 4.3.2), the Xmorph neuronal modeling pack-
age provides an opportunity to apply some biologically plausible processing strategies (outlined in
Section 4.3.4).
4.3.1
Biological Relevance
A model for the way information is integrated at the first processing site of the olfactory bulb is
shown in Figure 4.4. Molecular stimuli diffuse through a mucous layer coating the olfactory epithelium to interact with a large population of receptor neurons of 300–1,000 different types. Receptors
demonstrate broadly-tuned responses to a wide range of odorants, displaying peak sensitivity to par-
ticular groups of compounds. Receptor neurons generally depolarise to generate action potentials,
and it is the pattern of activation across the receptor population as a whole that is thought to encode
the stimulus. The axons of receptor cells fasciculate through the cribriform plate to innervate regions
of the olfactory bulb known as glomeruli. These regions are typified by densely packed synapses
between the axons of receptor cells, the primary dendrites of mitral/tufted cells situated deeper in
the bulb, and periglomerular cells situated more superficially. Recent studies suggest that receptor
neurons express one or at most a few putative receptor proteins from a large family of the genome.
Furthermore, the axons of receptor cells expressing the same protein, or permutation of multiple
proteins, tend to project to a single glomerulus, or perhaps two neighboring sites. This arrangement is schematised in Figure 4.4, in which n receptors (n is typically 2,500 in mammals) expressing the same proteins converge onto a single glomerulus region.
Figure 4.4: A schematic of the first stage of the biological olfactory pathway: odour stimuli activate receptor neurons, whose axons converge onto glomeruli.
In the simplest scheme, we can consider the spike-trains generated by individual receptors as statistically independent Poisson processes, where the probability of observing N = X action potentials within a time-window T is governed by the Poisson distribution

P(N = X) = \frac{r^X e^{-r}}{X!}    (4.3)

where r = k_s T and k_s is the mean firing rate expected for each stimulus s. Since the receptor cell displays preferential tuning to particular stimuli, we would expect k_s to vary for a particular receptor over a test-set of odorants.
[Schematic: arc lamp -> excitation filter -> dichroic mirror -> microscope lens -> beads; fluorescence
returns through the dichroic mirror and emission filter to a camera feeding a frame grabber in the
computer; odorant delivery is computer controlled.]
Figure 4.5: Schematic diagram of the measurement set-up used to acquire chemosensory
data as part of the TNose experiment.
However, one effect of convergence of receptor input at the glomerulus might be to aggregate
multiple spike-trains over a period of time. So while the statistics of spike generation at receptor
cells may be governed by r, at the glomerulus n r spikes are expected on average in any time-window T.
We can therefore consider the spiking activity at each glomerulus as another Poisson process,
but now with mean g = n r.
We can derive the signal-to-noise ratio enhancement of this convergent architecture by considering
the variance of the aggregated signal at the glomerulus, g, compared with the variance of the
individual receptor spike-trains, r:

    SNR = g / g^{1/2} = g^{1/2} = (n r)^{1/2} = r^{1/2} √n        (4.4)

and so we expect an improvement in sensitivity to follow √n with increasing receptor convergence,
n. This is a form of hyperacuity where the biology takes advantage of the statistics of sensory
information in order to generate system sensitivity that is greater than that of the underlying detectors.
In this report we will consider analogous arrangements in an artificial nose in order to enhance
sensitivity.
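This √n prediction is easy to check numerically. The following MATLAB sketch (ours, not part of the original study; it assumes the Statistics Toolbox for poissrnd, and all constants are illustrative) draws Poisson spike counts for n converging receptors and compares the empirical SNR of the aggregated count with the predicted √(n r):

    % Monte-Carlo check of Equation 4.4: the SNR of the aggregated
    % glomerular count should grow as sqrt(n*r).
    r = 5;                           % mean count per receptor per window
    ns = [1 10 100 1000];            % receptor convergence values to test
    trials = 10000;                  % Monte-Carlo repetitions
    snr = zeros(size(ns));
    for k = 1:length(ns)
        g = sum(poissrnd(r, ns(k), trials), 1);   % aggregated counts
        snr(k) = mean(g) / std(g);                % empirical SNR
    end
    disp([ns(:) snr(:) sqrt(ns(:) * r)]);         % empirical vs predicted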
4.3.2 Measurement Set-up
Existing electronic nose systems are unable to exploit the statistics of sensory information, since
generally only a single sensor of each type or class is deployed so that limited size arrays can be
endowed with as much sensor diversity as possible. For this reason, we propose a novel approach
of deploying large numbers of identical sensor types in order to investigate schemes for sensitivity
enhancement. To produce large sensor numbers, ca. 3 µm porous silica microspheres (beads) that
directly adsorb solvatochromic dyes have been supplied by Dickinson & Walt. The dye/matrix
combination alters its fluorescence properties under different chemical environments which can be
detected using a simple optical set-up, shown in Figure 4.5.
In this arrangement beads are deposited on a glass slide, and excited by green light (wavelength
530 nm). Optical-quality bandpass filters are used in order to ensure that the light exciting the beads
is narrowband (10 nm optical bandpass filter). Under these conditions, the beads fluoresce at lower
energy, or longer wavelength (typically 640 nm) which is detected using a low-cost 8-bit resolution
CCD video camera. A dichroic mirror is used within the optical set-up to ensure that none of the
excitation wavelength is detected by the camera. The peak emission wavelength of the fluorescence
signal is modulated (both positively and negatively) by the presence of chemicals local to the bead
environment. By using a narrow-band optical filter, we can observe the modulation of the emission
spectra at one particular wavelength of interest, while applying a variety of chemical analytes. This
is viewed as a gray-scale intensity shift at the CCD camera. The optical hardware is computer
controlled under NI LabVIEW in order to synchronize the sampling activities: illuminating the beads,
applying odor delivery, and measuring the bead intensity shift.
[Plots: (a) Example Single Bead Response; (b) Aggregated Bead Responses. Both show standardised
luminosity (0.00-0.16) against frame number (0-40), with traces for toluene dilutions 1:10, 1:15,
1:24, 1:40, 1:61, 1:80, 1:120, 1:171, 1:300, and air.]
Figure 4.6: Bead responses to varying dilutions of saturated toluene vapor. (a) A single
bead response to toluene, indicating concentration discrimination down to 1:61 dilution,
(b) the aggregated mean response of 201 beads, indicating discrimination down to 1:80
dilution. Error bars indicate the standard error in the mean for the aggregated response.
As described by Pearce et al. (1998), independent measurements made within a single video
frame generate large data-sets of bead responses for statistical analysis. The beads demonstrate
large, reproducible, and reversible responses to most organic vapors. Figure 4.6(a) summarizes the
performance of a single bead in detecting varying dilutions of toluene at Standard Temperature and
Pressure (STP). Using a single bead response it is clear that discrimination is possible down to
dilutions of 1:61 of toluene at SVP, before the sensor response descends into noise. Figure 4.6(b)
shows how the aggregated mean signal of 201 bead measurements can be used in order to improve
the discriminability of the system, in an analogous way to the early stages of the biological olfactory
system considered in Section 4.3.1. Using the combined signal it is possible to discriminate 1:80 of
toluene at SVP, under the same conditions as for the single measurement.
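A minimal MATLAB sketch of the aggregation behind Figure 4.6(b) follows. The per-bead luminosity matrix here is a synthetic stand-in, since the raw measurements are not reproduced in this report:

    % Aggregate a frames-by-beads matrix of standardised luminosity
    % shifts into a mean response with standard-error bars.
    nFrames = 40; nBeads = 201;
    profile = 0.02 * sin(linspace(0, pi, nFrames))';      % toy response
    frames = repmat(profile, 1, nBeads) + 0.05 * randn(nFrames, nBeads);
    aggMean = mean(frames, 2);                   % mean across beads
    aggSem = std(frames, 0, 2) / sqrt(nBeads);   % standard error
    errorbar((1:nFrames)', aggMean, aggSem);
    xlabel('Frame'); ylabel('Standardised luminosity');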
By investigating randomly sampled subsets of the total pool of 201 measurements it was possible
to quantify this sensitivity enhancement with increasing bead numbers. Overall, SNR enhancement
was shown to closely follow √n, with n bead measurements being averaged. This result indicates
that it is possible to implement a biologically inspired sensitivity enhancement scheme that also
closely follows our model of the same process in the biological system. As such, this scheme provides
a practical method for sensitivity enhancement in artificial nose systems that is independent
of on-going improvements to the sensor technology.
In this project, our purpose was then to compare the sensitivity enhancement performance ob-
tained using this simple statistical approach with the results obtained from a simple neuronal model
implemented using Xmorph.
4.3.3 Combining TNose and Xmorph
Xmorph was designed as a tool to satisfy a diverse set of requirements: firstly, to make it easy to
define complex networks of heterogeneous neural structures that can be interfaced with external
devices; secondly, to permit system-level (or macroscopic) descriptions of neural elements,
without losing biological relevance. In our experiment, the aim was to combine the modeling ca-
pabilities of Xmorph with the data acquisition system described in Section 4.3.2. This permits
the real-time analysis of large numbers of chemically sensitive bead measurements via a neuronal
model implemented under Xmorph.
In the first instance, data were shared by the two systems through image files. The software for
the data acquisition system was modified to generate TIFF images of each video frame during odor
presentation, and Xmorph was subsequently modified to read these image files directly. In this way,
during the early development of the combined system it was possible to “replay” the stored images
under different model conditions, for optimization purposes. The properties of the model are now
described.
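As a sketch of this replay interface (the frame file-naming scheme below is our invention; the report only states that TIFF frames were written by the acquisition software and read back by Xmorph):

    % Replay stored TIFF frames of bead responses into a model.
    files = dir('tnose_frame_*.tif');       % hypothetical file names
    for k = 1:numel(files)
        img = double(imread(files(k).name)) / 255;   % normalise to [0,1]
        % ... present img to the model's receptor population here ...
    end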
4.3.4 Olfactory Bulb Modeling
Process TNose consists of 9 simulated neural populations (Figure 4.7).
[Diagram: Receptors (ReceptorCell, N = 2500) project to Glomerulus1, Glomerulus2, and Glomerulus3
(GlomerulusCell, N = 800 each); each Glomerulus projects to its Mitral population (MitralCell, N = 9
each); the Mitral populations project to Cortex (CortexCell, N = 4) and excite Granule (GranuleCell,
N = 36), which inhibits them in return.]
Figure 4.7: Circuit of process TNose. Each rectangle represents one population; the name, cell type,
and size of each group is listed in the rectangle. Red circles: excitatory connections; blue
rectangles: inhibitory connections. Detailed properties of the neuron and synapse types used are
listed in Tables 4.1 and 4.2.
A 50x50 image reflecting the bead responses was projected onto the three Glomerulus and Mitral
population streams. Each stream was supposed to have preferential responses to a specific odor. The
forward excitation to the Granule cells and their recurrent inhibition would impose selectivity on
the odor discrimination expressed by the read-out system (conveniently called “Cortex”).
4.3.5 Cell types
Table 4.1 lists the properties of the five different cell types of process TNose.

Name            Type         Ex    Inh   Slope  ?    P    Vm
ReceptorCell    IntegFire    1     0     0      1    0.8  0.95
GlomerulusCell  LinearThres  1     0     0      0    1    0
MitralCell      IntegFire    0.22  1     0      1    1    0.8
GranuleCell     IntegFire    1     0     0      0    1    0
CortexCell      IntegFire    1     0     0      0    1    0
Table 4.1: The cell types of process TNose
Most of the cell types used were of the integrate-and-fire type. These cells emit a spike when the
integrated input exceeds a firing threshold.
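As an illustration, a minimal integrate-and-fire cell of this kind can be written in a few lines of MATLAB; the parameters below are invented for the example and are not the values from Table 4.1:

    % Leaky integrate-and-fire neuron: integrate the input and emit a
    % spike, then reset, whenever the membrane variable crosses threshold.
    dt = 1e-3; T = 0.5; steps = round(T / dt);
    tau = 20e-3; thresh = 1; Vm = 0;
    I = 60;                            % constant input (arbitrary units)
    spikes = zeros(1, steps);
    for t = 1:steps
        Vm = Vm + dt * (-Vm / tau + I);    % leaky integration
        if Vm >= thresh
            spikes(t) = 1;                 % emit a spike
            Vm = 0;                        % reset the membrane
        end
    end
    fprintf('firing rate: %.1f Hz\n', sum(spikes) / T);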
4.3.6 Synapse types
Table 4.2 lists the properties of the 15 different synapse types of process TNose.

Name                      Arborization  Width  Height  P     Min  Max   Self connect
Glomerulus1-Ex-Receptors  BLOCK         21     21      0.05  0    0.1   Off
Glomerulus2-Ex-Receptors  BLOCK         21     21      0.05  0    0.1   Off
Glomerulus3-Ex-Receptors  BLOCK         21     21      0.05  0    0.1   Off
Mitral1-Ex-Glomerulus1    BLOCK         9      9       1     0    0.02  Off
Mitral2-Ex-Glomerulus2    BLOCK         9      9       1     0    0.02  Off
Mitral3-Ex-Glomerulus3    BLOCK         9      9       1     0    0.02  Off
Cortex-Ex-Mitral1         BLOCK         1      1       1     0    0     Off
Cortex-Ex-Mitral2         BLOCK         1      1       1     0    0     Off
Cortex-Ex-Mitral3         BLOCK         1      1       1     0    0     Off
Granule-Ex-Mitral1        BLOCK         1      1       1     0    0     Off
Granule-Ex-Mitral2        BLOCK         1      1       1     0    0     Off
Granule-Ex-Mitral3        BLOCK         1      1       1     0    0     Off
Mitral3-Inh-Granule       BLOCK         1      1       1     0    0     Off
Mitral2-Inh-Granule       BLOCK         1      1       1     0    0     Off
Mitral1-Inh-Granule       BLOCK         1      1       1     0    0     Off
Table 4.2: The synapse types of process TNose
Most interconnections were one-to-one, preserving the topology of the system. The Glomerulus
populations receive a broad projective field from the responses of the receptor cells. These are
the arborizations which, through a learning process, were supposed to develop specific responses to
particular odors.
4.3.7 Results
In our experiments we evaluated whether stable identification of sequences of receptor responses to
odor stimuli could be learned, using a local correlation-based learning rule. Learning would imply
that the glomerulus neuron would develop a “receptive field” which would express the prototypical
activation pattern triggered by a specific odor. Figure 4.8 gives three examples of bead responses to
pentanol taken from our bead response database (we used a total of 50 response patterns).
[Images, three panels: A, B: typical bead responses; C: typical response of the receptor population.]
Figure 4.8: A, B: Images taken from the response database showing a typical response of
beads to pentanol. C: Typical response of the cells of population Receptors to the beads.
As an evaluation of our learning method we continuously played 50 bead response patterns
(corresponding to increasing concentrations of pentanol) into the receptors of the model bulb. The
receptors also showed a spontaneous background activity of about 50 percent. As a test case
(“distractor”) the last
pattern of the bead sequence was a mirror image of the preceding one. The assumption was that a
non-learning glomerulus would not cease to respond to this test image, while a learning glomerulus
would. One Glomerulus, GlomerulusPassive, received projections from the receptors with fixed
synaptic efficacies. The activation threshold of this cell was tuned to a minimal value in order for it
to respond to its inputs. The results of this test are summarized in Figure 4.9.
The average activation levels of the two evaluated glomeruli (Figure 4.9 B and C) follow the
activation level of the receptors (Figure 4.9 A). The learning glomerulus, however, in most cases
does not respond to the distractor while the non-learning glomerulus does. This implies that it has
developed a receptive field which appropriately distinguishes pentanol bead responses from other
activity patterns. As an illustration of the specificity of this learning process Figure 4.10 displays
the receptive field developed by this glomerulus neuron in this experiment.
The developed receptive field allowed the learning glomerulus to distinguish between the actual
pentanol bead responses and the distractor inserted in the image sequence. This gave it enhanced
performance compared to the non-learning case.
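The report does not give the exact form of the local correlation-based rule used in Xmorph; the following MATLAB fragment is a generic instance of such a rule, with invented constants and a random stand-in for the bead patterns, included only to make the idea concrete:

    % Local correlation-based (Hebbian) update of glomerular synapses:
    % inputs that are active when the glomerulus is active are
    % strengthened, so a receptive field forms around recurring patterns.
    nRec = 2500; eta = 0.01; wmax = 0.1;
    w = wmax * rand(nRec, 1);              % initial receptor synapses
    for step = 1:50                        % one bead pattern per step
        x = double(rand(nRec, 1) < 0.5);   % stand-in receptor spikes
        y = (w' * x) / nRec;               % glomerulus activation
        w = w + eta * y * (x - w);         % correlation-based update
        w = min(max(w, 0), wmax);          % keep within synaptic range
    end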
4.3.8 Conclusion and Future Work
The experiments performed during the 1998 workshop provided a stepping stone towards the de-
velopment of a biologically realistic model of the olfactory bulb which would discriminate between
odor responses derived from an artificial nose. Although much additional work needs to be done,
the present experiments demonstrated the feasibility of such an approach, which incorporates both
digital and neuromorphic technologies.
Future work would require a comparison between the statistical detection and neuromorphic
models, both described in this report. Of particular interest would be to obtain a population of
[Plots, three panels of group statistics over time (ms): A: Average response in receptor population;
B: response of the non-learning Glomerulus; C: response of the learning Glomerulus. Each panel shows
activity (Act), membrane potential (Pot), and excitatory input (Ex) over the window 122148-123165 ms.]
Figure 4.9: Responses of the olfactory bulb model. A: Average activity in the receptor population.
The drop in both traces indicates the presentation of the “distractor” at the end of a sequence.
B: The non-learning glomerulus. C: The learning glomerulus. Gray: activity; purple: membrane
potential; red: excitatory input. Time window 122148 to 123165 ms.
different bead classes, enabling the development of a model that is more faithful to the biology.
One method for achieving this would be to invite participants with an interest in olfaction to the
next workshop, in order to combine a variety of sensor technologies into a single processing model
under Xmorph. The authors intend to pursue this possibility before the next workshop.
Part of this work was supported by the National Institutes of Health (NIH), the Office of Naval
Research (ONR), and DARPA.
4.4 Audition: auditory localization using the Koala robot
(T. Zahn, P.F.M.J. Verschure)
During the workshop I joined the auditory project group. My major goal was to implement a
sound localization algorithm on the KOALA robot in order to make it move toward a self-selected
target sound source. The localization is based on a software model of the inner ear, some parts of
the cochlear nucleus, the olivary complex, and the inferior colliculus. All neuron models have been
derived from a leaky integrate-and-fire model implemented as an aVLSI test chip. The system is
based exclusively on Interaural Time Difference (ITD) evaluation, using two stereo microphones
with a base of 25 cm. The resulting localization vector could be combined with visually obtained
localization information in order to improve performance through multisensory cues.
Figure 4.10: Receptive field acquired by the learning glomerulus (yellow circle on the right-hand
side) in the pentanol series. Bright red patches (dark gray for BW viewers) indicate strong synapses,
while gray patches (light gray for BW viewers) indicate weak synapses.
We employed a Pentium 200 as host and used radio communication to transfer the sound signal
from the robot to the host. The control signal was passed to the IKHEP package of the Institute of
Neuroinformatics (ETHZ) and was there transferred into motor commands for the KOALA wheels. After
some laborious days with great support from Dr. Verschure we met our goal, and the Koala moved
toward a clapping sound source. A video of this is available in Zurich.
Based on these results we have begun to incorporate the audio processing into the Xmorph simulation
environment, and we are continuing this work in collaboration between Zurich and Ilmenau. At the
same time we are trying to improve the model toward speech source identification, which we started
to work on in Telluride but could not finish due to some library problems. The neuromorphic model
employed uses the following components:
- two all-pole gammatone filter cascades with 16 channels, tuned between 100 Hz and 2 kHz, for the left and right inner ear;
- a simple hair cell-ganglion model (Figure 4.11), assigned once to each frequency channel;
- 16 counter-propagating delay lines projecting to 16 x 33 coincidence detector cells of the IF type shown in Figure 4.12; and
- 33 auditory space map cells of the same nature, representing the azimuthal plane with no vertical or front-back information.

Figure 4.11: functional structure of the hair cell-ganglion complex
In front of the system there is an envelope-based onset detection stage that uses 500 samples of the
44100 Hz sound signal to perform the computation. This stage will be skipped in the real aVLSI
implementation we are currently working on. Furthermore, the system will be extended by a frequency
sharpening layer for each time delay and a Winner-Take-All network for the localization in the
azimuthal vector.
The whole system is spike-based from the stage of the receptor cells to the localization vector,
where the rate is calculated and transferred into a proportional motor signal. The model is also
available as a MATLAB simulation package. The simulated activity in the azimuthal vector for a hand
clap from 30 degrees right is shown in Figure 4.13.

Figure 4.12: extended model of the IF neuron assuming uniform synapses
[Plot: Activity in the azimuthal vector; azimuth (degrees, -90 to 90) against time (samples, 0-2000).]
Figure 4.13: spiking activity in the azimuthal vector resulting from a hand clap from 30 degrees right.
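The delay-line and coincidence stage at the core of this model is easy to illustrate in isolation. The MATLAB toy below is ours, not the workshop code; the signal, the 5-sample delay, and the ±16-sample search range are arbitrary. For scale, 33 cells at 44100 Hz and a 25 cm microphone base would have to cover roughly ±32 samples of ITD:

    % Coincidence detection over candidate interaural delays: the cell
    % whose delay compensates the true ITD responds most strongly.
    s = randn(1, 2000);                       % broadband source
    left = s;
    right = [zeros(1, 5) s(1:end-5)];         % right ear 5 samples late
    itd = -16:16;                             % candidate delays (samples)
    c = zeros(size(itd));
    for k = 1:numel(itd)
        d = itd(k);
        if d >= 0
            c(k) = sum(left(1:end-d) .* right(1+d:end));
        else
            c(k) = sum(left(1-d:end) .* right(1:end+d));
        end
    end
    [~, best] = max(c);                       % winner-take-all readout
    fprintf('estimated ITD: %d samples\n', itd(best));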
During the workshop the scientists of Prof. Shamma's lab provided a hardware board performing
the same task, including either the cochlea chip of Andreas Andreou or that of Andre van Schaik. We
therefore had some good discussions about the problems and advantages of the hardware and agreed
to collaborate after the workshop. Prof. Andreou provided me with his cochlea chip to be used as
the front end of a truly analog localization system. In the project group we also discussed ways
to include front-back information with Dr. Horiuchi, and the problems with spike-based WTA
structures with Dr. Indiveri and Dr. Horiuchi. We will stay in touch to exchange experimental
results on that as well.
Finally, the time was again too short to get everything done, but we reached the major goal of making
the robot move based on a neuromorphic model. Working with all of these scientists was a
great experience; it not only saved me a lot of communication effort but also gave me new
ideas and made me aware of problems and limitations in my models.
4.5 Vision: view based navigation using the Khepera robot
(James J. Clark, Regina Mudra, Nicol N. Schraudolph)
A recent study of route learning in automobile drivers (Beusmans et al., 1995) showed that people
only retained visual information about areas at which they needed to decide whether to
make a motor action, such as turning at an intersection. At other, "passive", locations, people
exhibited very poor recall of visual details. This suggests that drivers are using some sort of view-
based recognition of the scene and learn to associate these views with motor actions.
At the 1998 Telluride workshop we decided to see if we could train a Khepera mobile robot to
navigate a fixed route using such a view-based strategy. We modeled our approach loosely on the
work of Bachelder and Waxman (1995) who also developed a view-based technique for robot map-
learning. There were two major differences between our effort and that of Bachelder and Waxman:
first, we used a simple 3-layer neural network, rather than the complicated fuzzy-ART network used
by Bachelder and Waxman, and secondly we aimed at getting the robot to learn a ”route”, rather
than a complete spatial ”map”.
Our goal was to have a Khepera robot learn to navigate autonomously a route consisting of a
number of 45 degree turns separated by straight segments in an environment consisting of a number
of objects that could potentially be used as visual landmarks. A view of the environment is shown
in Figure 4.14.
Figure 4.14: Robot’s environment
4.5.1 Input Preprocessing
Using the Matlab environment on a PC running Linux, an image frame would be acquired from the
on-board camera of the Khepera, processed, and then a motor command would be computed and
sent to the Khepera, whereupon the cycle would repeat. The color images acquired by the Khepera
were converted to monochrome and then subsampled to a size of 26x53 pixels. Subsampling reduces
the amount of information that needs to be handled by the neural network, but also reduces the
effect of small shifts in position on the view-based recognition process that the network has to learn.
These subsampled images were then bandpass filtered to enhance edge features and to minimize
illumination variation effects. An example of such an image acquired by the Khepera in its testing
environment is shown in Figure 4.15. The images were then subsampled further by extracting four
rows - rows 1, 9, 17, 25. This was done to reduce the dimensionality (to 4x53 = 212) of the input
vector on which the Principal Component Analysis was done, so that we could obtain the principal
components in a reasonable amount of time. We tried to compute principal components of larger
input images, but the unpredictable status of the lab computers (which were often rebooted to switch
between Linux and Windows) prevented run-times of more than a few hours.
A set of 1000 such 212-element images was acquired at random positions and orientations
of the robot in the environment. This set of images was then used to compute a set of Principal
Components. This was done by computing the eigenvectors of the covariance matrix, C = X'X,
of the matrix X whose rows correspond to the individual image vectors. X was, therefore, a
matrix with 1000 rows and 212 columns. The covariance matrix was a square matrix with 212 rows
and columns. We decided (arbitrarily) to retain as principal components those eigenvectors of C
whose eigenvalues were greater than 10% of the maximum eigenvalue. This resulted in 50 principal
components.
In order to further reduce the effect of shift in position on the view recognition process we
windowed (with a Gaussian window) the rows of the 4x53 image array and then circularly shifted
the rows so that the centroid of the image was centered in the array. The windowing reduced any
adverse edge effects due to the circular shifting. We found that this bit of shift invariance improved
the learning rate of the network as it tried to learn a route.
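A condensed MATLAB sketch of this preprocessing chain (windowing, centroid centering, PCA projection) follows; the image matrix is a random stand-in and the Gaussian width is our choice, as the report does not state it:

    % Principal components from the training set, then one image is
    % windowed, centroid-centered, and projected onto the components.
    X = rand(1000, 212);                      % stand-in for the image set
    C = X' * X;                               % covariance, as in the text
    [V, D] = eig(C);
    [vals, order] = sort(diag(D), 'descend');
    PC = V(:, order(vals > 0.1 * vals(1)));   % keep PCs above 10% of max
    img = reshape(X(1, :), 4, 53);            % one image, four rows
    w = exp(-((1:53) - 27).^2 / (2 * 10^2));  % Gaussian window, sigma 10
    img = img .* repmat(w, 4, 1);             % suppress edge effects
    cen = round(sum((1:53) .* sum(img, 1)) / sum(img(:)));
    img = circshift(img, [0, 27 - cen]);      % center the centroid
    feature = img(:)' * PC;                   % input vector for the net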
4.5.2 Neural Network
The neural network we used was a standard three-layer feedforward network. The input layer con-
sisted of 50 units, associated with the pre-processed image vector. The output layer consisted of 3
units, which encoded the particular motor action to carry out (turn left, turn right, stop). The ”go
straight” motor action was implied by lack of activity in the three output units.
The network was trained by having the robot move through a pre-set trajectory and manually
providing desired output values (i.e. whether to turn left, turn right, go straight, or stop) at each
stopping point along the trajectory, along with the preprocessed image acquired at these points.
The ELK1 learning technique was used (Schraudolph 1998a). This is a back-propagation algorithm
with an adaptive gain setting process that improves convergence rates. In addition, shortcut weights
(which are weights connecting the input layer directly to the output layer) were used (Schraudolph
1998b). The use of shortcut weights has the potential to reduce the blurring and attenuation of
the back-propagated error signal, and to reduce time needed to learn the linear component of the
mapping.
During the training and processing of our neural network, we used as input to the network the
projection (dot-product) of the pre-processed (212 element vector) input image with the 50 principal
components. Thus, the neural network input layer had 50 units.
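For concreteness, a forward pass through such a network with shortcut weights looks as follows in MATLAB; the hidden-layer size and the random weights are illustrative, since the report does not specify them:

    % Three-layer feedforward pass with shortcut (input-to-output) weights.
    nIn = 50; nHid = 10; nOut = 3;
    W1 = 0.1 * randn(nHid, nIn);       % input -> hidden
    W2 = 0.1 * randn(nOut, nHid);      % hidden -> output
    Ws = 0.1 * randn(nOut, nIn);       % shortcut: input -> output
    x = randn(nIn, 1);                 % 50 principal-component projections
    h = tanh(W1 * x);                  % hidden layer
    y = tanh(W2 * h + Ws * x);         % turn-left / turn-right / stop units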
4.5.3 Results to Date
We trained the network with image data taken as the Khepera robot executed a pre-programmed
trajectory within its environment (which can be seen as the line drawn on the ground in Figure 4.15).
As we had not yet implemented a homing routine for the robot, the robot needed to be manually
returned to the starting point of the route after each learning trial. This requirement significantly
slowed the training process and taxed the patience of the experimenter (Regina Mudra). We were
able to carry out 200 training runs. The “learning curve” of the neural network is shown in Figure 4.16.
Note that the learning error drops quickly in the first 10 or so trials, but improvement slows markedly after that. Our
interpretation of this is that the network quickly learns to ”go straight”, but will take much longer
to learn to turn left or right. This is due to the fact that most of the images obtained by the robot as
it moves along its route are at locations where it is supposed to go straight. There are only a few
images in each training run where the robot is to turn left or right.
[Image display: range [-38.2, 24.5], dimensions [26, 53].]
Figure 4.15: Robot’s trajectory
[Plot of the learning curve: error (1.5-2.4) on the y-axis, x-axis 0-6.]
Figure 4.16: Learning curve
Our conclusion is that the learning does seem to be proceeding, but that more training runs are
necessary. We have plans to continue this work at the ETH lab in Zurich, where Regina will develop
an automated testing process, allowing much more extensive training sessions.
The observed ability of humans to learn routes in less than 10 learning trials (Beusmans et al
1995) is an indication that our approach is perhaps flawed. One could argue, however, that the
human visual system has spent many years and sensed millions of images in learning to categorize
scenes and objects, which our navigation neural network is trying to do from scratch. In some sense,
the recognition of scene views and objects within the scenes is the hard part to learn; associating
these views with motor actions required to execute a route is the easy part. So, one possible con-
clusion is that we should spend more effort on the scene recognition aspect of our problem before
trying to solve the full route navigation task.
4.5.4 References
Bachelder, I.A. and Waxman, A.M., “A view-based neurocomputational system for relational map-
making and navigation in visual environments”, Robotics and Autonomous Systems, Vol. 16, pp
267-298, 1995
Beusmans, J., Aginsky, V., Harris, C., and Rensink, R., “Analyzing situation awareness during
wayfinding in a driving simulator”, Nissan CBR technical report TR 95-4, 1995
Schraudolph, N.N., ”Online local gain adaptation for multi-layer perceptrons”, IDSIA Technical
report 09-98, 1998a
Schraudolph, N.N., ”Slope centering: making shortcut weights effective”, IDSIA Technical
report 32-98, 1998b
4.6 Vision: interfacing a Silicon Retina to a Koala Robot
(Shih Chii Liu and Tobi Delbruck)
We ”modeled” an insect fixation response using a Koala robot and a scanned retina. The idea is
to model the behavior of flies when they try to fixate high contrast stimuli. In our case everything is
highly simplified! We simply wanted to see if a scanned retina with only 300 pixels could be used
for fixation, or perhaps more accurately, line following. We wanted to run everything on the Koala:
the frame acquisition, the cross correlations, and the motor control, to construct an autonomous
demonstration of the use of a small scanned retina on a mobile robot.
The Jörg Kramer 15x21 retina is mounted on the Koala, and image frames are acquired by the Koala
using the onboard digital I/O lines and ADC, using Mark Blanchard's code.
Retina frames are cross-correlated with 3 full-frame kernels that are tuned to dark vertical bar
stimuli in the left, middle, and right parts of the 2-d image. We think of these 3 cross-correlations
as wide-field cells sensitive to high-contrast dark stimuli in different parts of the visual field.
Comparisons of these correlations drive the motors directly, bang-bang, depending on whether
the left or right correlations differ by more than a fixed threshold.
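A MATLAB sketch of this controller follows; the frame size matches the roughly 300-pixel retina, but the kernels, threshold, and speeds are illustrative stand-ins:

    % Correlate the frame with three dark-vertical-bar templates and
    % steer bang-bang on the left/right difference.
    frame = rand(15, 21);                       % stand-in retina frame
    kL = zeros(15, 21); kL(:, 2:4) = 1;         % left-field template
    kM = zeros(15, 21); kM(:, 10:12) = 1;       % middle-field template
    kR = zeros(15, 21); kR(:, 18:20) = 1;       % right-field template
    cL = sum(sum((1 - frame) .* kL));           % respond to dark stimuli
    cM = sum(sum((1 - frame) .* kM));           % (middle signals fixation)
    cR = sum(sum((1 - frame) .* kR));
    thresh = 1; speed = 10;
    if cL - cR > thresh
        motor = [-speed, speed];                % rotate toward the left
    elseif cR - cL > thresh
        motor = [speed, -speed];                % rotate toward the right
    else
        motor = [0, 0];                         % hold course
    end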
Performance in line following is marginal at present, although cross correlations provide fairly
robust information about line position left or right.
We did not get to the interesting part of the project, that is, using the 2-d retina data to recognize
2-d features, for example crossings, to produce more complex behavior.
4.7 Neuromorphic Flying Robots
(Nici Schraudolph)
Many birds and insects exhibit energy-efficient, high-performance flight characteristics un-
matched by conventional technology. Through bio- and neuromorphic design of flying machines
we hope to learn more about the aerodynamics and sensorimotor control strategies employed by na-
ture to this end. In addition, such flying robots are the ultimate all-terrain vehicles, with applications
ranging from aerial surveillance to planetary exploration. Their design poses interesting challenges
to neuromorphic engineering:
- Bottom-up, low-level approaches to navigation in three dimensions have not yet been widely explored, presumably due to a lack of (small and cheap) flying robots.
- The extreme tightness and criticality of control loops in many flying systems adds much difficulty to their design.
- The severe power and weight constraints these machines operate under call for small, lightweight, low-power, and highly integrated smart sensors. Now doesn't that sound just like what aVLSI is all about?
As a result of these discussions, Mark Tilden, David Nicholson, and I decided to build a
neuromorphic flying robot during the last week of the workshop. We fitted a helium balloon with a
small motor and propeller, driven by one of Mark’s new miniature oscillator boards. Two directional
light sensors, mounted on top and bottom of the vehicle, modulated the oscillator’s duty cycle so as
to generate a photophobic reflex.
Although this robot had no explicit directional control, a deliberate tilt of its sensorimotor axis,
coupled with the balloon’s inherent tendency towards rotation, resulted in consistent and very “life-
like” light-avoidance responses in three dimensions. We were entertained by its emergent behaviors
such as repeated docking attempts with the (dark) underside of a ceiling lamp.
Our possibilities were limited by the fact that we had to supply power from the ground. Next
year we hope to construct a fully autonomous, battery-operated version. Perhaps we can put smart
silicon sensors (such as Giacomo's edge tracker) on board? We are also envisioning “Ben Hur”-style
autonomous balloon duels, with pins ...
4.8 Optomotor response with an aerodynamic actuator
(Thomas Netter and Alan Stocker)
This project experimented with the optomotor response using an aVLSI chip mounted on a hanging
apparatus so that it could oscillate freely.
Towards the beginning of the workshop Reid Harrison successfully implemented optomotor re-
sponse on a Koala rover after only a couple of hours of building and programming (see Section 4.10).
The installation of his aVLSI retina was inspired by research on the fly. The rover reacted to left or
right relative contrast motion by orienting itself towards the flow. By correctly setting gains, Reid
managed to obtain tracking behavior.
Reid’s work emphasized the importance of tuning the integration lag to obtain adequate response
of the vehicle whilst minimizing oscillations of the optomotor response. Nevertheless the rover
setup prevents inertial coupling with the optomotor response. We decided to replicate more closely
the setup used in fly experiments.
A cardboard construction supporting:
- an aVLSI retinotopic motion sensor designed by Alan Stocker,
- an amplifier and analog-to-digital converter,
- a Basic Stamp 2 microcontroller to generate pulses for a servo, and
- a rudder blown by an electrical motor with propeller
was hung using a nylon thread (Figure 4.17).
The circuit was stimulated by shifting a pattern of black and white stripes (each stripe is about
2cm wide) left and right in front of the motion sensor.
Figure 4.17: The optomotor response setup under the command of its authors.
The setup was only built during the last week of the workshop, leaving just enough time for a
demonstration, which worked on the first try. No tuning was necessary to obtain tracking of the black
and white comb at angular speeds estimated at up to 30 deg/s. Angular performance could certainly be
improved by revising the design and using an external power supply instead of the rather heavy
laptop computer battery carried underneath the “fuselage”. An interesting side-effect of this 12V
battery is that a voltage regulator was required to lower the voltage to 5V. But an error in the voltage
regulator documentation induced an unexpected experiment: the whole circuit operated at 12V for a
few seconds without suffering from pyrexia (to put it in Norbert Wiener’s Cybernetics terminology).
Alan's 26-pixel Smooth Optical Flow linear retina was mounted with an 8 mm focal-length
lens. Its primary outputs are two continuous analog signals responding to either left or right motion.
These signals are compared and amplified with op-amp circuitry. The system did not react when
the sheet of stripes was shifted at 1 m from the lens. This short-sightedness is somewhat analogous
to insect vision. At a closer distance, reactivity with black and white stripes was good but a sheet
with blue and white stripes (drawn with a felt marker) and providing less contrast did not elicit any
response from the retina chip. Alan adjusted several on-chip biases using potentiometers at an early
stage of the construction and it is possible that contrast sensitivity could be improved by readjusting
these biases.
The overall installation turned out to be fairly simple. Only 15 lines of Basic Stamp code were
necessary to average the signal and time the servo pulses at a 50Hz refresh rate. Programming was
done with the help of a laptop and an oscilloscope. Thomas intends to build a lighter setup for a
more systematic and quantitative study of its dynamic characteristics.
[Circuit schematic: retina left/right outputs into LM324 op-amps (20K/100K resistor network, 0.1 µF),
then an ADC0820 A/D converter (LSB/MSB, RD, CS lines) read by the Basic Stamp, which drives the
rudder servo.]
Figure 4.18: Optomotor response circuit.
Thanks: Reid Harrison, Timmer Horiuchi, Dan Lee, Yuri Lopez de Meneses, Philippe Pouliquen, Nicol
Schraudolph, Paul Verschure, Chuck Wilson.
4.9 Visual tracking using a silicon retina on a pan-tilt system
(Jörg Kramer, Eduardo Ros Vidal)
The aim of this project was to investigate the possibility of exploiting the parallel preprocessing
performed by an artificial retina to track a moving object in real time using a simple MATLAB
routine to do the remaining processing required.
4.9.1 Experimental setup
A hexagonal silicon retina with a resolution of 125 x 94 pixels was mounted on a pan-tilt system.
The retinal images were acquired by computer via a framegrabber. Image processing was performed
using MATLAB. The pan-tilt system was controlled by the computer via the serial port.
4.9.2 Algorithm
The retina was operated in a mode where adjacent pixels laterally inhibited each other, such that
edges were extracted in parallel. In a first implementation of a MATLAB routine the retina on the
pan-tilt unit tracked the center of mass of the binarized edge image. This had the disadvantage that
in the presence of multiple objects it would track a point between the objects and that the tracking
would also be sensitive to noise. The routine was then modified to track a blob of moving edges.
This was achieved by convolving the binarized edge image with a gaussian kernel and tracking the
position showing the highest value. A threshold was set on this value in order to avoid tracking of
random noise in the absence of any salient object.
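Since the processing was done in MATLAB, the core of this routine is only a few lines; the kernel width and threshold below are illustrative, not the tuned values:

    % Blob tracking: smooth the binarized edge image with a Gaussian
    % and take the maximum, ignoring weak (noise-only) peaks.
    edges = rand(94, 125) > 0.98;               % stand-in edge image
    [gx, gy] = meshgrid(-10:10, -10:10);
    g = exp(-(gx.^2 + gy.^2) / (2 * 4^2));      % Gaussian kernel, sigma 4
    sal = conv2(double(edges), g, 'same');      % smoothed edge density
    [val, idx] = max(sal(:));
    if val > 2                                  % threshold against noise
        [row, col] = ind2sub(size(sal), idx);   % new pan-tilt target
    end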
The main distractors from the target were the strong flicker of the a.c.-driven room lighting and
the apparent motion of the background induced by the actual motion of the retina on the pan-tilt system.
The response time of the retina had to be tuned to a large value to get rid of the flicker susceptibility.
Insensitivity to background motion could be achieved by acquiring new target positions while the
retina was not moving and after it had adapted out the background edges from the previous motion.
The movements of the system therefore resembled saccades more than smooth pursuit. In order to
allow fast tracking, the retina was run at a short adaptation time constant.
4.9.3 Results
With optimum tuning of the retina bias voltages and using the blob-tracking routine, the system was
able to reliably track the head or a hand of a person walking around the room, in the presence of
light flicker and a random background. The maximum tracking rate was about 2 Hz. The tracking
algorithm will be expanded to incorporate a preference for more central locations, and hysteresis, to
allow reliable tracking of a continuously moving object in the presence of other moving objects and
to possibly speed up the tracking.
Acknowledgments
Giacomo Indiveri, Alan Stocker and Daniel Lee provided useful help with MATLAB and the pan-tilt
unit.
4.10 Optomotor Response of a Koala Robot with an aVLSI Motion Chip
(Reid Harrison and Thomas Netter)
One of us (Reid Harrison) brought an analog VLSI motion detector array modeled after the HS
cells in the fly. The HS cells are a class of non-spiking neurons found in the lobula plate of the
fly's optic lobe. HS cells respond to full-field visual motion induced when the fly rotates about the
vertical axis; they are visual matched filters for yaw. These cells are known to underlie the well-
studied optomotor response – the ability of the fly to null out rotations during flight by producing a
compensatory torque.
We built a robot model of the optomotor response by integrating our visual motion detector chip
with a Koala mobile robot. We used the A/D converter on the Koala to read the output of the motion
chip, which was in the form of a continuous-time analog voltage. We wrote a simple program on
the Koala which integrated the signal from the chip, multiplied this value by some fixed gain, and
sent this value to the motors. The left and right motors were driven with opposite signs to produce
rotation. The robot was driven towards the direction of motion reported by the chip in order to
cancel the perceived motion.
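In outline, the loop amounts to the following MATLAB sketch; read_motion_chip and set_motors are stand-ins for the Koala's A/D and motor interfaces, which are not documented in this report, and the gain and timing are invented:

    % Optomotor loop: integrate the chip's motion signal and counter-
    % rotate. The two I/O functions are stubs for the real robot API.
    read_motion_chip = @() 0.1 * randn();    % stub: chip output voltage
    set_motors = @(l, r) 0;                  % stub: send wheel speeds
    gain = 0.5; dt = 0.02; integ = 0;
    for step = 1:500
        v = read_motion_chip();              % perceived rotation signal
        integ = integ + v * dt;              % integrate the chip output
        cmd = gain * integ;                  % fixed feedback gain
        set_motors(cmd, -cmd);               % opposite signs -> rotation
    end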
To test the robot, we placed it on a flat sheet of cardboard which was resting on the floor. The
motion detection chip was fitted with a lens, and the lens was adjusted to be parallel to the ground.
The robot was oriented so that it was ”looking” at a cluttered scene about one meter away. This
scene consisted of chairs, cups, fruit, jars, and other typical items found in offices and labs. There
were no ”ideal visual stimuli” such as vertical black-and-white bars. The robot’s implicit task was
to stabilize its orientation relative to these ”distant” visual stimuli while we rotated the cardboard
floor underneath it.
The robot did a good job of stabilizing its orientation as we rotated its floor. We varied the
feedback gain mentioned above and noted the following effects: (1) When the gain was set too
low, the robot did not cancel out all rotation (i.e., there was ”slip”). (2) When the gain was set too
high, the robot would exhibit oscillations once the imposed rotation was halted. These oscillations
showed a fixed amplitude and a frequency of around 1 Hz. It is interesting to note that similar
oscillations have been observed in flies during closed-loop behavioral experiments.
Perhaps the most useful thing we learned from this experiment is the need for a system to
quantitatively record robot movement. As we wish to move beyond anecdotal results, we must
record the robot’s position and (especially for this experiment) orientation with reasonably high
precision. We have initiated discussions on various techniques for recording robot movement, and
we hope that next year we will be able to conduct more robot/chip experiments of this type.
4.11 Locomotion of segmented lamprey-like robots
(Asli Arslan, Elizabeth Brauer, Avis Cohen, Steven DeWeerth, David Nicholson, Nici Schraudolph,
Mario Simoni, Theron Stanford, Mark Tilden, Thelma Williams)
The Locomotor Work Group was formed with the intention of building and testing robots which
would move. This objective was met through two robots based on the lamprey, an eel-like fish.
The lamprey is a simple vertebrate with about 100 segments in the spinal cord. The first robot was
built at the workshop with supplies brought by Mark Tilden. This segmented robot had a head plus 8
segments. The DeWeerth lab at the Georgia Institute of Technology brought an 11-segment lamprey
model.
Figure 4.19: Oscillator circuit
The Telluride robot was built using an oscillator and motor attached to a metal ball in each
segment. The oscillator used the configuration in Figure 4.19 with 2 coupled Schmitt triggers,
capacitors, and 1 or more resistors. The resistor value(s) determine the frequency of oscillation.
Two more Schmitt triggers provided the drive to the single motor on the segment. The motors were
scavenged from Macintoshes (floppy eject motors). The metal ball provided contact to the surface;
the motor caused the ball to move back and forth as the oscillator changed phases. See Figure 4.20
for a photo of the robot.
The autonomous robot lamprey project was an attempt to mimic the rough behavior of a planar
lamprey morphology using constrained, non-linear oscillator control. The robot device used stan-
dard TTL control, 47% efficient 216:1 pancake motors driven by standard nervous net (Nv) control
boards, and was held together with silver solder, copper wire (for malleability) and superglue. It had
nine segments shared over eight motors with on-board power, signal generators, and passive visual
sensors in its “head”. The device was built at the workshop, primarily under direction from Avis
Cohen to insure reasonable structural accuracy, as far as could be managed with the mechanical
compromises necessary for an artificial creature.
38


Figure 4.20: Lamprey robot
The first experiment was a determination of the frequency as a function of the segment resistance,
using the cross-connection configuration with all other segments disabled. The result was
f·R = 5.1 x 10^6 with less than 5% variation among the segments, where f is the frequency of
oscillation and R is the resistance. The second experiment examined the effect of changing coupling
strength from
segment to segment. With ascending coupling stronger than descending coupling, the Telluride
lamprey robot exhibited an ascending traveling wave in segments 5,6,7, and 8. See Figure 4.21.
Unfortunately, the robot developed some circuit problems in the head segment and segments 3
and 4 and never worked well enough again to get data while we were at the workshop. Further
experiments were planned to examine the entrainment from an external signal, coupling the two
lamprey robots, and further experiments in coupling strength.
The DeWeerth group from Georgia Tech brought a biologically inspired lamprey model to
compare to the Telluride robot. The DeWeerth lamprey consists of 11 nearest-neighbor-coupled
segments, where each segment contains a pair of Morris-Lecar type silicon neurons that are
reciprocally inhibited. The intersegmental coupling consists of ascending and descending excitatory
and inhibitory coupling.
The control experiment consisted of using excitatory and inhibitory coupling with the descend-
ing coupling being dominant. With this configuration, phase locking occurs with a total phase lag
of about 100 degrees along the chain. The phase delay between segments, however, is not regular
due to the mismatch of the individual segments.
From this control configuration, all coupling was removed and the oscillators were tuned to have
frequencies that varied less than 5% from each other. To keep this model similar to the Telluride
robot, only excitatory coupling was added. Four experiments were then done by sweeping excita-
tory coupling in different fashions and measuring the outputs of 8 of the 11 segmental oscillators.
The four experiments included: symmetric coupling, ascending coupling only, descending coupling
only, and coupling in both directions, but the descending coupling was dominant. In the symmetric
coupling experiment, as the coupling was increased the frequencies of the individual oscillators in
some segments were changed by as much as 25% but no phase locking occurred. In addition, if the
coupling was increased too much then the oscillations died completely. In the ascending coupling
only experiment, as the coupling strength was increased the oscillators began to phase lock until the
whole network became synchronous and in phase. There was no evidence of phase delays between
segments. The same effects were observed for the descending coupling only case. In the experiment
where the descending coupling was dominant, the end effect of increasing the coupling was depen-
dent upon the absolute level of the ascending coupling. If the ascending coupling was too large, the
network experienced oscillator death. However, if the ascending coupling was small enough then
[Plot: Descending Phase Lag; descending phase (334-352 degrees) against segment (6-8).]
Figure 4.21: Traveling wave
the network entered synchrony as in the case with only ascending or descending coupling. In either
case, phase delays between segments were not observed.
Figure 4.22 shows the frequency of oscillations for the 8 segments from which data was recorded.
The horizontal axis is the number of the recording channel where 1 is the head and 8 is the tail. The
leftmost bar at each point is the frequency for no coupling whereas the rightmost is for the strongest
coupling. The vertical axis represents the frequency. According to these plots, phase locking can
only occur if the frequencies of the oscillators are the same at any given coupling strength.
The work group leaders were Avis Cohen and Mark Tilden. The participants were Asli Arslan,
Elizabeth Brauer, David Nicholson, Nici Schraudolph, Mario Simoni, Theron Stanford, Steven De-
Weerth, and Thelma Williams. The group met twice per week to coordinate activities and for techni-
cal discussions by Mark Tilden (locomotion principles in various creatures such as E. coli bacteria,
starfish, and leeches) and Thelma Williams (coupled oscillators). Group members assembled the
Telluride robot under direction from Mark Tilden and conducted experiments.
[Plots, four panels: SYMMETRIC COUPLING; ASCENDING COUPLING ONLY; DESCENDING COUPLING ONLY;
DESCENDING COUPLING DOMINANT. Each shows frequency (0-20 Hz) against recording channel (1-8).]
Figure 4.22: Oscillator data
Chapter 5
Auditory Processing
5.1 Introduction
(David Klein)
This section summarizes the activities of the auditory project group. Auditory project group
members worked on projects encompassing a broad range of problems faced by hearing systems
in real-world environments. These projects included the analysis of informative features in natu-
ral acoustic signals, peripheral auditory processing using electronic VLSI cochleas, fast and robust
computation of single sound-source lateral angle using two acoustical sensors, and identification of
speech in a real noisy environment. Additionally, group members were responsible for the perva-
siveness of auditory themes in other project groups. Collaborative activities resulted in projects such
as the production of spectro-temporal receptive fields using projective-field mapping of address-
events from a 1-D sender array, dynamically learning a tonotopic-like address-event mapping from
a 1-D sender array, and directing a binaural robot towards a sound-emitting target using electronic
cochleas and a fast azimuth estimation routine.
The 1998 Neuromorphic Engineering Workshop proved to be an effective environment for stu-
dents and researchers to become acquainted with and to work on projects inspired by what is known
about biological auditory processing. With a substantial series of auditory-oriented lectures as a
backdrop, members of the auditory project group worked on a number of problems relevant to hearing
systems in real, noisy environments. As a result, important steps were made towards implementing
systems with as few as two acoustical sensors and the ability to navigate and communicate using
auditory sense data either exclusively or as a supplement to other sensory modalities.
Members of the auditory project group proper were:
Andreas Andreou, Phil Brown, Didier Depireux, Mete Erturk, Reid Harrison, Dave
Hillis, Tim Horiuchi, David Klein, Shihab Shamma, Jonathan Simon, Leslie Smith,
Nino Srour, Andre van Schaik, and Thomas Zahn.
The activities of the auditory project group can be roughly arranged into three categories:
1. Projects which were concerned mainly with peripheral aspects of acoustical signal processing.
These projects covered issues such as the acoustical input to the system, cochlear filtering,
and peripheral auditory system pre-processing.
2. Auditory localization projects.
These projects involved taking the output of a peripheral auditory system and computing the
location of sound sources in a noisy environment.
3. Projects which were concerned with extracting the identity of sound sources, again using the
information provided by an auditory front-end process.
Additionally, there were some fruitful collaborations between the auditory project group and
other project groups. Most productive among these were the collaborations with the Address-Event
Representation (AER) project group, the behaving robots project group, and the visual saliency
project group.
5.2 Peripheral Auditory Processing
In order for a hearing system to use information present in its acoustic environment for relatively
complex tasks such as orientation and recognition, it is necessary for a front-end system to rapidly
extract this information from the raw pressure waves impinging on its sensors and to present it
to more central auditory processes in a compact and clean form. In mammals, these operations
are presumably performed by the ear and the neurons in the cochlear nucleus (CN). In order to
build artificial hearing systems, it is beneficial to both understand what information is available in a
real acoustical environment and to understand the processing that is occurring in the cochleas and
cochlear nuclei of animals.
Towards this end, auditory project group members had access to silicon cochleas, implemented
with analog VLSI technology, and cochlear interface boards, implemented with discrete components
and Field-Programmable Gate Arrays (FPGAs). Additionally, members had access to software sim-
ulations of peripheral auditory processing in MATLAB. In the following sub-sections, the projects
performed with these tools at hand are described.
5.2.1 Analysis of informative features in natural acoustic signals
(Reid Harrison, Dave Hillis, Timmer Horiuchi, David Klein, Shihab Shamma, and Leslie Smith)
Prior to constructing a hearing system, it is helpful to know what kind of information is present
in natural acoustic signals. Ultimately, the features which are deemed informative will depend on
the task which is to be performed. The signals may be acquired passively, e.g., by listening, or
actively, as they are in echolocation.
Members of this project group used both traditional signal processing tools as well as models of
peripheral auditory system function to examine and characterize features present in acoustic signals
under the context of several different tasks, such as locating and identifying sound sources. Pre-
recorded speech and musical sounds were examined, as were echolocation pulses generated and
recorded in-house. For example, Figure 5.1 shows the output of a peripheral auditory processing
model implemented in MATLAB with a speech segment provided as input. Project members were
able to identify the different features present in this representation, such as the harmonic and formant
structures. By manipulating these features and inverting the representation back to an acoustic
waveform, project members were able to experience how the different features affect the perception
of the sound.
5.2.2 Auditory processing with electronic cochlea chips
(Andreas Andreou, Phil Brown, Mete Erturk, Andre van Schaik, and Leslie Smith)
It was obvious from the above analysis of natural acoustic signals that many interesting signals
are characterized by a broad and heavily modulated spectral content which is constantly changing
in time (see Figure 5.1).

Figure 5.1: Representation of a speech segment “come home right away” at the output of
the cochlear nucleus stage of the NSLTools MATLAB Auditory Toolbox. Rows of this “auditory
spectrogram” indicate the time-varying activity of neurons tuned to different narrow
frequency bands. Darker areas indicate higher levels of activity.

Thus, it is evidently useful to have access to a continuous measure of the
spectral energy of the received signal. Most animals do have access to this information; in the inner
ear, a cochlea or an analogous structure transduces the acoustic waveform into a topographically
organized pattern of excitation of a population of auditory nerve (AN) fibers. In other words, differ-
ent frequencies in the signal excite different groups of AN fibers. Conversely, a given AN fiber will
only be excited by a subset of the audible frequencies.
The cochlea is hence well approximated functionally as a bank of band-pass filters. However,
implementing the massive filtering operation of the cochlea digitally is a computationally inten-
sive and time consuming task. Thus, it is beneficial to implement such filtering with analog VLSI
circuits. These circuits can be made compact, low-power, and operate in real time.
The members of this project group had access to two analog VLSI cochleas brought to the
workshop by Andre van Schaik and Andreas Andreou. Project members familiarized themselves
with the design of the cochleas and were able to adjust parameters such as the bandwidth of the
filters and the total bandwidth of the filter bank. The outputs of the cochleas were monitored and
recorded as different sounds were presented to the chips. Ultimately, the cochleas were used in an
auditory localization system, to be detailed below. Project members were also familiarized with the
problems associated with implementing the filtering with analog VLSI technology, such as defective
channels and mismatches between different cochleas.
5.2.3 Hardware realization of signal normalization, noise reduction, and feature enhancement on the output of a cochlear chip
(Phil Brown, Mete Erturk, Shihab Shamma, and Jonathan Simon)
By processing incoming signals with silicon cochleas, it is presumed that the informative fea-
tures in the signals are more effectively evaluated in the spectro-temporal domain. However, the
signals at the output of the cochleas are still far from ideal: they suffer from environmental noise
corruption, offset mismatch across channels and across cochleas, and sometimes altogether defective
channels.
To some extent, biological hearing systems must also cope with such problems, and there ap-
pear to be mechanisms present in the peripheral auditory system to minimize noise and channel
mismatches while enhancing informative features. These mechanisms were reduced to a series of
simple functional descriptions and implemented in hardware with discrete components and FPGAs,
as schematicized in Figure 5.2. These processes were implemented on printed circuit boards and
served as interfaces between the cochleas and a higher-level processor; The data at the output of the
boards were read into a computer, on which higher-level computations, such as those described in
sections below, were performed. Project members were responsible for building the cochlear inter-
face boards and were able to exploit the flexibility of the board design to optimize the performance
of the boards for a given cochlea and a given task.
Figure 5.2: Block diagram of the cochlear interface board
5.3 Auditory Localization
Extracting the position of sound sources in a noisy environment is a well-studied but still insuffi-
ciently solved problem. For this reason, it is beneficial to study how biological hearing systems are
able to successfully perform this task.
There are several cues which can be exploited by a hearing system with two acoustical sensors
in order to estimate the position of a sound source. Important among these are the inter-aural
time difference (ITD) and inter-aural level difference (ILD) which systematically change with the
position of the sound source. Additionally, the pinnae of mammals shape the spectral characteristics
of a received sound in a manner which is sound-source position specific.
In mammals, a sub-population of neurons in the cochlear nucleus (CN) of each hemisphere projects bilaterally to
the Superior Olivary Complex (SOC) where the first computations of sound-source position are pre-
sumed to take place. Analogously, members of the auditory project group had access to the outputs
of peripheral processes made available by the projects elaborated above, and could implement algo-
rithms similar to those presumed to reside in the SOC. Additionally, project group members were
able to compare different algorithms in order to assess which gave the best performance and which
could be most easily implemented (see also Section 4.4).
5.3.1 Computation of sound-source lateral angle by a binaural cross-correlation network

(Phil Brown, Jonathan Simon, and Thomas Zahn)
The lateral (azimuthal) angle of a sound source can be determined by the ITD of the sound
received in two spatially separated acoustical sensors. Not surprisingly, there are neurons in the SOC
which are tuned to specific ITD values. However, the method by which these ITDs are computed
by the system is still largely unknown.
One way to compute ITD is to perform a cross-correlation between the signals received at each
ear. The cross-correlation function $C(\tau)$ is defined here as
$$C(\tau) = \int s_c(t)\, s_i(t - \tau)\, dt$$
where $s_c$ and $s_i$ are the signals received at the contra-lateral and ipsi-lateral ears, respectively. Ideally, the lag-location of the peak of the cross-correlation function would correspond to the ITD.
The estimate of the ITD is made more robust after the frequency analysis of the cochlea. Now,
an estimate of the ITD can be obtained in each frequency band. The final estimate can be achieved
by simply averaging the results across all frequencies. However, more complicated schemes may
be employed if, for example, there are multiple sources with different spectral emissions.
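A minimal MATLAB sketch of this per-band averaging scheme follows. Here yc and yi stand for the m-by-n filter-bank outputs of the contra- and ipsi-lateral ears; for illustration they are synthesized as noise with an 8-sample (0.5 ms) interaural delay, and the 1 ms lag range is an arbitrary choice.

    % Per-band cross-correlation ITD estimate.  yc and yi are illustrative
    % filter-bank outputs; the ipsi-lateral signal is delayed by 8 samples.
    fs = 16000;
    yc = randn(32, 4000);
    yi = circshift(yc, [0 8]);                  % simulated 0.5 ms ITD
    maxlag = round(1e-3 * fs);                  % search lags up to +/- 1 ms
    lags = -maxlag:maxlag;
    C = zeros(1, numel(lags));
    for k = 1:size(yc, 1)                       % accumulate over frequency bands
        for j = 1:numel(lags)                   % C(tau) = sum_t yc(t) yi(t - tau)
            C(j) = C(j) + sum(yc(k, maxlag+1:end-maxlag) .* ...
                              yi(k, maxlag+1-lags(j):end-maxlag-lags(j)));
        end
    end
    [~, best] = max(C);                         % lag of the averaged peak
    itd = lags(best) / fs;                      % recovers -0.5 ms here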
Project members implemented cross-correlation networks in two ways. The first was by building
a network of model neurons in software, as schematized in Figure 5.3. The output of each cochlear
filter is passed through a series of delay lines. In the center of the network, model neurons, each
tuned to a specific frequency, receive input from a pair of delay stations, one from each ear. Thus,
depending on which delay stations a neuron receives input from, it will be maximally excited when
the relative delay between the two ears is at a certain value. It was therefore possible to ascertain
the ITD by monitoring the pattern of activity of this central population of neurons.
Because the above network was implemented in software, including the cochlear filtering, there
was a severe limit to the amount of data the network could process and still work in real time. For
this reason an onset detector was implemented before the filtering. When an onset was detected,
signaled by a sharp rise in the signal envelope, a short (23 ms) data sequence was processed by the network. Unsurprisingly, the network performed best for transient stimuli, e.g., hand-claps.
Alternatively, project members tried more traditional signal processing techniques of computing
cross-correlation between the outputs of the two cochleas. The cross-correlation was implemented
in MATLAB using the data read in from the cochlear interface boards. Since the filtering was
performed instantaneously by the cochlear chips, and the cross-correlation operation was faster, it
was possible to process larger data sequences (500 ms) than in the previous scheme. However, the processing time was approximately equal to the data duration. Thus, the system was most
sensitive to continuous stimuli, with transient stimuli being missed about half of the time. An
example of the output of the cross-correlation algorithm for one frame of data is shown in Figure
5.6.
Figure 5.3: A cross-correlation network implemented with model neurons for estimating
ITDs. See text for an explanation.
Figure 5.4 shows the performance of the non-neuronal cross-correlation network operating
on the outputs of the cochlear chips with a single brown noise source as input. Due to mismatches
between the cochleas, there are systematic errors in the estimated azimuthal angle, especially at
the larger angles (i.e., larger ITDs). However, the estimated vs. real angle curve is monotonic
and saturating, which makes these errors easier to cope with in a control task, as evidenced in the
collaboration with the behaving robots group.
5.3.2 Computation of sound-source lateral angle by a stereausis network
(Phil Brown, Didier Depireux, David Klein, and Jonathan Simon)
Despite the simple concepts involved in implementing a cross-correlation network, there is little evidence that these types of networks, employing massive numbers of precisely arrayed delay lines, are actually implemented in real biological systems. A somewhat more plausible scheme is
a “stereausis” network which compares the spatial disparity of the AN excitation pattern from each
ear. Because the cochlea itself acts as a delay line from the basal to the apical end, central neurons
which compare inputs from AN fibers coming from both ears effectively also act as ITD detectors
within the frequency band to which they are tuned. Such a network is schematized in Figure 5.5.
Figure 5.4: Performance of the cross-correlation network for estimating ITDs
Project members implemented stereausis networks in two ways, both of which used the outputs
of the cochlear chips as the inputs to the network. The simplest implementation involved matrix
multiplying the data coming from the two ears (see Figure 5.6). The rows of each matrix were the outputs of the cochlear filter bands ($m = 32$), and the columns were the time series ($n = 2000$). The matrices were multiplied such that the result was $m$ by $m$. Matrix elements along
the main diagonal in essence signal the presence of identical patterns in each ear. Elements off of
the main diagonal signify a spatially (and hence temporally) shifted pattern in one ear relative to
the other. The computation of the matrix multiplication in MATLAB was found to be significantly
faster and algorithmically simpler than the previously implemented cross-correlation computation.
Performance of this stereausis network is shown in Figure 5.7.
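The core of this computation is small enough to show directly. In the MATLAB sketch below, L and R stand for the two ears' cochlear output matrices; for illustration, R is L shifted by two bands to simulate an interaural disparity. This is only an instance of the matrix form described above, not the project's actual code.

    % Stereausis by matrix multiplication (m = 32 bands, n = 2000 samples).
    L = randn(32, 2000);
    R = circshift(L, [2 0]);                    % simulated two-band disparity
    S = L * R';                                 % m-by-m stereausis matrix
    d = -(size(S,1)-1):(size(S,1)-1);           % diagonal number ~ disparity
    e = zeros(size(d));
    for k = 1:numel(d)
        e(k) = mean(diag(S, d(k)));             % average activity per diagonal
    end
    [~, best] = max(e);
    disparity = d(best);                        % recovers a disparity of 2 here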
A stereausis network was also implemented using model linear threshold neurons in a neuronal
network modeling package called Xmorph, provided by Paul Verschure. A central 2-D grid of
neurons received binaural inputs from the cochlear interface boards in a manner described above.
Again, the center diagonal elements of the network signaled identical patterns in each ear, while off-
center diagonal elements signaled non-zero ITD. In Figure 5.8, the activity of the diagonals of the
network over a period of approximately 200 milliseconds is shown, for a zero-ITD narrow-band input. Notice that the most consistent activation is along the central diagonal. This is made more
plain by averaging the activity over the entire duration, also shown in Figure 5.8. Peaks off diagonal
are due to the narrow-band nature of the input signal.
5.4 Acoustic Pattern Recognition
Along with the estimation of sound-source location, it may be necessary to determine the identity of the source and possibly to communicate with the source. These tasks require classifying the features
at the output of the cochlea in a fast and efficient manner. The classification demands a compact
spectro-temporal feature decomposition, which is presumed to occur in the primary auditory cortex
(AI) of mammals.
Figure 5.5: A stereausis network used to estimate ITD. See text for an explanation.
The projects concerned with acoustic pattern recognition took the output of a peripheral auditory
process (MATLAB model) operating on noisy speech signals and attempted to classify patterns of
informative features in a fashion similar to that of AI. Towards this end, it was helpful to review
current speech identification and recognition systems, how they work, and how
they may be improved.
5.4.1 Identification of speech in real noisy environments using a model of auditory cortical processing

(Dave Hillis, David Klein, Shihab Shamma, and Nino Srour)
Project members had access to recordings in which there were invariably multiple speakers in
an extremely noisy environment. Despite the noise, human listeners are able to easily and reliably
detect the presence or absence of the vocalizations. The goal of this project was to develop an
algorithm by which a machine could detect the presence or absence of the speech as reliably.
Towards this end, a software model of the auditory system, provided by Shihab Shamma, was
employed. The software model spanned the processing from the cochlea to the cortex. Especially
useful for this task was the cortical processing stage, in which the different scales of spectral and
temporal modulations present in the signal are made explicit. By selectively filtering the signal at the cortical level, spectral and temporal modulations not relevant to human speech could in essence be ignored. It was found that by filtering the signal in this way, the voicing previously embedded in the noise was enhanced.
Figure 5.6: Frames of data from both cochlear interface boards (i.e., “ears”) are multiplied to produce a stereausis representation (shown bottom left). The overlaid dashed line indicates the main diagonal. On the bottom right is the result of the cross-correlation between the two ears. The overlaid curve is the data averaged over all channels. Both representations indicate that the sound source is on the left side of the head.
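A rough MATLAB sketch of the modulation filtering described above is given below; it operates on a stand-in auditory spectrogram Y, and the frame rate, the 8-bands-per-octave assumption, and the cutoffs (16 Hz temporal, 2 cycles/octave spectral) are illustrative substitutes for the actual model's parameters.

    % Modulation filtering of an auditory spectrogram in the 2-D Fourier
    % domain.  Y is a stand-in spectrogram (32 bands by 500 frames).
    Y = rand(32, 500);
    framerate = 125;                            % spectrogram frames per second
    [m, n] = size(Y);
    F = fft2(Y);
    ft  = (0:n-1) * framerate / n;              % temporal modulation axis (Hz)
    fsp = (0:m-1) * 8 / m;                      % spectral modulation (cyc/oct)
    keep_t = min(ft, framerate - ft) < 16;      % keep slow temporal rates
    keep_s = min(fsp, 8 - fsp) < 2;             % keep broad spectral scales
    F = F .* (double(keep_s)' * double(keep_t));% zero out the rest (two-sided)
    Yf = real(ifft2(F));                        % modulation-filtered spectrogram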
5.4.2 Review of current prospects and limitations in speech recognition systems
(Andreas Andreou and David Klein)
Project members reviewed several current speech recognition systems which employ processing
stages inspired by what is known about human speech production and recognition systems. Speech
recognition systems typically employ a peripheral stage in which the input signal is finally reduced
to a short series of spectral features. The output of the peripheral processing is then used as the
input to a feature classifier which, after a training period, is able to classify the series of features as
words. Much of the discussion was concerned with the final stages of the peripheral processing, in
which the feature space is defined. Alternative feature representations were discussed in the light of
Figure 5.7: Evaluation of the performance of the stereausis network.
Figure 5.8: Behavior of a stereausis network implemented in Xmorph, excited by a zero-
ITD narrow-band input. (a) Output of the network organized by diagonal number. The
central diagonal (0) signals zero ITD. (b) Time average of the activity shown in (a). Side-
peaks are due to the narrow-band nature of the input.
what is known about the spectral analysis of the mammalian auditory cortex.
5.5 Collaborative Efforts
The collaborations between the auditory project groups and the other project groups fell into two
categories. First, there were collaborations which focused on the hardware realization of some of
the auditory processes which had previously been relegated to software. The other collaborations were
concerned with fusing the output of the auditory system with other sensory and/or motor systems
with the goal of accomplishing some task, e.g., approaching salient objects.
Details of the collaborations will not be given here; the projects are only summarized. For additional detail, please consult the progress reports of the project groups of which these projects are considered a part.
5.5.1 Production of spectro-temporal receptive fields using projective-field mapping of address-events from a 1-D sender array

(Timmer Horiuchi and David Klein)
This project was performed as part of the AER project group. Due to the absence of a working
cochlear chip with address-event outputs, it was necessary to use a 1-D retina with address-event
outputs as a substitute for this project. Patterns of excitation along the “cochlea” were substituted
with visual stimuli. For example, sinusoidal spectral profiles were simulated by focusing images of
sinusoidal gratings onto the focal plane of the chip.
The goal of this project was to produce meaningful spectro-temporal receptive fields in a re-
ceiver chip. This was to be accomplished by manipulating the address projections from the sender
to the receiver. A given sender address could be projected to a number of receiver neurons at dif-
ferent times. Received addresses were either excitatory or inhibitory. Excitatory addresses were to
increase the stored charge at the receiver neuron by one unit. Inhibitory addresses prevented the in-
crease of stored charge for a short time by blocking the reception of other excitatory addresses. The
receiver neurons were leaky, so that the charge would continually decrease if no additional charge
was added. If the stored charge on a given neuron exceeded a set threshold, that neuron was signaled
as “active”. The patterns in space and time with which the addresses were projected were identical
for all sender neurons and determined which spatio-temporal patterns on the sender would activate
neurons at the receiver. Actual data from the 1-D retina chip was used to simulate the projective
field mappings in MATLAB.
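The receiver model just described can be sketched in a few lines of MATLAB under simplifying assumptions. Here ev (rows of [time, sender address]) and proj (per-address rows of [target, delay, sign]) are hypothetical example data, and deliveries are assumed to arrive in time order.

    % Event-driven sketch of the projective-field receiver.
    ev   = [0.001 1; 0.002 1; 0.003 2];         % hypothetical sender events
    proj = {[3 0 1; 4 0.001 -1], [4 0 1]};      % hypothetical projective fields
    nrec = 64; leak = 50; thresh = 5; tblock = 0.005;
    charge = zeros(1, nrec); tlast = zeros(1, nrec);
    blocked = zeros(1, nrec);                   % end time of inhibitory block
    for i = 1:size(ev, 1)
        t = ev(i, 1); a = ev(i, 2);
        for p = proj{a}'                        % each projected address-event
            tgt = p(1); ta = t + p(2);          % arrival time at the target
            charge(tgt) = max(0, charge(tgt) - leak * (ta - tlast(tgt)));
            tlast(tgt) = ta;                    % linear leak since last update
            if p(3) > 0 && ta >= blocked(tgt)
                charge(tgt) = charge(tgt) + 1;  % excitatory: one charge unit
            elseif p(3) < 0
                blocked(tgt) = ta + tblock;     % inhibitory: block excitation
            end
            if charge(tgt) > thresh
                fprintf('neuron %d active at t = %.3f s\n', tgt, ta);
                charge(tgt) = 0;
            end
        end
    end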
5.5.2 Directing a binaural robot towards a sound-emitting target
(Phil Brown, Mete Erturk, David Klein, Paul Verschure, and Thomas Zahn)
This project resulted from a collaboration between the auditory project group and the behaving
robots project group. The two localization projects, detailed above, were alternatively used as the
“brains” of the Koala robots. The task performed by the robots was simply to locate and approach
sound sources. Both robots were equipped with a pair of small microphones as their only sensors
(on the second robot, the microphones were embedded in an attached dummy head as depicted in
Figure 5.9). Typically, the results of the lateral-angle estimation were mapped with some gain to a
turning velocity or turning duration. Due to the differences between the two localization algorithms,
one robot would approach transient targets, e.g., hand-claps, while the other robot would approach
targets which continually emitted sounds, e.g., music or noise. In the second robot, both the cross-
correlation and stereausis methods of estimating sound-source lateral angle were employed. In
all cases, the robots performed admirably. Videos of the robots in action were recorded and are
available.
Figure 5.9: Photo of robot with binaural head orienting itself towards a sound-emitting
speaker
5.5.3 An auditory complement to visual saliency
(David Klein, Max Rauschaber, and Paul Verschure)
This project resulted from a collaboration between the auditory project group and the visual
saliency project group. A motor program to control a Khepera robot to move towards visually
salient objects was implemented. The goal was to provide an auditory complement to the visual
saliency maps. For example, sound sources could be localized, and visual targets at those locations
could be enhanced. More intense auditory saliency could be assigned to objects emitting louder
sounds and/or sounds which are changing more rapidly.
5.5.4 Making Pinna Casts
(Timmer Horiuchi, Andre Van Schaik)
In an experiment to measure the spectro-temporal transfer characteristics of human pinnae, we
began an effort to make plaster casts of Timmer’s ears. Using a “body-parts” casting kit, left and
right pinnae were successfully cast; however, some of the details of the concha and ear canal were
difficult to preserve due to the protective ear plugs that were worn to protect the ear drums. Unfor-
tunately, we ran out of time and the transfer characteristics have not yet been measured because the
detailed carving required to recreate the concha and ear canal has not been finished. We hope to
finish some of the measurements in Prof. Shamma’s laboratory in the coming months.
5.6 Retro- and Pro-spectives
Auditory project members were largely successful in their endeavors to study and implement hear-
ing systems with the ability to locate and identify relatively simple sounds in noisy environments.
Part of the success can be attributed to the work done prior to the workshop. Cochleas and
cochlear interface boards were fabricated and tested and some projects were conceived before the
start of the workshop. Additionally, the group benefited from a relatively strong showing from
the auditory community. There were more lectures and personnel present than there had been in past years. This aided in the development of additional projects that were conceived during the workshop. This year also saw a relative abundance of auditory themes pervading other project groups’ projects.
Figure 5.10: Photo of Timmer’s left pinna. Using a plaster-cast system, both pinnae were cast with the intention of recording their spectro-temporal characteristics.
Although activity on some of the projects rose and fell quickly and other projects were merely
discussions, it is certainly encouraging to reflect on the connections established between all of the
projects detailed in this report, both within the auditory project group and in collaboration with
other groups. The different projects spanned signal theory, analog filtering, feature enhancement
and noise suppression, tonotopic projection, spectro-temporal receptive field generation, sound lo-
calization, feature classification, multi-sensory fusion, motor action, and more. As work on these
projects continues both inside and outside the workshop, the project groups are likely to make significant
strides towards implementing a complex and effective biologically inspired auditory system.
Many project group members are committed to continuing the work started at the workshop.
Many of the projects will be continued by collaborations between the members of University of
Maryland, College Park (UMCP) and Johns Hopkins University (JHU): Pamela Abshire, Andreas
Andreou, Phil Brown, Marc Cohen, Didier Depireux, Mete Erturk, Tim Horiuchi, David Klein,
Shihab Shamma, and Jonathan Simon. Meetings are expected to be held once a month and the
following projects will be continued: Auditory Localization, Speech Recognition, AER Production
of Spectro-temporal Receptive Fields, and AER Production of Tonotopic Mapping. Discourse be-
tween the UMCP group and Thomas Zahn has continued, and continued work between the UMCP
group and Paul Verschure of University of Zurich is likely.
Primary among the topics not sufficiently addressed this year is the problem of dealing with
multiple sound sources. The algorithms implemented in the workshop were structured so that the
loudest or most salient sound source wins. This is an insufficient solution in general, as there
are likely to be multiple sources of interest to a real system. Additionally, a particular source
of interest may be partially masked by louder distracting sources. The problem of disentangling
sound sources will necessitate integrating source separation, saliency, recognition, localization, and
attention systems in future workshops.
Chapter 6
Address Event Representation
6.1 Introduction
(Kwabena Boahen, Timmer Horiuchi)
The address-event workgroup focused its energy on systems-level projects that utilized existing
chips for interface to either the Koala robot platform or to the laboratory PCs. This year we had
projects involving both 1-D and 2-D retinas as well as development in the serial AER protocol.
More of the focus this year was on the interface into other systems and how the data would be used,
in contrast to previous years where the focus tended to settle on making the chips function properly
under non-ideal conditions. It should be noted that the Telluride environment can be a notably
harsh one, leaving the experimenter to deal with wildly changing conditions in humidity (from rain
to wicked, static-generating dryness), temperature, and lighting (strong sunlight to dim, flickery
fluorescent bulbs). Projects this year ranged from multi-chip 1-D stereo to 2-D AER remapping.
6.2 AER-based 1-D Stereo Work Group
(Alan Stocker, Yuri Lopez De Meneses, Charles Wilson, Tim Horiuchi, Alberto Pesavento)
In this group, our goal was to combine two 1-D AER transmitting vision chips (designed by Tim
Horiuchi) in order to create a 1-D stereo system. The group split into two subgroups, one which
focused on stereo algorithms and one which assembled and tested the hardware interface.
Tim Horiuchi had brought one working vision board that sent spikes to a PC parallel-port for
the recording of data. A second board was built and tuned to provide outputs similar to the first
board. The two AER output streams were merged using a PIC microcontroller (Microchip Inc) and
new software was written to accept the stereo data. A set of recordings of various static and moving
stimuli was made for use in the software subgroup’s simulation. Figure 6.1 shows the two boards
viewing a pair of low-contrast targets.
Alan Stocker and Yuri Lopez De Meneses worked on a software model of stereo-correspondence
proposed by Misha Mahowald. They first worked with data from a one-dimensional retina mounted on a Khepera robot and then later switched to data acquired from the AER-based vision system.
In this model of stereopsis, multiple binocular images from different feature detector arrays are
fed into correlator arrays which compute the stereo disparity matches for each cyclopean angle.
For each cyclopean angle there is an analog cell which receives activity from each of the disparity cells in its column and reports the weighted-average disparity for that column (cyclopean angle). There are
two mechanisms that are used to resolve conflicts when there are several possible disparities at one
location: positive feedback and a winner-take-all (WTA) circuit.
With positive feedback, the closer the position (disparity) of a correlator cell is to the analog cell’s output value (average), the higher the feedback gain. By introducing one inhibitory cell per
column, the positive feedback and the WTA mechanism ensure that false matches get suppressed.
The analog cells are necessary to enable interactions with neighboring correlator columns. This is
essential to fulfill the constraint of smooth disparity changes across space. The positive feedback
and the analog cell also make sure that no strong outliers win by performing a centroid computation.
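A toy MATLAB sketch of these cooperative dynamics is given below. The array A stands for raw correlator activity (disparity by cyclopean angle) with a true disparity planted in one row; the gain profile, inhibition strength, and iteration count are illustrative choices and not the parameters of Mahowald's model (neighbor-column interactions are omitted for brevity).

    % Positive feedback toward each column's centroid disparity plus a soft
    % winner-take-all within each column suppresses false matches.
    nd = 9; nc = 40;
    A = rand(nd, nc); A(5, :) = A(5, :) + 1;    % true disparity in row 5
    dvals = (1:nd)';
    for it = 1:20
        avg = (dvals' * A) ./ (sum(A, 1) + eps);       % analog cell: centroid
        dist = dvals * ones(1, nc) - ones(nd, 1) * avg;
        A = A .* (1 + 0.5 ./ (1 + dist.^2));           % gain peaks at centroid
        A = max(A - 0.3 * ones(nd, 1) * mean(A, 1), 0);% per-column inhibition
        A = A ./ (ones(nd, 1) * (max(A, [], 1) + eps));% winner-take-all pressure
    end
    [~, match] = max(A, [], 1);                 % surviving disparity per column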
Figure 6.1: Photo of the stereo vision setup using two one-dimensional AER-based im-
agers. Each chip transmitted the location of spatial derivatives and motion signals at each
pixel location. These signals were transmitted to a computer using asynchronous spike
trains in a rate-coded manner.
6.3 Line-Following Robot Using an Address-Event Optical Sensor
(Philip Häfliger and Tim Horiuchi)
The principal aim of this project was to interface an optical aVLSI sensor chip with address-
event (AER) output to a 16MHz 68C331 Motorola processor that controls the Koala robot (K-Team,
Lausanne). The Koala was then programmed to perform a simple line following task.
Address-event representation (AER) is becoming a popular method to interconnect spiking hardware devices. This time-multiplexing approach is especially well matched to neural models which require a large number of sparsely used point-to-point connections.
Accordingly, the optical sensor chip used here produces AER output. It computes the spatial derivative of the light intensity on a horizontal line; that is, it detects vertical edges. Its resolution is 40 pixels. It also represents the relative motion of the edges on two additional 40-pixel one-dimensional arrays, each coding one direction. The latter two arrays were not used in this project.
Performance in line following can be improved by using adaptive aVLSI optical sensors that are able to reliably detect edges in various lighting conditions and so spare the processor computationally expensive preprocessing. This was already demonstrated at last year’s workshop. In
order to test the applicability of the AER communication in this configuration, we reproduced the
experiment that had been done using an optical chip with scanned analog output.
The AER interface on Koala was realized using interrupt driven handshaking. The optical chip’s
request line triggers an interrupt on Koala, which then reads in the address on the AER bus and
acknowledges the reception. Following the protocol the chip resets its request line, then Koala
resets its acknowledge signal. The address points to a position in the optical array and the value at
[Figure 6.2 panels: (1) spike raster, address vs. time [s], from recording binmot5.std; (2) left (L) and right (R) retinal images; (3) correlator arrays plotted as disparity vs. cyclopean angle.]
Figure 6.2: (1) Recordings from the binocular system of 1D AER retinas. The address
space is divided for the two retinas, each providing three different features for each image
location: spatial-intensity gradient, rightward motion and leftward motion. (2) Raw left and
right retinal images, representing the temporally integrated spike train (over 500 msec: 9.0 sec < t < 9.5 sec) from the recording in panel (1). Note that the input dimension matches
the number of features, 3x40. Since the spike frequency of the motion sensitive neurons
is relatively low, the motion features are almost not visible in the grayscale representation.
(3) Here, activity of the correlator array on the left shows cells responding to particular
disparities and cyclopean angles due to the retinal input. On the left, local interconnections
are disabled and false targets appear. On the right, the positive feedback to the disparity
cells (not shown) as well as the winner-take-all along columns of the same cyclopean angle and the competition between winning cells in columns of different cyclopean angles take
place. As a result, only the true target matches remain, indicating two objects lying almost
in the same, slightly positive, disparity plane.
[Block diagram: AER optical aVLSI chip → AER interface → WTA → PID controller → motors → environment, closed through the KOALA processor.]
Figure 6.3: System block diagram of the tracking system. On the left, a 1-D AER-based
vision chip feeds edge and motion detection information to the Koala robot platform via
address-events. Software on-board the Koala performs tracking and directs the motors.
that position gets incremented. After that the interrupt handler gives up control.
At regular time intervals, interrupts are disabled and another procedure reads the address his-
togram, updates the motor output, resets the optical array and enables interrupts again. To compute
the new motor settings, a winner-take-all algorithm with hysteresis is run over the address histogram. A PID feedback controller then computes the motor settings to correct deviations from the center. The feedback circuit is depicted in Figure 6.3.
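The per-cycle computation can be sketched as follows in MATLAB; accumulate_events and set_turn_rate are hypothetical stand-ins for the Koala-side routines, and the gains, hysteresis margin, and 40-pixel geometry are illustrative.

    % Per-cycle control: winner-take-all with hysteresis over the address
    % histogram, then a PID correction of the deviation from image center.
    kp = 2.0; ki = 0.1; kd = 0.5; margin = 1.2;
    winner = 0; ierr = 0; preverr = 0;
    while true
        h = accumulate_events();                % hypothetical: 1x40 histogram
        [peak, cand] = max(h);
        if winner == 0 || peak > margin * h(winner)
            winner = cand;                      % switch only on a clear win
        end
        err = winner - 20;                      % deviation from array center
        ierr = ierr + err;                      % integral term
        derr = err - preverr; preverr = err;    % derivative term
        set_turn_rate(kp*err + ki*ierr + kd*derr);  % hypothetical motor call
    end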
The robot performed well; we let it track a light gray cable on a gray carpet at various times of day and under various lighting conditions. The robot sometimes lost tracking in sharp curves. This can be blamed on the mounting of the optical chip on the Koala: the field of view was set too far ahead. In sharp curves the robot corrected its direction too early, and the angle between the cable and the robot’s observation line diminished substantially, which made the edge detection increasingly difficult, up to a point where it failed completely. That problem could be solved by focusing the optical chip on a line just in front of the robot. Shallow curves were followed reliably.
6.4 2D Address-Event Senders and Receivers: Implementing Direction-Selectivity and Orientation-Tuning

(Kwabena Boahen, Masahide Nomura, Eduardo Ros Vidal, and Rufin Van Rullen)
Description: Orientation-tuned and disparity-tuned binocular receptive fields were implemented
in previous workshops. This year, we sought to implement direction-selective receptive fields. A
new retinomorphic chip—developed by Kwabena Boahen—made it possible to implement this mo-
tion computation without using axonal or cortical delays. Thus, the receiver chips and projective-
field processors used in previous workshops proved adequate for the task. The motion algorithm,
the retinomorphic chip’s outputs, and the direction-selective cells’ responses are presented in Figure 6.4.
We implemented three of the four DS cell types using a 3x64x64 neuron address-event receiver board, with a pair of 5MIPS microcontrollers (Microchip PIC16X57) computing projective fields for each receiver chip in real-time. The microcontrollers simulated a virtual receptive field by copying
and offsetting address-events from the retinomorphic chip. They sent these remapped address-
events to the receiver chips, which generated excitatory post-synaptic potentials and performed
leaky integration (each chip has 64x64 diode-capacitor integrators). The responses of cells on the
three receiver chips were displayed on an RGB monitor. Each DS cell received inputs from 4x2x5
ganglion cells and its receptive field spanned 160 photoreceptors (there is a factor of 2 subsampling
at the ganglion cell level).
Our preliminary results show that this two-dimensional, multichip, motion architecture has
promise. They also demonstrate the flexibility and utility of virtual receptive and projective fields based on the address-event representation when the computational primitives are encoded as
spike trains. The system will be exhaustively tested, characterized, and improved over the coming
year at the University of Pennsylvania. We realized the need for faster projective field processors
and we plan to explore FPGA-based and custom-VLSI solutions.
In the second half of the workshop, we attempted to self-organize these orientation-tuned,
direction-selective receptive fields (i.e., learn the mapping from retina to cortex for these four cell types automatically). Since no cortical delays are required, we believed it would be possible to
achieve this simply by wiring together neurons that fire together. We designed and implemented a
simple axonal growth mechanism on the microcontrollers, loosely based on activity-generated dif-
fusible agents (e.g. nitric oxide) and chemotaxis. Unfortunately, we did not have enough time to
debug the code. We plan to debug and test the algorithm at the University of Pennsylvania and hope
to report success at next year’s workshop.
6.5 One-Dimensional AER-based Remapping Project
(Marc Cohen, Tim Edwards, Theron Stanford, Gert Cauwenberghs, Andreas Andreou, and Pamela Abshire)
In this project, we continued an effort begun at last year’s Telluride meeting (see “Adaptive Address-Event Router and Retinal Cortical Maps” in the 1997 report) to demonstrate an adaptive
routing mechanism for address-events between a sender and receiver, modeling the formation of
tonotopic maps from the cochlea up to auditory cortex or alternatively, topology-preserving maps
from retina to visual cortex. The goal was to dynamically reorganize the sender-to-receiver address
map, which was initially random, so that the final map preserved the spatial regularity of the sender
chip. The constraints of the re-mapping operation were that only the activations of the receiver chip
could be observed. Furthermore, only short sequences of activations could be held in memory prior
to making a re-mapping decision. Re-mapping algorithms were first tried in software and then an
attempt was made to implement them in hardware using PIC controllers as the mapping devices.
The objective this year was two-fold: to explore further the biological plausibility of the learning
scheme, and to implement the learning algorithm on a microcontroller that transforms addresses of
events in a neuromorphic vision system. A PIC microcontroller-based board was built (PIC17C44
and PIC17C43) and programmed; however, a number of persistent hardware bugs prevented the
system from being completed. A one-dimensional AER-output imager chip that outputs salient
features based on motion and contrast was prepared for interface. We did test a number of new
algorithms on some of the real-time data that was recorded from Tim Horiuchi’s 1-D AER retina.
The project will be continued at Johns Hopkins University.
6.6
Simulating AER-Cochlear Inputs With the 1-D AER Vision Chip
(Tim Horiuchi, Mete Ertürk, David Klein)
In this project, we were attempting to use the 1-D AER vision chip supplied by Tim to simulate
a silicon cochlea with AER outputs. Our primary effort was placed in attempting to simulate the
detection of pitch.
Figure 6.4: (a) Four types of direction-selective (DS) receptive fields—two directions of
motion times two signs of contrast. Four types of retinal ganglion cells provide input to
these DS cells: (i) ON-sustained (labeled ’++’ or green). (ii) OFF-sustained (labeled ’--’ or red). (iii) ON-transient (labeled ’-+’ or yellow). (iv) OFF-transient (labeled ’+-’ or blue). (b) Raster plots and histograms of spike trains from the retinomorphic chip that models these
four retinal ganglion cell types. We showed the chip vertical black and white bars moving
horizontally. We recorded spike trains from 4x52 neurons lying on a vertical line (Column
26 of the array). We used 20ms bins to create the histogram; the neurons spiked at a
mean rate of 5Hz. (c) Responses to motion in the preferred and null directions for one type
of DS cell over a range of speeds. The response was defined as the peak deviation in the
cell’s membrane voltage; 16 stimulus-synchronized records were averaged. The cell was
direction-selective for speeds spanning at least one decade.
Figure 6.5: (a) The remapping problem: Initial connections between the sender (depicted here as a visual processing chip receiving images through a lens) and the receiver are neither one-to-one nor topographic. The goal state is one in which topography is preserved from the sender to the receiver. (b) The hardware architecture we are constructing to simulate the learning
problem. An AER-based sender chip generates events on the AER bus which is received
by a microcontroller-based look-up-table (LUT) which monitors incoming events and re-
routes them according to the LUT. A training phase is to be generated, where activity on
the sender array is moving continuously in space, providing the microcontroller with data
to rearrange the LUT entries based on the statistics of the training data.
A spatio-temporal pattern was created that would produce AER outputs that would be expected
from a silicon cochlea receiving a pure tone as input. This pattern was shown to the 1-D retina and
the resulting spike trains were recorded. In particular, we were interested in recording the responses
from the motion detector units which are tuned to slow motions of a particular direction. This type
of response is consistent with what might be detected from the cochlear outputs. By detecting the
location where the pressure wave (and thus the hair cell outputs) slows down (where the phase of
the wave changes rapidly along the length of the cochlea), the frequency, and thus the pitch, is determined.
Although we created and recorded from simulated spectral-ripple stimuli, we did not have
enough time to incorporate this data into any simulations.
6.7 Serial Address-Event Representation
(Philippe Pouliquen)
This report summarizes the work done at the 1998 Telluride Workshop on Neuromorphic Engi-
neering on Serial Address-Event Representation.
The Address-Event Representation work-group (led by Timmer Horiuchi) was subdivided into
many smaller work-groups, one of which was the Serial Address-Event Representation (SAER)
work-group. The participants of this work-group were Andreas Andreou (The Johns Hopkins Uni-
versity, Baltimore, MD USA), Philippe Pouliquen (The Johns Hopkins University, Baltimore, MD
USA), and Peter Stepien (University of Sydney, Australia).
The task that we assigned ourselves was to design and prototype some of the basic building
blocks needed for SAER systems. The basic building blocks of interest were: a computer interface
card capable of sending and receiving SAER packets, a universal SAER routing block, and converter
blocks for converting between conventional AER and SAER circuits.
This report begins by describing the salient features of SAER, followed by a description of each
of the three building block projects we worked on.
6.7.1 SAER Basics
Like conventional Address-Event Representation (AER), SAER is an electrical specification for
inter-chip communication that makes extensive use of asynchronous circuits.
However, conventional AER is a parallel bundled data asynchronous protocol. This means that
it requires many wires to send data bits in parallel plus two extra wires for handshaking as shown in
Figure 6.6.
[Diagram: Sender and Receiver connected by a bundle of parallel Data lines plus Request and Acknowledge handshake lines.]
Figure 6.6: Example of a conventional AER inter-chip communication wiring.
The time relationship between these signals is shown in Figure 6.7. The sender begins by setting
the data lines to their appropriate state and then asserts the Request line to indicate to the receiver
that there is valid data on the data lines. When the receiver has latched the data, it asserts the
Acknowledge line to indicate to the sender that it has read the data. The sender is now free to stop
driving the data lines, or change their states. In any event, the sender then de-asserts the Request
line, whereupon the receiver de-asserts the Acknowledge line, and the cycle may begin again. (Note
that it is possible to shorten the cycle length by transmitting data on each transition of the Request
line).
[Timing diagram: Data, Request, and Acknowledge waveforms vs. time.]
Figure 6.7: Example of a conventional AER inter-chip communication signaling. Two data
words are sent consecutively using a four phase handshaking protocol.
In contrast, the basic SAER is a true delay-insensitive asynchronous protocol which uses (at least) four wires as shown in Figure 6.8. The data bits are transmitted serially to reduce the complexity of the circuits and the number of interconnect wires.
The time relationship between these signals is shown in Figure 6.9. The sender begins by
asserting either the True or the False line (depending on the value of the first bit to transmit). When
the receiver has recognized the transmitted bit, it asserts the Acknowledge line. The sender then de-
asserts the True or the False line (depending on which one had been previously asserted), whereupon
the receiver de-asserts the Acknowledge line. The cycle then repeats until all the data bits have been
transmitted. At this point a special cycle occurs using the Stop line. This line is used analogously to the start/stop bits used in other serial protocols to indicate to the receiver that the entire data word has been transmitted.
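The following MATLAB sketch decodes such a bit stream behaviorally; each element of seq records which line was asserted during one handshake cycle, and the example packet reproduces the bits 1011 used in Figure 6.9.

    % Behavioral sketch of SAER reception ('T' = True, 'F' = False,
    % 'S' = Stop); in hardware each assertion is answered by an Acknowledge.
    seq = 'TFTTS';
    bits = [];
    for c = seq
        switch c
            case 'T', bits(end+1) = 1;          % True asserted: a 1 bit
            case 'F', bits(end+1) = 0;          % False asserted: a 0 bit
            case 'S'                            % Stop: the word is complete
                fprintf('received word: %s\n', sprintf('%d', bits));
                bits = [];
        end
    end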
[Diagram: Sender and Receiver connected by True, False, Stop, and Acknowledge lines.]
Figure 6.8: Example of a serial AER inter-chip communication wiring.
[Timing diagram: True, False, Stop, and Acknowledge waveforms vs. time.]
Figure 6.9: Example of a serial AER inter-chip communication signaling. The data being
transmitted is 1011.
In practice, most of the Address-Events that we are interested in transmitting contain separate
X and Y coordinate words, and an additional cell type or magnitude word. We have therefore
augmented the four wires with an additional two sub-packet delimiter signals to simplify some of
the processing that we need to do.
The resulting wiring is shown in Figure 6.10. An example of the time relationship between
these signals is shown in Figure 6.11. Note that the start/stop bit functionality has been folded into
the last sub-packet delimiter signal.
[Diagram: Sender and Receiver connected by True, False, X, Y, Z, and Acknowledge lines.]
Figure 6.10: Example of the serial AER inter-chip communication wiring.
6.7.2 Pros and Cons of SAER
There are three principal advantages to using serial rather than conventional AER for inter-chip
communication.
[Timing diagram: True, False, X, Y, Z, and Acknowledge waveforms vs. time.]
Figure 6.11: Example of a serial AER inter-chip communication signaling. The data being
transmitted is X=011, Y=110, and Z=101.
First, of course, is that SAER uses fewer wires than conventional AER in most cases. This means
that building large systems with many VLSI chips all inter-communicating with SAER is feasible,
whereas the conventional AER quickly becomes a wiring nightmare. Furthermore, each VLSI chip
can have many more SAER ports, so that it can communicate directly with many other VLSI chips
before package pins become a limitation (note that this argument assumes that the parallel bundled data lines in conventional AER are not shared between ports). Finally, SAER can be used to
communicate over long distances as well, because only a small number of special line drivers would
be needed.
Secondly, SAER is truly delay insensitive. In other words, regardless of the propagation delay
differences between each line, the transmitted data will not be corrupted. In contrast, the conven-
tional AER requires that the data lines be stable and valid at the receiver before the Request line is
asserted. Therefore, if the propagation delay along one of the data lines is greater than the prop-
agation delay along the handshaking line, the receiver may latch corrupt data. Conventional AER
therefore requires much more careful VLSI chip layout and board-level assembly to ensure data integrity, and worst-case timing must always be assumed in calculating signal speeds.
Thirdly, the SAER packet format is very flexible. Any number of True or False bits and sub-
packet delimiters can be sent before the final Stop delimiter for the packet to be valid. For instance,
we have designed special SAER routing blocks to merge or split SAER data streams which work
independently of the SAER packet size, and therefore do not need to be redesigned each time the
packet format changes! In contrast, one cannot easily interconnect conventional AER VLSI chips that have differing numbers of data lines.
Each advantage of SAER also has a corresponding disadvantage (this is the nature of design
trade-offs).
By reducing the number of wires, we have made the sending and receiving circuits that much
more complicated. To make SAER work, we need to introduce asynchronous shift registers, FIFOs,
multiplexors, etc. However, we feel that the additional circuitry is a small price to pay for the
increased robustness and flexibility of SAER.
Also, by switching to a serial protocol instead of a parallel protocol, we have seemingly slowed
down the transmission rate of address-events. However, we have eliminated the time overhead in
conventional AER which forced us to assume worst-case timing: for instance, where conventional AER may require us to assert Request at least 100ns after driving the data lines to ensure data integrity, SAER has no such built-in overheads. Furthermore, since we have fewer lines to drive,
we can dedicate more VLSI silicon area to the pad drivers, and therefore drive the VLSI chip pins
at a higher rate. We therefore expect to end up with a SAER packet rate at least comparable to
conventional AER.
6.7.3 SAER cabling standard
We have chosen to use conventional 100baseT ethernet wire and connectors for our SAER interface
cable, as it is easily obtainable, and the wire itself has already been standardized.
This wire uses four twisted pairs (or eight wires), of which we use one wire for each of True,
False, X, Y, Z, Acknowledge, and signal ground. The eighth wire (called the Request line) is
driven by each sender to the logical OR of True, False, X, Y, and Z. This allows us to terminate
a sender’s port by tying the Request line directly to the Acknowledge line. A receiver’s port is
terminated by tying all the lines except Acknowledge to ground. This was done to allow us to
remove any component of a SAER system by appropriately terminating the ports that the component
was connected to.
A diagram of the SAER plug pin-out (as found on the end of each cable) is shown in Figure 6.12. Note that the non-terminating sub-packet delimiters are on the outside edge, so that cheaper 6-conductor wires can be used when the sub-packet delimiters are not needed.
X Z T A G F R Y
Figure 6.12: SAER plug pin-out. T: True, F: False, R: Request, A: Acknowledge, G: Signal
Ground.
6.7.4 Project 1: The SAER computer interface board
Before building any SAER VLSI chips or boards, we needed a method of synthesizing and capturing
SAER packets. Although a digital signal analyzer can be used to capture single SAER packets
between a sender and a receiver, it cannot totally replace the receiver (because it doesn’t generate an
acknowledge signal), and it cannot operate at high throughputs (usually because of limited memory
capacity). We have therefore designed and built a SAER transceiver with a built-in IBM PC/ISA
interface based on a single ACTEL FPGA.
Although the board had been built prior to the Telluride Workshop, the initial ACTEL FPGA
prototype wasn’t functional. The control circuitry was redesigned with the help of another Work-
shop attendee, Kwabena Boahen (University of Pennsylvania, PA USA). A new FPGA was burned
at the workshop and tested by connecting sender and receiver ports to each other. The new FPGA
was found to be functional except in the case where a long (greater than 10 feet) coiled wire was
used to connect the two ports (if the wire was stretched out, the transceiver worked fine).
The packet format used by the transceiver is fixed and is as follows: a 4 bit magnitude portion,
followed by two 6 bit coordinates. When the sender port was terminated, we found that the packet
length was on the order of 6 microseconds. We expect to improve on this in the future by using
faster FPGAs, or multiple FPGAs.
6.7.5 Project 2: The SAER universal routing block
The Address-Event protocol is essentially a point-to-point protocol. That is, each sender connects to one and only one receiver. However, in an AER system, we may want to simultaneously send the same packet to multiple receivers or, conversely, merge multiple packet streams into one.
For instance, we may have a single silicon retina, but multiple feature detector chips (such as
motion detector chips). In order to connect the silicon retina to the motion detector chips, we need
a routing block called a broadcast block. Now suppose also that one of the motion detector chips
detects bright-dark edges moving to the left, while another motion detector chip detects dark-bright
edges moving to the left. If we want to detect all edges moving to the left, we need to combine the
output of the two motion detector chips using a routing block called a join block.
Alternatively, suppose we want to use a new silicon retina which has four different types of cells
in each pixel, with older receivers. The first two bits of each SAER packet indicate the cell type,
and we want the activity of each cell type to go to a different receiver. To do this, we use a block
called a split block, which examines the first few bits of each packet, strips them off, and sends the
remainder of the packet to only one of its output ports depending on the pattern of the first few bits.
Similarly, a merge block joins two or more packet streams and prepends bits to each packet that
indicates where the packet came from.
One of the targets of current SAER development is to design and implement each of these four
blocks. We therefore designed a circuit suitable for an ACTEL FPGA which could implement any
of the four blocks (depending on the state of two configuration pins). Furthermore, each ACTEL
FPGA can hold three of these circuits, allowing a wide variety of SAER topologies.
We did not program any FPGAs, because our simulations showed that one of the four configu-
rations (merge) was not operating properly. This circuit will be further examined at the University
of Sydney.
6.7.6 Project 3: The SAER-to-AER converter blocks
Once the computer interface card was completed, the FPGA circuit was split in two to produce
separate parallel-to-serial and serial-to-parallel converter chips. We assembled two boards (one for
each FPGA), containing one FPGA socket, a 50pin connector of the type used on conventional AER
boards, and one SAER socket.
There was some confusion at this point about the pin-out of the 50pin connector. As shown in
Figure 6.13, the pin-out depends on what you connect to. The silicon retina and receiver boards built
by Kwabena Boahen have a single active high Request line, and a single active high Acknowledge
line. However, in the past we have use a National Instruments LAB-NB board (which emulates an
Intel 8255 I/O chip) as either a sender or a receiver. Because the LAB-NB board uses two distinct
8 bit ports for the data portion of the AER protocol, it has two Request lines and two Acknowledge
lines! Furthermore, depending on whether it is sending or receiving, it sometimes uses completely different lines for handshaking. And finally, most of the time, the LAB-NB signals are active low. There are therefore three different possible pin-outs for the 50pin connector, and each FPGA has two configuration bits so that the user can specify which pin-out should be used.
We burned the FPGAs, interconnected the two boards with a short piece of ethernet wire, and
attempted to use the setup between a working silicon retina (sender) board and a working receiver
board built by Kwabena Boahen. Unfortunately, this setup did not work, and there was insufficient
[Table: the three 50-pin connector pin-outs side by side (silicon retina or receiver board, LAB-NB as sender, LAB-NB as receiver). In all three configurations, pin 13 is ground, pins 14-21 carry Y0-Y7 (PA0-PA7 on the LAB-NB), and pins 22-29 carry X0-X7 (PB0-PB7); pins 30-38 carry the Request, Acknowledge, and additional ground lines in configuration-dependent positions; pin 49 carries +5V and pin 50 is ground.]
Figure 6.13: Conventional AER 50pin connector pin-out.
time to narrow down the problem. These boards will be further examined at The Johns Hopkins
University.
6.8 Serial AER Merger/Splitter
(Peter Stepien)
The Address Event Representation (AER) bus is an emerging standard used for communicating
between neuromorphic chips. The need has arisen due to the limited number of pins available on
integrated circuit (IC) packaging. An AER bus is used for point-to-point communication of the
address of a neuron which has spiked. This is carried out in real time so that only the address needs
to be transmitted. The address is conveyed as a parallel word of sufficient length. A serial version of
the bus has also been proposed which further reduces the number of pins required with a reduction
in bandwidth.
The serial version also has the provision to allow for breaking the address sent into three sub-
groups: X address, Y address and Z address. The X and Y addresses refer to two-dimensional data
such as a silicon retina. The Z address refers to the type of data that is being sent. Typically the
addresses are sent in reverse order, in the sequence Z, Y and X.
Since the AER bus is point to point, there is no implicit way to connect a number of devices together. Some ‘glue’ logic is required. The project undertaken was to design in digital hardware
four building blocks for connecting a number of neuromorphic chips together. The four blocks are:
1. Merge two streams into one.
2. Merge two streams into one and add an extra address bit to indicate which path it came from.
3. Split a single stream into two.
4. Split a single stream into two based on the value of the first address bit in the message after
removing the first address bit.
These building blocks could be used to merge or split the outputs of a number of neuromorphic chips.
An example is where two silicon retinas may have their outputs merged into one stream to go into
another chip for further processing. The address bit used for steering the data is the Z part of the
address.
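The packet-level behavior of these blocks can be sketched in a few lines of MATLAB; streams are represented as cell arrays of bit vectors, handshaking and timing are abstracted away, and the example values are illustrative.

    % Behavioral sketch of the routing blocks (block 3, the plain split,
    % would simply copy a stream to two outputs and is not shown).
    in1 = {[1 0 1]}; in2 = {[0 1 1]};           % two illustrative streams
    merged = [in1, in2];                        % block 1: merge two streams
    tagged = {};                                % block 2: merge, tagging the
    for p = in1, tagged{end+1} = [0, p{1}]; end %   source with a prepended bit
    for p = in2, tagged{end+1} = [1, p{1}]; end
    out1 = {}; out2 = {};                       % block 4: split on the first
    for p = tagged                              %   bit and strip it off
        if p{1}(1) == 0
            out1{end+1} = p{1}(2:end);          % out1 recovers in1
        else
            out2{end+1} = p{1}(2:end);          % out2 recovers in2
        end
    end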
The four building blocks were designed and simulated using diglog. They were then combined
into one design where two pins are used to select which of the four functional blocks is required.
The final implementation was an ACTEL A1020B FPGA. This chip has sufficient pins and logic capacity to include three copies of the design, all of which can be programmed individually to per-
form one of the four blocks. Having three in one device could be used to merge four neuromorphic
chips into one or to split one neuromorphic chip into four.
6.9 FPGA Implementation of a Spike-based Displacement Integrator
(James J. Clark)
In the address event group I did a project, on my own, involving the development of a spike-based digital implementation of the so-called “displacement integrator” (sometimes called the “burst integrator”) which forms a part of the human oculomotor system (Wurtz, 1996). This was designed and implemented on a single Altera Flex10K20 FPGA chip using the Altera student development system.
The motivation behind this project was three-fold:
1. to learn the use of the Altera FPGA development system (MAX-PLUS II);
2. to develop a digital implementation of a spiking neuron;
3. to implement and test a neural network model of the oculomotor “displacement integrator”.
6.9.1 The Displacement Integrator
The current view of the human oculomotor system is summarized in figure 6.14.
This project is concerned with the block labelled “Displacement Integrator”. The purpose of this block is to convert a commanded eye velocity into an estimate of the current eye position. This so-called “efference copy” is compared with the initial target position, resulting in a motor error which
[Block diagram: a space-coded (s) target position is compared with the efference copy of eye position; the resulting motor error passes through a gain stage to produce a time-coded (t) eye velocity command, which drives both the Displacement Integrator (producing the efference copy) and, via the Neural Integrator, the eye muscles. (t) = time coded, (s) = space coded.]
Figure 6.14: Oculomotor system
is used to drive the eye muscles (after another integration, this time through the “neural integrator”, which provides a tonic control signal to the muscles).
It had long been thought that the displacement integrator function was performed in the brainstem, along with the neural integrator, and that this integration was performed on time (spike-rate) coded velocity signals. Current thinking, however, places the displacement integrator in the “buildup” layer of the superior colliculus (Wurtz, 1996; Optican, 1995). Furthermore, while the input to the displacement integrator does appear to be a time- or rate-coded signal, the output appears to be represented in a distributed place- or population-code.
Optican (1995) hinted at a neural model for implementing such a time-code to space-code integrator, but has not, as yet, published any details. Based on the sketchy ideas presented in (Optican, 1995), we propose such a neural model, which is depicted in figure 6.15. This model consists of a network of asymmetrically laterally connected integrate-and-fire neurons. We show just a one-dimensional network, but the idea is easily extended to two dimensions. Each neuron has three excitatory
inputs and one inhibitory input. One of the excitatory inputs provides self-excitation sufficient to
”hold” the current activity level. Another excitatory input comes from a ”position” input. This is to
allow visual input to initialize the state of the integrator (to reflect the location of the saccade target
before the saccade, perhaps). The other excitatory input comes from the output of the neuron’s
immediate (rightward) neighbor. This input is ”gated” by input from a so-called velocity input.
When there is activity on the velocity input, the activity of the neighboring neuron is passed on to
the excitatory synapse. Likewise, the inhibitory input to the neuron, which comes from the neuron
output, is also gated by the velocity input. When there is activity on the velocity input, then, the
neuron is self inhibited which tends to reduce its activity, but is facilitated by whatever activity is
generated by its rightward neighbor. In this way, neuron activity in the network is passed from right
to left upon receipt of spikes in the velocity inputs.
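To make the dynamics concrete, here is a minimal rate-based sketch of this network in Python (our own reading of the description above, not code from the project; the transfer weight w and the demo values are assumptions, and between velocity spikes the self-excitation is taken to hold activity constant):

import numpy as np

def velocity_step(act, w):
    """Apply one velocity spike: each neuron sheds a fraction w of its own
    activity (gated self-inhibition) and gains the same fraction of its
    rightward neighbor's activity (gated lateral excitation)."""
    right = np.roll(act, -1)   # act[i+1], the rightward neighbor of neuron i
    right[-1] = 0.0            # the rightmost neuron has no rightward neighbor
    return (1.0 - w) * act + w * right

act = np.zeros(19)             # 19 neurons, as in the FPGA implementation
act[18] = 1.0                  # a "position" input initializes the integrator
for _ in range(3):             # three velocity spikes arrive
    act = velocity_step(act, w=1.0)
print(np.argmax(act))          # prints 15: the bump has stepped three neurons
                               # leftward; w < 1 makes it spread as it moves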
[Figure 6.15: The proposed displacement-integrator network. A one-dimensional chain of neurons; each neuron receives a position input (+), a gated excitatory input (+) from its rightward neighbor, and a gated inhibitory input (-) from its own output, with a common velocity input gating the lateral synapses; position outputs are taken from each neuron.]
6.9.2
Digital Implementation
We developed a digital circuit implementing the displacement integrator model described above. Each neuron was implemented as a 6-bit up/down counter, whose count represents the activity level of the neuron. The neuron emits a "spike" whenever the count exceeds the value of a random number; the random numbers were generated using a linear feedback shift register circuit. Excitatory and inhibitory synapses were implemented by incrementing the counter for each spike received on an excitatory synapse and decrementing it for each spike received on an inhibitory synapse. When a velocity spike was received, the counter was decremented by a fraction of the current count (implementing the gated self-inhibition) and incremented by a fraction of the current count of the rightward neighbor (implementing the gated lateral excitation).
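A sketch of one such counter neuron follows (again our reconstruction from the description, not the actual schematic: the report fixes the 6-bit counters and the use of an LFSR, but the LFSR taps and the transfer fraction, here 1/4 via a 2-bit shift, are our assumptions):

class LFSR6:
    """6-bit linear feedback shift register (taps for x^6 + x^5 + 1,
    a maximal-length polynomial; the taps actually used are not reported)."""
    def __init__(self, seed=0b101101):
        self.state = seed & 0x3F
    def next(self):
        bit = ((self.state >> 5) ^ (self.state >> 4)) & 1
        self.state = ((self.state << 1) | bit) & 0x3F
        return self.state

class CounterNeuron:
    """One neuron: a 6-bit up/down counter whose count is the activity."""
    def __init__(self):
        self.count = 0
        self.lfsr = LFSR6()

    def _clamp(self):
        self.count = max(0, min(63, self.count))

    def excitatory_spike(self):
        self.count += 1            # increment on each excitatory input spike
        self._clamp()

    def inhibitory_spike(self):
        self.count -= 1            # decrement on each inhibitory input spike
        self._clamp()

    def velocity_spike(self, neighbor_count):
        # gated self-inhibition: lose a fraction of the current count
        self.count -= self.count >> 2
        # gated lateral excitation: gain a fraction of the neighbor's count
        self.count += neighbor_count >> 2
        self._clamp()

    def output_spike(self):
        # emit a spike whenever the count exceeds a pseudo-random number
        return self.count > self.lfsr.next()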
The overall circuit is shown in figure 6.16, while the schematic of the individual neuron circuit
is shown in figure 6.17. A 19-neuron system was implemented in an Altera FLEX10K20 FPGA
chip, using the Altera MAX+PLUS II development software tools.
6.9.3
Testing
The displacement integrator network was simulated as well as downloaded into the FPGA on the
Altera UP1 student development board.
Two different networks were simulated, differing only in the amount of activity transferred during each input velocity spike. The simulation results are shown in figures 6.18 and 6.19; figure 6.18 shows the case of a higher weight on the lateral excitation. Note that as the velocity spikes are received, the activity in the network passes down the line, as desired. Note also that the activity spreads more quickly in the case of the higher lateral excitation weighting.
[Figure 6.16: Circuit diagram of the 19-neuron system: an LPM shift-register/counter chain generates the pseudo-random thresholds (thresh[15..0]), which are distributed to the neurons; each neuron drives a 4-bit activity output (act1[3..0] through act19[3..0]) and a spike output (out1 through out19), with global inputs clr, input_spike, vel_spike, and master_clk.]
[Figure 6.17: Individual neuron circuit: an LPM up/down counter holds the activity act[5..0]; an LPM comparator tests the activity against the threshold input thresh[5..0] to generate the output spike; and gating logic combines the clr, decay_clk, input_spike, left_act[3..0], vel_spike, out_spike_en, and transfer-clock inputs.]
[Figure 6.18: Simulation results. MAX+PLUS II waveform trace (0 to 500 ms) of the inputs bogus_clk, clr, vel_spike, and input_spike and the outputs out1 through out19, for the network with the higher lateral excitation weight.]
A simple demonstration was downloaded to the development board. In this demo, one of the
push-buttons on the board was used to gate a clock into the input of the first neuron in the chain,
while another push button was used to gate a clock into the velocity inputs of each neuron. The
outputs of 16 of the 19 neurons were connected to the segments of the two 7-segment LEDs located
on the board. A typical run of the demo proceeded as follows: first the clock to neuron 1’s input was
gated through; the first LED segment would begin to flash as neuron 1 began to increase its activity.
Next the clock to the velocity inputs would be gated through, whereupon successive LED segments
would begin to flash, with a procession through the chain of neurons. When the velocity gate was
released, the propagation of activity down the chain would cease, and the activity of each neuron
would slowly decay away.
6.9.4
Conclusions
We demonstrated that our neural model of the displacement integrator functions as desired, and
should serve as the basis for a more realistic biological model.
The particular FPGA that we used could probably be programmed to hold a network of twice the size that was implemented. There are larger chips in this particular Altera family; the largest holds approximately 10 times as many gates as the one we used, so we could probably fit 400 or so neurons on a single FPGA using our current design. There are some simplifications that we could make to increase the number of neurons that we can pack in, such as reducing the number of bits used in the neuron counters. It may also be possible, because of the large mismatch in speed between the FPGA propagation delays (about 50 ns) and the time-scale of neural systems (about 1 ms), to time-multiplex the logic. This could give us an increase of up to 10,000-fold in the number of neurons that we can implement in an FPGA. Thus, I feel that FPGAs do have promise for building large-scale silicon cortex building blocks.
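The speed-mismatch argument is simple arithmetic (the 50 ns and 1 ms figures are the report's own; the interpretation as a raw multiplexing budget is ours):

fpga_delay = 50e-9      # FPGA propagation delay, seconds
neural_dt  = 1e-3       # time scale of neural dynamics, seconds
print(neural_dt / fpga_delay)   # 20000.0 logic passes available per neural
                                # time step, before per-neuron update overhead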
Acknowledgements:
I would like to thank Altera for donating a UP1 development board with FLEX10K20 and MAX7128 gate array chips, and the associated development software.
[Figure 6.19: Simulation results. MAX+PLUS II waveform trace (0 to 500 ms), with the same signals as figure 6.18, for the network with the lower lateral excitation weight.]
References:
Optican, L.M. (1995), "A field theory of saccade generation: Temporal-to-spatial transform in the superior colliculus", Vision Research, Vol. 35, No. 23-24, pp. 3
Wurtz, R.H. (1996), "Vision for the control of movement: The Friedenwald lecture", Investigative Ophthalmology and Visual Science, Vol. 37, No. 11, pp. 2
Chapter 7
Discussion Groups
7.1
The "What is computation?" discussion group
(Pamela Abshire)
Traditional methods of assessing computation are applicable primarily within a digital computing framework. These methods typically involve determining the complexity of calculation for particular algorithms. This complexity can be measured in terms of the utilization of resources such as time, size, length, memory requirements, etc. It is unclear how these methods pertain to many interesting physical computing systems, including quantum, molecular, analog, and neural systems. As neuromorphic engineers, we are particularly concerned with the latter two: analog and neural systems.
The discussion group met three times. The first meeting resulted in many disagreements about
what is meant by computation and about what formal methods are relevant to the study of physical
computing systems. While some people believe that Turing machines might provide an acceptable
theoretical framework, others remain skeptical. A few groups are pursuing information theoretic
methods for quantifying the performance of analog and neural systems. In the next two meetings participants discussed such information theoretic approaches to analyzing performance.
At the second meeting Pamela Abshire presented a physical model for the communication capacity of an inverter, introducing concepts from information theory as appropriate. Starting with a symbolic description, she derived the entropy and mutual information of NAND and NOT. From this she computed the capacity, the maximum information rate which can be transmitted. Then she considered a simple two-transistor CMOS implementation of an inverter, and introduced realistic physical models for the noise and signal transfer from input to output. Using these models, she derived a measure of channel capacity for the simple inverter implementation. This analysis can be applied to understand the effect of physical parameters on the overall processing, and to compare different implementations performing similar functions. The performance metric for comparison would be channel capacity, a fundamental physical property of any communication channel, or, for a task-related measure, the information rate for a particular input ensemble.
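As a toy illustration of the quantity involved (our simplification for exposition, not the physical CMOS noise model that was actually presented): if a noisy inverter is idealized as a binary symmetric channel that flips each bit with probability p, its capacity is C = 1 - H(p) bits per use:

import math

def binary_entropy(p):
    """H(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def bsc_capacity(p):
    """Capacity of a binary symmetric channel with flip probability p."""
    return 1.0 - binary_entropy(p)

print(bsc_capacity(0.01))   # ~0.919 bits per use at a 1% flip probability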
At the third meeting Christof Koch described his work on signal detection and estimation in
noisy dendrites. He and Amit Manwani are working to extend cable theory to include neuronal noise
sources. He discussed two paradigms for quantifying neural computation, signal reconstruction
and signal detection theory, and applied them to a linear cable model with noise sources. Signal
reconstruction investigates how much of the input signal is accounted for by the system’s output.
It provides two metrics for quantifying signal transmission: coding fraction, which is the fraction
of the variance of the original signal which is reconstructed; and mutual information between input
and output. Signal detection poses a task, such as a two-alternative forced choice to decide whether
there was a stimulus or not, and constructs the optimal detector for that task. The probabilities for
misses and false alarms provide performance metrics.
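For concreteness, the coding fraction defined above can be computed as follows (a toy sketch with synthetic signals, not the dendritic cable model itself):

import numpy as np

def coding_fraction(signal, reconstruction):
    """Fraction of the signal variance captured by the reconstruction."""
    error = signal - reconstruction
    return 1.0 - np.var(error) / np.var(signal)

rng = np.random.default_rng(0)
s = rng.standard_normal(10_000)               # toy "input signal"
r = s + 0.5 * rng.standard_normal(10_000)     # noisy "reconstruction"
print(coding_fraction(s, r))                  # about 0.75 at this noise level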
The meetings of this group enjoyed lively, sometimes heated, discussion. Most of the discussion topics fall under three categories: benchmarks, technical feasibility, and performance-related measures. The first of these, benchmarks, are commonly used to assess performance, for example in computer science or in neural networks. It is possible that a set of benchmark problems would be useful in assessing complexity in analog and neural systems. One measure of complexity might be the cost of tweaking a particular structure to make it compute some benchmark function. Sometimes different systems perform the same (or similar) benchmark function, and those systems could be compared. However, it is also possible that benchmarks will not lead to new insights in biology, since biological systems seek specific solutions to specific problems, not general solutions to benchmark problems.
The second topic relates to the technical feasibility of information theoretic methods; they can require explicit linearized models or reams of experimental data. The investigator needs to understand a great deal about a system in order to apply any of these methods. She needs to understand, in detail, what sorts of noise models are suitable and realistic for analog and neural systems. Furthermore, she must decide what level of system description is suitable (i.e. input-output description, detailed biophysics, quantum mechanics). And once a description of performance is complete, the next step would be comparison among various systems; however, we just don't know that much about costs in biology, so assigning relative costs to such design tradeoffs as extra dendrites versus extra power verges dangerously close to omphaloskepsis. It is not yet clear whether these information theoretic methods can say anything about higher cortical areas, beyond the periphery.
General-purpose computing isn't the goal of neurobiology. Using the term "computation" in reference to an analog or neural system usually implies a particular task under consideration; this suggests the final topic, performance-related measures. A commonly recurring refrain is that only task-related measures are ultimately of interest. (During one of the meetings Christof Koch remarked, "Darwin is the only general theory we have.") Perhaps it would be feasible to construct benchmarks which are related to tasks of interest, such as face recognition, speech recognition, and motion detection. Given such tasks, it would be necessary to define fidelity criteria for comparison. Issues mentioned above, such as the appropriate level at which to define a task, pertain here as well. Very few systems currently allow a description which connects to behavior or motor systems. In the meantime, progress must continue at intermediate-level task descriptions.
This discussion group attracted many participants and produced lively discussions. Perhaps, for
those working in neuromorphic engineering from an empirical or heuristic or design perspective,
the possibility of theoretical foundations is enticing. A computational theory of analog and neural
systems could also provide a more substantial link between those biological wetware technologies
which we attempt to understand or emulate and those artificial technologies which we engineer and
design.
7.2
Neuromorphic Systems for Prosthetics
(L. Smith, P. Abshire, A. Arslan, A. van Schaik, M. Lades, D. Nicholson)
The aim of this group was to discover where neuromorphic systems were being applied and
might be applied in the prosthetics field. The group was not restricted to discussing exactly what
could be done with existing technology, but also discussed what might be possible given (likely)
developments in technology. We noted that fields are generally driven not just by improvements in
technology, but also by the possibility of new application areas.
7.2.1
Motivation
It is clear from this meeting that the primary motivations for developing neuromorphic systems are:
- computational neuroscience, i.e. the development of systems for improving our understanding of how the brain works;
- the development of new systems (in the broadest sense of the word), systems which cannot easily be implemented using more traditional technologies.
Prosthetics is one aspect of new systems that this group is interested in. We focus on:
- what can be done;
- what can nearly be done;
- what we would need to be able to do to achieve goals in prosthetics.
7.2.2
Sensing
We identify where neuromorphic systems have been or might be applied in sensing:
- Hearing: there is already a large body of work in this area, but very little application of aVLSI technologies. Most hearing aids currently use digital signal processing techniques: either different frequency bands of the sound signal are selectively amplified (or some more complex sound-to-sound transform is applied), or, in the case of cochlear implants, stimulation is applied inside the cochlea to the nerves of the spiral ganglion. However, because the cochlea is spiral in form, it is not (currently) possible to apply signals to more than about one quarter of the cochlea. Different products use differing numbers of electrodes and different preprocessing techniques. Users of all forms of auditory prostheses need to learn to hear using these new signals.
- Vision: there is a considerable body of work being done on the use of artificial retinas. There are primarily three groups: one in Germany, under the direction of Professor R. Eckmiller [Eckmiller] at the University of Bonn; one under Gislin Dagniele and R.M. Massof at the Johns Hopkins University [Dagniele]; and one under John Wyatt and Joseph Rizzo at MIT/Harvard [Wyatt]. All three aim to introduce electrically mediated signals into the retina: Wyatt and Rizzo propose using an epithelial implant which directly detects light and stimulates the retinal ganglion cells, while Eckmiller's group appears to be more interested in using an external camera, linked by a wireless connection to a retinal implant which stimulates the retinal ganglion cells. Although not reported in the literature, we understand that users of these implants need to learn to see using these prostheses.
In addition, there is work on the direct stimulation of the visual cortex, building on the seminal work of Brindley in 1960. Work is ongoing at Utah [Normann], and is believed to be ongoing also at NIH and the University of Waterloo, Canada. The Utah group uses a 10-by-10 array of electrodes inserted into the visual cortex. This is a highly invasive procedure, requiring the removal of a small piece of skull.
- Touch: there is ongoing work at Case Western Reserve University, under Dr. Clayton van Doren, aimed at restoring some sense of touch for people with spinal cord injury. Little information is available on this [van Doren]. Much of the work in this area is related to attempts to improve the quality of life of people with nerve (and particularly spinal cord) problems (see section 7.2.3).
We also count in this section cross-modality transforms, i.e. where information from one sense is presented to another sense, such as work on aids for the blind, and (possibly) pain relief. For pain relief, we considered deep brain stimulation, drug pumps, and electrical stimulation of the spinal cord (TENS). The group also discussed non-invasive techniques of cortical stimulation (such as an inverse form of SQUID synaptic current detection), but decided that this was not currently possible.
In addition, we included under prosthetic systems techniques for the noninvasive wireless monitoring of patient vital signs.
7.2.3
Prosthetics for sensorineural and sensorimotor applications
The second prosthetic application area we identify is in neuron-silicon and silicon-muscle interconnection. Such applications would be useful in repairing damaged peripheral nervous systems. This is a common and serious problem, often resulting from accidents and similar injuries. Deficits may be due to damage to the spinal cord, to damage to the peripheral nerves, and/or to damage to (or loss of) limbs.
Prostheses may thus need to:
- pick up brain-originated nerve signals;
- transmit nerve signals to either (undamaged) musculature or to (electromechanical) prostheses;
- pick up proprioceptive information from (undamaged but un-innervated) limbs or from prostheses.
There are a number of serious problems with this type of work. For example, damaged nerve
cells tend to die off, and it is difficult to make long-lasting interfaces which send or receive nerve
impulses.
A great deal of work has been done on attempting to make neural interfaces, particularly for people with spinal injury. A good review of current research is to be found in [APA]. There is considerable research on the use of methylprednisolone, and on the introduction of a rich growth medium around the nerve stump (using Schwann cells and IN1) to limit nerve damage and to encourage the nerve to regrow. Additionally, a key signalling-agent gene, Shh, may help spinal nerve neurons to re-grow.
However, these techniques do not directly tackle the problem of silicon/neuron interfacing. There are (at least) two groups working on this. One, led by Fromherz at the Max Planck Institute for Biochemistry in Germany, is based around a neuron/silicon junction which appears to be a form of field-effect transistor in which the nerve (suitably surrounded by a layer of lipids to protect it) forms the gate. This allows electrical communication from the nerve; communication to the nerve can be accomplished using a capacitive electrical connection.
There is work at the University of Edinburgh (under Murray and others) on growing neurons on
specially prepared substrates. These substrates include electrical connections to/from the neurons.
All of this work is still at an early stage, but holds promise for direct silicon/neuron interconnection.
This could eventually allow direct neural control of prostheses, and conceivably also feedback from
these prostheses to the brain, mimicking proprioceptive feedback.
Incorporating touch sensing into limb prostheses is very important for their users. Sensor systems for these prostheses are in development [OandP], and these clearly need to be interfaced to neural systems. For upper-limb prostheses, and particularly for hands, feedback on grip strength is crucial for safe usage, and this requires some form of feedback to the user. One suggestion which has been made is that of cross-modality feedback (such as auditory feedback).
7.2.4
Classifying prosthesis techniques
We attempted to classify the different possible modes of prosthesis/neural-system communication into four classes:
- peripheral nerve stimulation;
- sensory nerve (or sensory system) stimulation (e.g. stimulating the spiral ganglion neurons, or the retinal ganglion cells);
- stimulation of the brainstem nuclei;
- direct stimulation of the cortical areas.
Prostheses for limbs tend to use the first of these; auditory and visual prostheses tend to use the second. There is work on stimulating the brainstem nuclei and cortical areas, but these are highly invasive procedures.
7.2.5
What can neuromorphic systems offer?
There already are some neuromorphic systems performing prosthetic tasks: in particular, there are pacemaker systems for providing stimulation to the heart muscles, and cochlear implants. (We do not distinguish between the different technologies underlying these devices; we note that cochlear implants are generally based on DSP technology.) We can consider what is generally required of neuromorphic systems: whatever the actual function, they need to be low power, and either implanted or wearable. This places constraints on size, and on the technology used for their implementation. Certainly, aVLSI systems are generally small and low power, but the same is true of advanced DSP systems.
Neuromorphic systems may be:
- direct prosthetic sensors, i.e. transducers (such as an artificial retina, or a touch sensor);
- systems which process prosthetic inputs (such as systems for processing the microphone input for auditory prostheses, or for processing input from touch sensors);
- control systems for prostheses (such as systems for controlling the movement of prosthetic limbs or artificial hands);
- systems which perform signalling (such as those for transferring proprioceptive feedback to neural systems).
The group's conclusion was that neuromorphic systems have a great deal to offer in all of these areas; however, we felt that the biggest advances need to be made in silicon/neural interconnection, as this was seen to underlie many of the most interesting applications.
7.2.6
References
Eckmiller: see http://www.nero.uni-bonn.de/ri/retina-en.html
Dagniele: see http://www.spectrum.ieee.org/publicaccess/9605teaser/9605vis2.html
Wyatt: http://www.spectrum.ieee.org/publicaccess/9605teaser/9605vis5.html
Normann: http://www.spectrum.ieee.org/publicaccess/9605teaser/9605vis6.html
van Doren: section 6 of http://me210abc.stanford.edu/CDR-haptics/Files/Position Papers/vandoren.txt
APA: http://www.apacure.com/pirfal97.html
OandP: http://www.oandp.com
Chapter 8
Personal Reports and Comments
The following section contains, with as little editing as possible, the personal comments received from the participants in the days following the workshop. We solicited comments about their personal experiences and suggestions about how to improve the workshop organization.
Pamela Abshire
I was at the Telluride Workshop on Neuromorphic Engineering for the entire three weeks, from the first
Sunday (Jun 28) until the final Friday (Jul 17). This was my first time at the workshop, and I found that
the lectures every morning were an excellent way to become acquainted with work outside my group and
outside my area. In the afternoons I participated in the workgroups on floating gate circuits and address-event
representation. I also joined discussion groups on learning in VLSI and prosthetics, and led the discussion
group on computation.
I especially appreciated talks by Christof Koch, Rodney Douglas, Avis Cohen, Tobi Delbruck, Wolfgang
Maass, and Shihab Shamma. Neuromorphic engineering is quite an interdisciplinary field, and consequently
results are reported in a very broad set of journals, from biology to engineering to computer science. It was
refreshing and stimulating for these talks to be presented with an engineering emphasis and in an atmosphere
which encouraged discussion. In a slightly different vein, Tobi’s talk provided a detailed discussion of his
and Shih-Chii’s retina pixel, a circuit which has become ubiquitous in analog VLSI imager design.
I found the workgroup on floating gate circuits very informative and well-organized. It is my belief
that the extraordinary capabilities of neurobiological systems rely heavily on adaptation mechanisms. I was
particularly keen to attend this workgroup because floating gate techniques seem like the most promising
technology for integrating a similar density of adaptation into silicon circuits. It is also my impression that
floating gates can be tricky to design and use, and I really appreciated the opportunity to learn firsthand from
Chris Diorio, Brad Minch, and Paul Hasler. In the past I have designed a number of adaptive VLSI chips,
and all of them have used volatile capacitive storage. These chips, as well as my future chips, will benefit
from the understanding and techniques which I gained by participating in this floating gate workshop. The
workshop would have benefitted from a laboratory component, so that the participants could gain experience
in testing and tweaking chips which use floating gates.
The address-event representation workshop was also very interesting for me. I appreciated Kwabena
Boahen’s careful introduction to asynchronous VLSI. Asynchronous VLSI has become very important for
AER, and someone from Alain Martin’s asynchronous design group at Caltech would have been welcome
at Telluride. I understand that this lab has written design tools for asynchronous VLSI, but I know nothing
more about them. It seems that the Telluride meeting provides an excellent opportunity for such “technology
transfer” among academics.
Most of my spare time at the workshop was spent in preparing for the discussion groups on computation
and continuing ongoing research in the same area. For one of the meetings I prepared and presented a
physical model for the communication capacity of an inverter, based on a simple CMOS implementation. I
also worked on a detailed biophysical model for the communication capacity of early blowfly vision.
Elizabeth J Brauer
My expectations of the Telluride Workshop were to meet other researchers and discuss issues in neuromorphic
engineering, particularly in the area of locomotion. The primary benefit of the workshop was the opportunity
to meet researchers in the specific areas of lamprey research, locomotion, and analog circuit design. The
long-term benefits of the contacts developed at the workshop are tremendous. The secondary benefit of the
workshop was the stimulation of ideas about building robots. In particular, the Locomotion Work Group was quite helpful in this regard. I will be developing a lamprey robot as a result of the Telluride Workshop.
To improve future workshops, I would suggest that the lecturers prepare simple handouts containing the
figures in the presentation and a list of references. This would be especially helpful for someone who is not
an expert in the subject area but interested in exploring the topic further. Better chairs in the lecture room
would be nice, since we spent so much time there. I would also suggest ending discussion groups by 9 pm.
The workshop was extremely well organized. The organizers are to be commended on a job well done.
The facilities at the school were wonderful, as was the Telluride Academy staff. I would highly recommend
this workshop to others.
Marc Cohen
This was my first visit to the Neuromorphic Engineering Workshop and I enjoyed it immensely.
Each morning from 8:30am to 12:00pm I attended lectures. I particularly enjoyed the series of talks
by Rodney Douglas. He began with the morphology of the neuron and dendrites, continued with cortical
structure and introduced some models for computing with spiking neurons.
Tobi Delbrück's talk on the circuit details of the adaptive photo-detector was excellent. Many times circuit details and analyses are covered in a "hand-waving" manner. Tobi did a great job of leading us through his second-order analysis of Shih-Chii's adaptive pixel.
Between 2pm and 4pm each day, I attended the floating gate workshop led by the tag-team of Chris, Brad, and Paul. This was a great series of lectures on the development and uses of analog floating gate transistors. I had experimented with floating gate circuits before this workshop, and I now feel that I understand the device physics and the design and synthesis techniques very well. It was unfortunate that we did not get hands-on experience with floating gate devices. I did, however, manage to get Paul Hasler to help me verify correct operation of a floating gate circuit on an adaptive retina chip I had brought along to the workshop.
I also participated in two discussion groups: "What is Computation?" and "Learning in Silicon". The former discussion group was led very well by Pamela Abshire. The discussion was lively and at times heated; it was unfortunate that so much time was spent arguing over semantics. Pamela presented her view of computation from an information processing point of view. Christof Koch also presented his work on information theory applied to computation in dendrites. The "learning" discussion group wanted to concentrate on how to learn with spiking neurons. Unfortunately, it was never demonstrated that learning with spiking neurons was the correct way to proceed. It was argued that the kinds of learning that were discussed, namely LTP, LTD, Hebbian, anti-Hebbian, and synchronization, were all also possible with continuous-time learning rules.
The project I worked on was part of the Address-Event Workshop led by Timmer Horiuchi and Kwabena Boahen. Our task was to build hardware that would take address-event input from a 1-D retina or cochlea and learn a topographic or tonotopic mapping. This project was started at Telluride97 but had not reached the hardware implementation stage. I built up hardware consisting of a receiver board (using one of Kwabena's 64x64 receivers), a PIC chip to control the video output of the receiver, and a daughter board which could interrupt the data path between the sender and the receiver. A PIC 17C44 microcontroller would execute the algorithm, remapping the incoming (scrambled) addresses to ordered output addresses in real time. I obtained real address-event data from Timmer and, together with Tim Edwards, experimented with a few new algorithms using the real data. Ultimately, the algorithm which was derived at Telluride97 was programmed onto the PIC by Tim Edwards. Unfortunately, I could not track down a persistent hardware error, and the whole system never operated. I plan to continue with this project back at Johns Hopkins University with Timmer and Tim.
Craig DeLancey
In terms of my own edification, this year’s conference was an unmitigated success. I learned much that was
new and exciting for me; especially useful were the lectures on neuroscience and biology, the first week of the
aVLSI tutorial, and my one-on-one interactions with all the other researchers. Much of what I learned will
be of substantial utility in my own future research in AI. Also, the exposure to the robot projects was both
instructive and de-mystifying; this has inspired me to pursue the use of more robotics in my own research.
I fear that, for the Workshop, I was not equally helpful: I did not complete a significant project during my stay. I did, however, learn some things which will help me this year in some practical projects, and so I pledge still to contribute to the cause. For example, in the last week of the Workshop I started coding the rough beginnings of a project to use saliency maps that integrate motor and visual processing in one of the organism simulations we are developing at IU's Adaptive System Lab. This was inspired by some of the work on saliency others were doing, and also by Terry Sejnowski's talk on recent research revealing that the traditional division between motor and visual cortex may be too simplistic. This is work that I will continue as soon as I return (and I shall be sure to let you know about any successes in this undertaking).
I have only two pieces of advice for future conferences. First, you could make it even clearer beforehand that we should bring as much of our own projects with us as possible. The people who achieved something significant during these three weeks were mostly those who brought their ongoing research with them. If I had known beforehand what I know now, I would have fit some portion of my own research into a neuromorphic project that I could have brought with me. (Perhaps, however, my failure in this regard is because it is my first time here; I've heard it said that people who come several times get the most out of it!) Second, it would be beneficial to suggest some readings beforehand for the tutorials, if at all possible. I would have learned more if I could have prepared a small amount.
My advice regarding invitees is to include, in some future Workshops, some scientists working on motivation. This is a topic about which I am biased, because it lies in the domain of my own research. However, along with learning, perception, and locomotion, motivation is one of the fundamental aspects of adaptive or intelligent behavior. I feel that the kind of research that happens at the workshop is well suited to exploring this. Paul Verschure's dream of the workshop culminating in a learning robot with integrated perception should be expanded to include motivations.
Finally, I want to thank all of the organizers. I am grateful for having had this opportunity, and I appreciate
all the work that went into preparation and management. I believe I have made some lasting friendships here,
I certainly had a great time, and what I learned will be of lasting benefit to me.
Tim Edwards
I spent a total of six days at the Telluride workshop, which was considerably less than I wanted; but with a conference in Los Angeles at the beginning of July, I didn't have much choice. Although I did not stay
for very long, I was able to immediately join the group working on developing, in hardware, an algorithm
for automatic unscrambling of randomly mixed address events. The active members of the group were Marc
Cohen and Tim Horiuchi. Timmer provided data from his 1-D retina chip which we used to confirm efficacy
of the algorithm in Matlab. After extensive investigation of the algorithm, I took a crash course in PIC
programming and converted the algorithm into PIC assembly code. The intended end result was to be a small
circuit board containing the PIC microcontroller, placed between an address-event transmitter (the 1-D retina)
and receiver (which outputs a video signal to a monitor). Unfortunately the wire-wrapped circuit board did
not function properly, apparently due to loose wires or components, and so the complete system was never
realized. Nevertheless, I derived a great benefit from the experience, which was learning how to write PIC
code and verify it using the PC-based PIC emulator.
As usual, the most enjoyable part of Telluride was getting together with people from different backgrounds with different perspectives on neuromorphic engineering. Compared to the last time I attended, which was three years ago (1995), this year's conference was better organized, and on the whole people had more realistic expectations of what it is possible to accomplish within a time frame of three weeks; as a result, they got more accomplished.
I have no specific recommendations to make for next year. Three years ago, I made a number of recom-
mendations, and it appears that they all happened, so I’m left without any more ideas.
Mete Erturk
We have developed an interface board for two cochlear chips (from Andreou and van Schaik). I was involved
in the design and troubleshooting of the boards. We implemented a non-linearity (a hard limiter) and a lateral
inhibition network on the 32 output channels of the cochlea. The boards were then connected to a PC which
ran cross-correlation and stereausis algorithms and sent motor commands to a Koala robot. The robot had
a dummy head and two microphones (embedded in the head) mounted on it and performed sound source
localization tasks very successfully.
I was also involved in the hardware design and development of an Address Event Remapping project. The project involved the use of a PIC (Microchip 17C44) to learn tonotopic-axis remapping through unsupervised learning.
I also participated in a project using Horiuchi's 1-D retina to emulate a cochlea. We presented the retina with plots of moving ripples and analyzed the spatial derivative and direction-of-motion outputs of the retina.
Charles Higgins
This year I gave three talks at Telluride and got input on my work from a wide range of disciplines. I ended
up with a bunch of new ideas and tentative solutions to some long-term problems with my multi-chip systems
research. Overall, it was a very valuable experience.
My primary suggestion for improvement of the workshop is better allocation of workbench space. Next year, everyone who needs bench space should sign up (via the web) and be assigned an area in some room. This would avoid the contention we had this year (e.g. "This space is MINE! Keep off!"). I would suggest using the center tables for additional workbench space (rather than as a general construction area), and more efficient use of other rooms (such as the computer room periphery).
My second suggestion is a series of hyper-short tutorials at the beginning of the workshop, to allow people a quicker decision as to which workgroup to join. For example, in the first two days, there should be *half-hour* talks on the subjects:
- What is analog VLSI?
- What is AER?
- How does a biomorphic robot work?
- What can you do with a Khepera/Koala?
This year, most new people didn't know what AER stood for until late in the second week.
David Klein
I attended the workshop last year, and I found it to be a more productive experience this time around. In particular, I was impressed by the attempts to keep the number of discussion groups to a minimum and to motivate people to start working on projects as soon as possible. Additionally, I found that the group of participants this year was excellent; they were more motivated and ready to work on projects than the group was last year.
This time around, I was a member of a number of project groups, including: AER, Audition, Visual
Saliency, and Behaving Robots. I co-managed the auditory project group, and thus was involved in most
projects in that group (see the auditory project group report for more details). In the AER project group, I
worked on the 1-D dynamic re-mapping project and the 1-D sender to spectrotemporal receptive field receiver
project. In the visual saliency group, I helped modify a visual saliency program to control a Khepera robot.
Jointly with the audition and behaving robots groups, I used the input from some silicon cochleas to guide a
Koala robot towards sound sources.
My main personal goal for the workshop was to begin to find ways to implement spectro-temporal response fields (of auditory cortical neurons) in hardware, using networks of neuron-like elements. I achieved this goal to a limited extent through my work in the AER workgroup. I plan to continue working on this project in collaboration with Mete Erturk, Andreas Andreou, and Tim Horiuchi.
My suggestions for future workshops are standard. Productivity could increase if there were more hardware actually ready to go with suggested projects before the workshop; then it wouldn't be the third week before anything was working and ready to be applied towards more interesting tasks. I was intrigued by Paul Verschure's ambition to merge the different senses and different feature extractors into a cohesive project. I hope that next year this may be approached more successfully.
Yuri Lopez De Meneses
1. The technical and didactic support that the organizers provided was perfect. We were able to work as well as in our own office.
2. I came expecting to gain some hands-on experience as well as some theoretical knowledge, and I was not disappointed. I feel that I have learned a lot, and the Workshop has been an eye-opener for other fields.
3. The combination of lectures and hands-on projects has been very enriching. I have been involved in two projects, and at least one of them, on on-line robot learning, will go on as a post-Telluride cooperation.
4. It would be interesting for the volleyball discussion group to hold an initial lecture on the rules and tactics of the game.
5. It would be interesting to have more biologists, especially if they get involved in projects with engineers.
Wolfgang Maass
For me the most fruitful and stimulating activities during this workshop were a large number of individual
discussions and brainstorming sessions regarding various problems of computing and learning that arise in
the three main application domains considered in this workshop:
a) biological systems
b) aVLSI systems, especially those involving pulses
c) robotics.
I very much enjoyed the opportunity to have, at this workshop, direct access to experts from these three application domains, whom I would not be able to meet at conferences in my own discipline (computer science). Especially my discussions with Rodney Douglas, Paul Verschure, Kwabena Boahen, Christof Koch, Chris Diorio, Andre van Schaik, and Gert Cauwenberghs brought me new insights and ideas that will have a significant impact on my research during the coming years. Some of this research will be carried out in collaboration with these colleagues.
Apart from this, I learned quite a bit from the daily lectures in the morning, especially those that introduced me to problems of neural computation that were new to me, such as problems related to locomotion, audition, and VLSI.
In addition, I participated during the first half of the workshop in the tutorial on floating gates, and during the second half I carried out some practical learning experiments with mobile robots. Both of these activities will be very useful for the work at my university during the coming year, since I am currently involved (jointly with colleagues from other departments of my university) in developing a curriculum and research program in robotics. I will also explore the possibility of initiating (perhaps in collaboration with the company AMS in Graz) some research in Austria on floating gates.
I regret that I was not able to participate also in the workshops on aVLSI and AER, but the days at
Telluride were simply too short.
Regina Mudra
I wanted to help people become familiar with K-robots. For this reason I prepared, with Paul Verschure, a documentation and C-software package called Ikhep. This package was based on older work by Paul Verschure implementing a Braitenberg model on K-robots. We used this example to show people how to prepare their own K-robot setup and make their own experiments with Khepera.
I was also interested in discussions about image pre-processing, types of representation, and robotics.
Suggestions and comments:
Telluride is a good place to come into contact with others and to do good small projects like the landmark and path-finding task.
I was surprised that people who at first wanted to learn to work with the K-robots lost interest after they heard that they had to install their own setup. Besides this, there were some hardware problems with the framegrabbers and connectors, but these could mostly be solved.
To Giacomo, Timmer and also Dave: a big thank you for installing and updating the Telluride web-pages,
it was a big help to find out about all the lectures, seminars and projects.
Perhaps it would be good to let people also describe their interests on the sign-in pages, so that interested people can make contact and build their groups before Telluride and prepare their projects in a better way.
Thomas Netter
I was satisfied beyond expectations with this workshop. I fulfilled my hopes by being able to play with Alan Stocker's retina and by building a somewhat aeronautical object which worked at the first go. I was delighted to meet many bright and interesting people, and I was amazed by how much gear was available. This is the first workshop I have ever attended, and I am therefore very impressed.
Although I did not expect so many neurobiology-related lectures I was very happy with them. I think the
biology/electronics ratio was well balanced. I’m sorry that attending the aVLSI lectures prevented me from
attending AER. But I’ll follow AER next year!
I think there is no way to find such condo arrangements in Europe and the workshop should therefore
remain in Telluride.
Suggestions for next year and what I missed at the workshop:
* A warning regarding the flights: you might be bumped off, and your luggage too. Keep your toothbrush with you at all times, just as you keep your transparencies. Check the guys who load the plane with luggage. Be ready to beg and plead with the pilots to take your stuff.
* More presence of Terry Sejnowski.
* An amplification system for some speakers who were difficult to hear. Maybe we should put a sign at the back of the lecture room with SPEAK LOUD AND CLEAR on it?
* Some sort of Michelin guide to pick the best hamburger and sandwich places; let's face it: Baked In Telluride really is lousy!
David Nicholson
I am a biologist, and my present research fellowship is in the area of insect navigation at the University of Sussex, Falmer, UK. I am also a member of the Center for Computational Neuroscience and Robotics at the same location.
same location.
When I first signed up for the workshop I expected to learn a lot about VLSI and electronic engineering in
general and not much about biology. So, I was pleasantly surprised to find Avis Cohen and Thelma Williams
describing their work on the lamprey. I originally intended to participate in three workshops: Basic VLSI, AER, and behaving systems. However, once I had been to a few lectures/talks I became very interested in Mark Tilden's work, and so I joined the locomotion workshop and helped to build the robot lamprey and the flying robot. I also continued with the basic VLSI workshop. I found the fusion of biology and robotics within the locomotion workshop very stimulating and thought-provoking. As part of my work I aspire to the production of biologically plausible robotic models of animal behaviour. So, whilst continuing to pursue my behavioural experiments with insects, I intend to use Mark's 'Nervous Net' architecture as a starting point to build a walking robot of my own. I need to get a feel for the sort of engineering problems inherent in robot building, and I am attracted by the minimalist approach (as regards internal processing versus behaviour) which these systems afford.
I also attended three discussion groups: Pamela Abshire's 'What is computation?', Leslie Smith's 'Prosthetics' group, and Nicol Schraudolph's 'Flying Robots' group. All of them produced a lot of lively discussion
late into the night and the flying robot was produced as a result of Nikki & Mark’s ideas. I must also mention
how useful I found Rodney Douglas’s basic neurological tutorials.
How could the workshop be improved? More biology, of course! But then I would say that, wouldn't I, being a biologist. Why not split the morning lectures up more? Then we would be more receptive throughout. Perhaps a longer break in between them and a shorter, later lunch-break. I don't think this would intrude upon the afternoon workshop time, as people like to work into the night anyway.
I found the diversity of content of the workshop very impressive, from visual processing to Andy Wuensche's random boolean networks via Tim's model of olfaction. Anyone who wants to be aware of new possibilities and have hands-on experience of robotics should certainly attend this workshop.
Maximilian Riesenhuber
I came to Telluride expecting a couple of things, namely i) to learn about the basics of aVLSI, ii) to learn
how useful neuromorphic aVLSI chips are for neuroscience and iii) to see how neuromorphic chips can be
employed to tackle engineering problems, e.g., in computer vision or robotics. Regarding the first point,
I thoroughly enjoyed Joerg's aVLSI tutorial, which I was able to follow even without being an electrical engineer. The only critique I'd offer is that I didn't profit that much from the last week, which focused on chip design and seemed a little too specialized. Instead, I would have liked to learn more about some more advanced neuromorphic hardware, such as your WTA chip.
I also learned something about ii), insofar as in many cases it is not clear to me what additional insights an
implementation in aVLSI offers when one already has a mathematical model, e.g., of a neuron. For large scale
systems like the silicon retina, efficient simulation might not be possible anymore, but the question remains
what insights we gain from the aVLSI implementation that cannot be obtained from a simulation of a smaller
number of photoreceptors. I see the biggest opportunities for neuromorphic engineering in connection with applications; I was especially impressed with the real-time processing capabilities of Joerg's retina, for example.
Apart from the tutorials and the projects I was involved in (the projects especially were a crucial component of the "Telluride experience" for me; neuromorphic engineering seems so easy until one tries to do it oneself :) ...), I very much liked the format of the workshop, with the great variety of lectures in the morning and also the more informal elements, like the BBQs, that fostered so many inspiring interactions among the participants. I was impressed by the smooth organization of the whole event, including everything from the accommodations to the local computer network. A big thanks to the young turks for making the workshop such a memorable experience :) ...
Nici Schraudolph
The workshop was great! I wanted to learn more about working with Kheperas and Mark Tilden's robots, and promptly wound up in two collaborations involving Kheperas (both of which will continue) and two projects with Mark. My specific goal, to investigate navigation without directional control in a flying robot, was also realized, so the workshop fulfilled my expectations on all counts. In addition, the rich interactions with other participants were invaluable and resulted in several new impulses for my research program. For instance, I got interested in Wolfgang Maass' model of temporal coding in spiking neurons, and expect to work on efficient, biologically plausible learning algorithms for such systems soon.
My only critique would be that at times the schedule was so crowded that there wasn't enough time left for project work. It may also be a good idea to charter a flight from/to Denver at the times when everybody arrives/leaves, since getting seats was quite difficult. All in all, though, it was a great experience; my thanks to everyone who made it possible!
Mario Simoni
My goals for the workshop were to learn more about floating gate devices and to share and get some ideas
about how to implement learning mechanisms in aVLSI. I think my goals were met in terms of learning about
floating gate devices, except that it would have been nice to be able to actually play with some floating gate
transistors. I was able to get some ideas about learning and adaptation circuits from the "on-chip learning"
discussion group. There were other talks about higher level forms of learning from a behavioral perspective
which I enjoyed, such as those ideas presented by Jim Clark. I also benefited from meeting Thelma Williams
and talking to Avis Cohen about pattern generation in the lamprey.
I think next year it would be a good idea to have some talks about various ways that people are modeling
neurons in silicon and how they are using these models. Perhaps, though, this would make a better discussion
group rather than a formal talk.
I think this conference was fairly well organized and ran smoothly. (Larry [Rosen] did a great job, get
him again next year!) I liked the selection of talks, and I think I got a good understanding of the different
areas that people are working in. One thing that I think would help the workshops a lot is if we could prepare
for the projects before the workshop begins. This would mean a little more commitment from people, but I
think we could get a lot further while everyone was there. One of the things I puzzled over before coming
was how to prepare. I think if I had known more about what the locomotion project was going to be about
and been in contact with the project leader, I could have prepared things ahead of time and brought more
pertinent equipment and circuits.
I have thought of two people who could contribute a lot to the workshop next year: Larry Abbott and Nancy Kopell. Both are mathematicians who have done extensive modeling of various properties of neural systems.
Malcolm Slaney
I was only able to attend part of the last week of the Neuromorphic workshop this year. But what a week it
was. I was very impressed with the level of expertise and creativity that I saw.
My first reaction was that here are a lot of really smart people, working on hard problems (perception), with one hand tied behind their backs (analog VLSI). But after seeing the student demos at the end of the workshop I changed my mind. A large amount of work got done in a very short amount of time, and I'm not sure the same level of performance could have been achieved by conventional means.
The biggest surprise of the workshop, for me at least, was the large number of ’bots that were assembled.
Some were based on commercial test-beds. Others were put together with scotch tape and cardboard. A
number of audition and vision chips were available, and people cobbled together whatever they needed to make things work.
I gave a lecture on computational models of pitch. I think it was well received, and there were many good questions. I especially enjoyed the lectures by Christof (noise models of neuronal cables) and Nicola Ferrier (stability of real-time tracking). The last day's demonstrations of all the work were most valuable. It was a shame that I had to miss the preparatory work, but it was really good to see all the work that was accomplished in a few short weeks. The 'bots were especially interesting to me.
All of the students were interesting to talk to. But meeting and working with Andreas Andreou (speech
recognition models) and Paul Verschure (machine learning) was especially good. An unexpected connection
was the interest from Andreas and Shihab Shamma in my pattern playback tools. I expect we’ll work together
in the future.
I’m really glad that I got to participate and to see everybody’s work. I look forward to participating again.
Leslie Smith
I very much enjoyed being part of such a stimulating group of people working in the area of neuromorphic
systems. This was important for me, as I am not normally part of a large group of researchers. The meeting
was well organized, and the schedule tight.
My two interests are in early auditory processing and in a better understanding of real neurons and their place in real networks. I had many useful and interesting discussions with the groups working in auditory processing, although I did not find time to take a more active part in the interfacing of the auditory chips onto robotic systems; this was primarily because there were just so many things to do (see below).
I attended nearly all of the lectures given (as well as giving one). In retrospect, though this was very
interesting, it took away from the time I might have otherwise spent on project work. I attended the course
on analog VLSI design - indeed, this was very important for me, as I have a new research fellow starting
in the Autumn, and I wanted to be more conversant with this technology. I learned a great deal about this
technology, and about the design tools available. I also attended the first week of the floating-gate tutorial
group: this was interesting to me as I see this technology as underpinning adaptive aVLSI systems, and I
wanted to understand how it functioned.
I organized the prosthetics discussion group, as well as attending meetings of some of the other groups.
The prosthetics group was important to me, as I was (and remain) concerned about the need for specific
applications of neuromorphic technology. I consider that replacing damaged senses, or allowing people to
communicate directly with replacement limbs, is an important and feasible application of neuromorphic
technology. The report of this group has been submitted separately.
I believe that I gained a lot from the meeting: I met researchers whom I had previously met briefly at the EWNS1
conference, or who were until then only email addresses to me, and I was able to discuss issues which I
think are important in audition and in the silicon neuron with the leading experts in the field.
Peter Stepien
This was the first time that I have attended the Workshop. Overall everything was excellent! Being relatively
new to the emerging field of Neuromorphic Engineering, the Workshop was very informative and the work
groups provided a way to contribute constructively while at the same time learning. Below are outlined some
positive aspects of the Workshop followed by some suggestions for the future.
The web-based system for registering for the Workshop is very good. This is especially true since many
people are geographically far away. The mix of lectures, discussion groups and work groups was also good.
The length of the Workshop, although at first thought a little long, ended up being a good length, especially since
the project groups can take up a lot of time. The amount of equipment available for conducting the work groups
was astounding, considering the location of the Workshop. Having equipment available is very important,
since the work groups take up a majority of the time. The computer support and network connection were
flawless.
To ensure that the participants of the Workshop know what is going on right from the start, it would be
nice if the program for the full three weeks were finalized at the start. This could be done by extending
the web-based registration to include discussion group topics and time slots. The scheduling of the work
groups could also be done beforehand, with more detailed descriptions of the individual projects. This would save
time organizing them once the Workshop has already started. The mix of lectures was good, although it
would be nice to have more on the biology of the brain. These lectures were the most interesting, as they gave
insight into how we should be designing systems.
The Workshop is a great way to bring together so many people from different fields all working towards
a common goal. The organizers are to be commended for making it all possible. I hope that the Workshop
continues into the future and more people can benefit from it as I have.
Alan Stocker
This workshop was even better than the last one. There was much collective effort in realizing good projects
and creating teamwork. People showed good social skills (due to the world championship?), and therefore
there was always a friendly and comfortable mood 'in the air'.
I personally found a platform (a discussion group) to present my project and get responses from a variety of
people and experts, which is an important point in a grad student's life. Furthermore, I could compare solutions and
approaches - and, more importantly, improvements over the course of one year - with other people working in the same
problem field.
On the other hand, the workshop gave me an opportunity to work on a topic far outside my usual research, the
stereo-correspondence problem. There was a second project in which I was somewhat involved, since Thomas Netter used
one of my 1D motion chips for his 'flying cardboard box'.
I attended almost all of the morning lectures, except a few in the last week. This was mainly because the
projects had to come to an end and computer resources were quite limited.

Suggestions:
More PCs.
It would be a good idea to couple the lectures and the tutorials more closely, both in content and in
time. I suggest a 2-3 day block of morning lectures and afternoon tutorials/work-groups.
I would appreciate it if the important people could get more involved. This also depends on the time
available to them (see Terry), but the more the better.
Oh, before I forget: Giacomo and Timmer did a great job!
Mark Tilden
The original plan was to bring my largest robot to the workshop and, while it developed slowly, work out
collaborations with others to test out their eyes, ears, etc. as a prelude to a more systems-integrated
approach to neuromorphic physics. As with mice, the best-laid plans don't always pan out, and I instead spent
the first week trying to get the "Roswell" large walker operational for the parade, only to suffer successive
failures due to electric motor problems. At the same time I worked with the Locomotion group to plan out
the design of an autonomous robot lamprey for study. The first week was spent planning, the second week building,
and the third repairing, as the power supplies the robot ran from turned out to be rather twitchy. As a result, little
data was taken, but the robot will slither again.
Another project was the development of an interesting Braitenberg flying device which has strong kit
potential. Using the new micro-hextile boards, it turned out to have a broad range of interesting behaviors,
and with luck, will make for a flying-joust competition next year which should be cool.
As usual, collaborations were made, promises made for next year, and laughs and beer were consumed.
As usual, a good time.
Rufin Van Rullen
My overall feeling about the workshop is that it has been a strong and useful experience for everybody, or at
least for me.
From a more personal viewpoint, I have to point out that I am more a software- than a hardware-based
computer scientist. Therefore, what I expected from the workshop was mostly to be introduced to the field
of neuromorphic engineering, to learn the basic skills that are necessary to understand the work of other
people on silicon chips, hardware-implemented perception and behavior. These expectations have been fully
satisfied by the workshop. I think that it is important to keep a lot of introductory lectures and workgroups
in next year's workshops, so that people like me, with a great interest in the topic but little knowledge of it,
can come and enjoy working with more experienced researchers. I also hope you will keep the format of the
workshop: three weeks, though it might seem a long time at first glance, were just enough for me to
get familiar with the concepts and the tools that I had to use. Furthermore, such a format makes it possible to
create many personal and professional contacts, which is often difficult at week-long conferences or workshops.
In that sense, the informal gatherings such as hikes, volleyball games, or dinners are also an important part
of the workshop, and shouldn't be left aside.
Andre van Schaik
I came to the Telluride '98 workshop with some ideas for projects in which I would try to apply my analog
VLSI building blocks for the auditory pathway as smart sensors for robots (Koalas) in a real-world environment.
I normally do not have the opportunity to work with robots, and I discovered at the workshop that the
main problem is the interfacing between analog VLSI chips and the digital hardware of the robots. For this
reason I did not actually succeed in using the chips on the robot, but I started a project with the INI lab to
design a silicon cochlea that is easier to interface. A second goal of the workshop was to establish or
reinforce personal links with the people working in our domain. It is hard to meet everybody on a regular basis,
especially when living in Australia, but the 3 week workshop allowed me to have good discussions with most
of my peers. It furthermore resulted in some very interesting new contacts.
Charles Wilson
My goal for this workshop was, among other things, to work with Kwabena Boahen to develop an AER
sender board and an AER receiver board. I completed this project, but the sender board did not perform well.
It worked, but barely. The receiver board was finished on the last day of the workshop, so I did not have
enough time to test it properly. Kwabena and I will continue to collaborate on this project after the workshop,
as I intend to use his silicon retina as the front end to my future selective attention efforts. I also hoped to
work on the 1D AER stereo correspondence project, but did not have enough time to work with that group.
In general, I thought the lectures were outstanding and informative. I participated in the Floating Gate
and the AER tutorials, which were both quite good. However, the tutorial part of the AER group didn’t
start until later in the workshop, and we ran out of time to cover all the material. About the only criticism
I have of the workshop was that there just wasn’t enough time each day – with each morning taken up by
lectures, three to four hours each afternoon taken up by the tutorials, and then interesting discussion groups
and presentations in the evenings, there was very little time left for projects. Unfortunately, I can’t really
suggest a solution; I wouldn’t want to cut back on the lectures.
Thomas Zahn
There were two major reasons why I applied for the Telluride Workshop 1998. First of all, I wanted to improve
my knowledge in the field of neuromorphic implementations as well as the modeling of spiking networks.
Second, I wanted to compare my results to those of the experts and discuss my models with them. But I got
even more out of these days in Telluride.
What I enjoyed most was the personal contact that developed through the cooperative work of making these
robots move and during the (sometimes endless) discussions. As a result, I will start joint work with
Shihab Shamma's lab, exchanging biological findings and simulation systems for the auditory pathway,
with Andreas Andreou providing the front end to my system and Paul Verschure helping to further improve
the simulation of large spiking networks in audition. During long discussions with Wolfgang Maass,
and especially with Timmer Horiuchi and Giacomo Indiveri, I found many ways to compare my understanding of
neuromorphic modeling and to continue the exchange of ideas.
There are a good many things that I learned, or at least understood better, during the workshop, and they
will considerably influence my PhD thesis and my work afterwards. Listening to Rodney Douglas's late-night
lessons and to many scientists who have worked in the field for years, I could develop a general feeling for the
state of the art in neuromorphic modeling, and I found some good reasons to question and improve my own
models.
Another pleasant surprise, for me as a first-time participant, was the truly comprehensive course in applied
aVLSI, taught from the basics up to really useful implementations. Finally, I took the chance to set up a
discussion group on on-chip learning together with Mario Simoni. During its three meetings we had a good
chance to go into the details of how to implement learning synapses and of which learning algorithms for
spiking neurons could be biologically motivated.
The major advantage of this workshop for me was the opportunity to obtain ideas from biologists while at
the same time discussing effective ways of modeling and implementation in the shared language of neuromorphic
engineers. I have strongly recommended the workshop to my co-workers and will try to improve my results
so as to have the chance to come back next year.

Appendix A
Participants of the 1998 Workshop
Alphabetic list of everybody present at the workshop.
Organizers:
Avis Cohen, University of Maryland
Rodney Douglas, UNI/ETH Zurich
Christof Koch, Caltech
Terry Sejnowski, Salk Institute
Shihab Shamma, University of Maryland

Our Telluride Summer Research Center Liaison:
Larry Rosen, Telluride Academy

Technical Personnel:
Dave Flowers, Caltech
Reid Harrison, Caltech
Timmer Horiuchi, Johns Hopkins University
Giacomo Indiveri, UNI/ETH Zurich
David Klein, University of Maryland, College Park
Jorg Kramer, UNI/ETH Zurich
David Lawrence, UNI/ETH Zurich
Theron Stanford, Caltech

Participants:
Pamela Abshire
Ranit Aharonov, Hebrew Univ. at Jerusalem
Andreas Andreou, Johns Hopkins University
Asli Arslan, The University of Edinburgh
Randall Beer
Kwabena Boahen, UPenn
Elizabeth Brauer, Northern Arizona University
C. Phillip Brown, Neural Systems Lab, University of Maryland
Gert Cauwenberghs, Johns Hopkins University
Jim Clark, McGill University
Marc Cohen, Johns Hopkins University
Craig DeLancey, Indiana University Adaptive Systems Lab
Steve DeWeerth, Georgia Tech
Tobi Delbruck, INI, Zurich
Didier Depireux, Institute for Systems Research
Chris Diorio, The University of Washington
Timothy R. Edwards, Johns Hopkins University
Mete Erturk, Neural Systems Lab, UMCP
Nicola J. Ferrier, University of Wisconsin-Madison
Harald M. Fuchs, TU-Graz
Philipp Hafliger, Institute of Neuroinformatics, ETHZ/UNIZ, Switzerland
Paul Hasler, Georgia Institute of Technology
Chuck Higgins, Caltech
Martin Lades
Daniel D. Lee, Bell Laboratories
Shih-Chii Liu, Institute of Neuroinformatics, ETH/UNIZ
Yuri Lopez de Meneses, Laboratoire de Microinformatique (LAMI), EPFL
Wolfgang Maass, Technische Universitaet Graz
Brad Minch, School of Electrical Engineering, Cornell University
Ania Mitros
Regina Mudra, Institut fuer Neuroinformatik
Thomas Netter, University of Nice - Sophia-Antipolis
David Nicholson, CCNR, University of Sussex
Masahide Nomura, NEC Fundamental Res. Labs.
Timothy Pearce, Tufts University Medical School
Alberto Pesavento, Caltech
Philippe Pouliquen, The Johns Hopkins University
Maximilian Riesenhuber, Center for Biological and Computational Learning and Department of Brain and Cognitive Sciences
Eduardo Ros Vidal, University of Granada, Spain
Ralf Salomon, AI Lab, Dept. of Computer Science, University of Zurich
Nicol Schraudolph, IDSIA (Istituto Dalle Molle di Studi sull'Intelligenza Artificiale)
Jonathan Simon, Institute for Systems Research
Mario Simoni, Georgia Institute of Technology
Malcolm Slaney, Interval Research Corporation
Leslie Smith, University of Stirling
Nino Srour, US Army Research Laboratory
Peter Stepien, SEDAL, The University of Sydney
Alan Stocker, INI
Mark W. Tilden, Robotics Research Scientist
Rufin Van Rullen, CNRS
Andre van Schaik, University of Sydney
Paul Verschure, UNI/ETH Zurich
Thelma Williams, University of London (St. George's Hospital Medical School)
Charles Wilson, Georgia Institute of Technology
Andy Wuensche, Santa Fe Institute
Thomas P. Zahn, Technical University of Ilmenau, Germany

Appendix B
Hardware Facilities of the 1998
Workshop

Equipment available at the workshop, grouped by type; counts in parentheses indicate the number of identical units.

Computer:
Apple Macintosh IIv – Mac to run chip-testing equipment (6)
Clone 166 – IBM PC compatible computer
Clone 486-33 – IBM PC compatible computer; drives hardware programmers
Clone Pentium-Pro – IBM PC compatible computer
DEC HiNote – personal laptop
Generic Pentium Pro – from the Klab, with GPIB and A/D card
IBM ThinkPad 380XD – PC laptop
Intel Pentium computers – from the Klab (3)
Intel Pentium computers – from the Intel Lab in Moore (5)
Intel Pentium II (400 MHz) – from NASA (2)
Intel Pentium II – from CNS (2)
Laptop (make unspecified)
National Instruments PXI – CompactPCI-based rack system with DAQ and image acquisition
Pentium II – laptop
Pentium Pro (200 MHz) – with Micro DC30 and Premiere
Toshiba Satellite – personal laptop

Computer I/O Card:
National Instruments Lab-PC+ – general-purpose I/O card with a couple of analog and digital ports
ZISC-based digital neural network board

Electrometer:
Keithley 617 – electrometer for the chip-testing station (6)

Function Generator:
SRS DS340 – GPIB digital-synthesis function generator
HP ? – Klab function generator (2)
HP ? – function generator for the chip-testing station (6)

Monitor:
NEC MultiSync (various)
Video monitor – color NTSC monitor

Multimeter:
FLUKE ? – Klab Fluke (5)
Fluke 70 – Fluke for the testing station (6)

Oscilloscope:
HP – digital (150 MHz) for the chip-testing stations (6)
Tek TDS460 – 4-channel scope from the Klab
Tek 2445A – 2-channel scope from the Klab (2)
Tektronix – portable scope

Parallel Port Peripheral:
Connectix QuickCam (BW) – greyscale parallel-port camera
Timaginarium TX-1000 – AER research: 1-D retina board that reports edges and direction of motion

Power Supply:
HP ? – 6 V, +20 V, -20 V supply
HP E3610A – 12 V, current-limited
HP Brick – Klab (6)
HP Brick – single 12 V supply for the chip-testing station (6)
Unlabeled – 12-14 V, 6 A power supply
Unlabeled – 6 V, 5 A power supply

Printer:
? Laser printer

Programmable Voltage Source:
Keithley 230 – voltage source up to 101 V for the chip-testing station (6)

Serial Port Peripheral:
Directed Perception Pan-Tilt System – computer-controlled pan-tilt system
K-Team Khepera microrobot – robot
K-Team Khepera
K-Team Koala – mobile robot
Samurai robot with 1 to 3 grippers (2 degrees of freedom and one grip)

Visual Motion Stimulus:
Overhead projector with an LCD screen for a motion stimulus

Other:
Analog Devices – SHARC emulator board
Copy machine
Dataman 48LV – intelligent universal programmer
JHU – soldering and wire-wrap station
K-Team color and monochrome video turrets – cameras
K-Team Khepera gripper module
LAMI/CSEM panoramic and stereo turrets – artificial-retina-based turrets for the Khepera robot
Microchip PICSTART Plus – PIC programmer
Overhead projector
Pot boxes – chip-testing stations (6)
Retinas
Slide projector
Smart camera system with MAPP2200
SONY ? – 4-head VCR for use with the video camera
Spindler & Hoya – optical set-up for the artificial nose
TI Activator 2 – Actel FPGA programmer (with 84-PLCC adapter)
Toshiba PAL/NTSC VCR
Video camera – color camera without tape
? XR-7007 – vise (4)

Appendix C
Workshop Announcement
This announcement was posted on 1/22/98 to various mailing lists and to our dedicated Web site.
=======================================================================================================================

"NEUROMORPHIC ENGINEERING WORKSHOP"
JUNE 29 - JULY 19, 1998
TELLURIDE, COLORADO

Deadline for application is February 1, 1998.

Avis Cohen (University of Maryland), Rodney Douglas (University of Zurich and ETH, Zurich/Switzerland), Christof Koch
(California Institute of Technology), Terrence Sejnowski (Salk Institute and UCSD) and Shihab Shamma (University of
Maryland) invite applications for a three-week summer workshop that will be held in Telluride, Colorado from Monday,
June 29 until Sunday, July 19, 1998.

The 1997 summer workshop on "Neuromorphic Engineering", sponsored by the National Science Foundation, the Gatsby
Foundation and by the "Center for Neuromorphic Systems Engineering" at the California Institute of Technology, was an
exciting event and a great success. A detailed report on the workshop is available at
http://www.klab.caltech.edu/~timmer/telluride.html
We strongly encourage interested parties to browse through these reports and photo albums.

GOALS:

Carver Mead introduced the term "Neuromorphic Engineering" for a new field based on the design and fabrication of
artificial neural systems, such as vision systems, head-eye systems, and roving robots, whose architecture and design
principles are based on those of biological nervous systems. The goal of this workshop is to bring together young
investigators and more established researchers from academia with their counterparts in industry and national
laboratories, working on both neurobiological as well as engineering aspects of sensory systems and sensory-motor
integration. The focus of the workshop will be on "active" participation, with demonstration systems and hands-on
experience for all participants.

Neuromorphic engineering has a wide range of applications, from nonlinear adaptive control of complex systems to the
design of smart sensors. Many of the fundamental principles in this field, such as the use of learning methods and the
design of parallel hardware, are inspired by biological systems. However, existing applications are modest, and the
challenge of scaling up from small artificial neural networks and designing completely autonomous systems at the
levels achieved by biological systems lies ahead. The assumption underlying this three-week workshop is that the next
generation of neuromorphic systems would benefit from closer attention to the principles found through experimental
and theoretical studies of brain systems.

FORMAT:

The three-week summer workshop will include background lectures, practical tutorials on analog VLSI design, small
mobile robots (Koala), hands-on projects, and special interest groups. Participants are required to take part in, and
possibly complete, at least one of the projects proposed (soon to be defined). They are furthermore encouraged to
become involved in as many of the other activities proposed as interest and time allow.

There will be two lectures in the morning that cover issues that are important to the community in general. Because
of the diverse range of backgrounds among the participants, the majority of these lectures will be tutorials, rather
than detailed reports of current research. These lectures will be given by invited speakers. Participants will be
free to explore and play with whatever they choose in the afternoon. Projects and interest groups meet in the late
afternoons and after dinner.

The analog VLSI practical tutorials will cover all aspects of analog VLSI design, simulation, layout, and testing
over the three weeks of the workshop. The first week covers the basics of transistors, simple circuit design and
simulation. This material is intended for participants who have no experience with analog VLSI. The second week will
focus on design frames for silicon retinas, from the silicon compilation and layout of on-chip video scanners, to
building the peripheral boards necessary for interfacing analog VLSI retinas to video output monitors. Retina chips
will be provided. The third week will feature sessions on floating gates, including lectures on the physics of
tunneling and injection, and on inter-chip communication systems.

Projects that are carried out during the workshop will be centered in a number of groups, including active vision,
audition, olfaction, motor control, central pattern generators, robotics, multichip communication, analog VLSI and
learning.

The "active perception" project group will emphasize vision and human sensory-motor coordination. Issues to be
covered will include spatial localization and constancy, attention, motor planning, eye movements, and the use of
visual motion information for motor control. Demonstrations will include a robot head active vision system consisting
of a three degree-of-freedom binocular camera system that is fully programmable.

The "central pattern generator" group will focus on small walking robots. It will look at characteristics and sources
of parts for building robots, play with working examples of legged robots, and discuss CPGs and theories of nonlinear
oscillators for locomotion. It will also explore the use of simple analog VLSI sensors for autonomous robots.

The "robotics" group will use rovers, robot arms and working digital vision boards to investigate issues of
sensory-motor integration, passive compliance of the limb, and learning of inverse kinematics and inverse dynamics.

The "multichip communication" project group will use existing interchip communication interfaces to program small
networks of artificial neurons to exhibit particular behaviors such as amplification, oscillation, and associative
memory. Issues in multichip communication will be discussed.

LOCATION AND ARRANGEMENTS:

The workshop will take place at the Telluride Elementary School located in the small town of Telluride, 9000 feet
high in Southwest Colorado, about 6 hours away from Denver (350 miles). Continental and United Airlines provide many
daily flights directly into Telluride. All facilities within the beautifully renovated public school building are
fully accessible to participants with disabilities. Participants will be housed in ski condominiums, within walking
distance of the school. Participants are expected to share condominiums. No cars are required. Bring hiking boots,
warm clothes and a backpack, since Telluride is surrounded by beautiful mountains.

The workshop is intended to be very informal and hands-on. Participants are not required to have had previous
experience in analog VLSI circuit design, computational or machine vision, systems-level neurophysiology or modeling
the brain at the systems level. However, we strongly encourage active researchers with relevant backgrounds from
academia, industry and national laboratories to apply, in particular if they are prepared to work on specific
projects, talk about their own work or bring demonstrations to Telluride (e.g. robots, chips, software).

Internet access will be provided. Technical staff present throughout the workshop will assist with software and
hardware issues. We will have a network of SUN workstations running UNIX, and Macs and PCs running LINUX and
Windows95. Unless otherwise arranged with one of the organizers, we expect participants to stay for the duration of
this three-week workshop.

FINANCIAL ARRANGEMENT:

We have several funding requests pending to pay for most of the costs associated with this workshop. Unlike in
previous years, after notifications of acceptance have been mailed out around March 15, 1998, participants are
expected to pay a $250 workshop fee. In case of real hardship, this can be waived. Shared condominiums will be
provided for all academic participants at no cost to them. We expect participants from National Laboratories and
Industry to pay for these modestly priced condominiums. We expect to have funds to reimburse a small number of
participants for travel (up to $500 for domestic travel and up to $800 for overseas travel). Please specify on the
application whether such financial help is needed.

HOW TO APPLY:

The deadline for receipt of applications is February 1, 1998. Applicants should be at the level of graduate students
or above (i.e. post-doctoral fellows, faculty, research and engineering staff and the equivalent positions in
industry and national laboratories). We actively encourage qualified women and minority candidates to apply.

Applications should include:
1. Name, address, telephone, e-mail, FAX, and minority status (optional).
2. Curriculum Vitae.
3. One-page summary of background and interests relevant to the workshop.
4. Description of special equipment needed for demonstrations that could be brought to the workshop.
5. Two letters of recommendation.

=======================================================================================================================