Report to the National Science Foundation:
WORKSHOPS ON NEUROMORPHIC ENGINEERING
Monday, June 23 to Sunday, July 13, 1997
Terrence Sejnowski, Salk Institute & UCSD
Christof Koch, California Institute of Technology
and
Rodney Douglas, Institut für Neuroinformatik, Zurich
November 3, 1997

Contents

1 Summary
  1.1 Specific Lessons Learned
  1.2 Future Aims

2 Telluride 1997: The Details
  2.1 Applications to Workshops

3 Local Organization

4 Setup and Computer Laboratory

5 Workshop Organization: Lectures

6 Tutorials
  6.1 VLSI Basics (Liu, Indiveri and Kramer)
  6.2 Floating Gates (Hasler)
  6.3 Interchip communication (Boahen)

7 Interest Groups
  7.1 Robot Workgroup
    7.1.1 Teaching
    7.1.2 Projects
    7.1.3 Analysis of controllers
    7.1.4 aVLSI-CPG for four-legged robot
    7.1.5 aVLSI Learning Walking Robot
  7.2 Analog VLSI Learning Systems Workgroup
    7.2.1 Teaching
    7.2.2 Projects
    7.2.3 Adaptive Address-Event Router and Retinal-Cortical Maps
    7.2.4 Comparative Study of Reinforcement Learning Algorithms on Khepera
    7.2.5 On-Chip Learning on Walking Robots
    7.2.6 Adaptive Analog Liquid Crystal SLM Pixel Design
  7.3 Neurobots Workgroup
    7.3.1 Learning a task-based representation by error feedback on a mobile platform
    7.3.2 Evaluation of a biologically realistic learning rule, BCM, in an active vision system
    7.3.3 The construction of a model of Lamprey swimming behavior
    7.3.4 A model of locust avoidance behavior using an aVLSI retina mounted on a micro robot
    7.3.5 A comprehensive comparison of different Reinforcement Learning methods using real-world systems
    7.3.6 A control model for a microrobot for surface recognition
    7.3.7 Using aVLSI cochleas on a mobile robot
    7.3.8 Development of a hardware model of the rat's sensorium
    7.3.9 Conclusions
  7.4 Sensory-motor Workgroup
    7.4.1 Line Tracking Using a Low-Cost RC Car and a PIC16C74A µC
    7.4.2 Neuromorphic Tracker Using a Pan-Tilt System and a PIC16C74A µC
    7.4.3 Line Tracking Using Position and Motion Measurements on Koala
    7.4.4 Optical Flow Algorithm for Controlling Koala
    7.4.5 Silicon Retina to Koala interface using the AER protocol
    7.4.6 Line Tracking and Obstacle Avoidance using Koalas
  7.5 Interchip Communication for Multichip Neuromorphic Systems Workgroup
    7.5.1 Background
    7.5.2 Format
    7.5.3 Teaching
    7.5.4 Chip-Design Labs
    7.5.5 Discussions
    7.5.6 Projects
    7.5.7 Adaptive Address-Event Router and Retinal-Cortical Maps
    7.5.8 1D Retina to 1D Receiver and to Koala Robot
    7.5.9 On/Off Retina to Silicon Cortex
    7.5.10 Computation- and Memory-Based Projective Field Processors
    7.5.11 Cochlea to 1D Receiver and to Koala Robot
    7.5.12 Neuronal Networks on Silicon Cortex (SCX)
  7.6 Active Vision Workgroup
    7.6.1 Local Motion Detection
    7.6.2 Orientation Selectivity
    7.6.3 Smooth Optical Flow
    7.6.4 Future Projects
  7.7 Attention and Selection Discussion Group
    7.7.1 Computational Role of Selective Attention
    7.7.2 Neuromorphic Implementation
    7.7.3 Limits are good for you
    7.7.4 Winner Take All and Company
    7.7.5 Selective Attention and Beyond
    7.7.6 Critique
  7.8 Auditory Processing Discussion Group
  7.9 Locomotion Discussion Group
    7.9.1 Walking Robot
    7.9.2 Walking Robot with aVLSI CPG controller
    7.9.3 Theory demonstrated by aVLSI hardware
    7.9.4 System Implementation using Khepera Robots
    7.9.5 Future Projects
  7.10 Motion Discussion Group
    7.10.1 Talks
    7.10.2 Projects
    7.10.3 Goals for Next Year
  7.11 "Moving Towards Industry" Discussion Group
    7.11.1 Summing up the main points of the discussion
    7.11.2 Message to take home

8 Appendix 1: Participants of the 1997 Workshop

9 Appendix 2: Workshop Announcement

1 Summary
Neuromorphic engineering is a new field of engineering based on the design and
fabrication of artificial neural systems, such as vision chips, head-eye systems, and roving
robots, whose architecture and design principles are modeled on those of biological nervous
systems. The goal of our annual workshop is to bring together young investigators and more
established researchers from academia with their counterparts in industry and national
laboratories, working on both the neurobiological and the engineering aspects of sensory
systems and sensory-motor integration.
During three weeks in June and July of 1997, the fourth "Neuromorphic Engineering"
workshop was held at the Telluride Summer Research Center (TSRC) in Telluride, Colorado.
The workshop was directed by Profs. T. Sejnowski from the Salk Institute and UCSD,
C. Koch from Caltech, and R. Douglas from the University of Zurich and the ETH in
Zurich, Switzerland, and included 8 staff and technical assistants drawn from the various
laboratories involved. The workshop hosted 52 participants from academia, government
laboratories, and industry, whose backgrounds spanned physics, robotics, computer science,
neurophysiology, psychophysics, electrical engineering, and computational neuroscience (see
Appendix 1 for a complete listing).
1.1 Specific Lessons Learned
Telluride 1997 was a positive evolution of the previous workshops (see last year's report).
Given the somewhat crowded nature of the 1996 workshop, we reduced the total number of
participants to 63 (including all personnel). We stayed with the proven three-week format.
The content of the workshop continued to emphasize non-visual sensors and projects.
We further expanded the project groups, which now form a major part of the "Telluride"
experience. The two additional days of setup (five days, up from three in the previous year),
plus the two full-time systems administrators from our laboratories, proved of
immense help in having a fully functional computer network, with data-logging stations for
the integrated circuits, in place on time.
Because of the intense nature of the workshop, which ran more or less non-stop for 21 days
and usually far into the night, the cohesiveness and joint sense of purpose and methodology
in the neuromorphic engineering community have been considerably enhanced. A number
of long-term collaborations have emerged from the interactions at the workshop. From its
start at a handful of universities, in particular Caltech, neuromorphic engineering has
now grown to encompass a web of researchers at many universities and companies, both in
the US and in Europe.
Overall, the meeting was a resounding success, and everybody is looking forward to
Telluride '98 and beyond. Part of the success was due to the hands-on introductory tutorials
we offered to familiarize everybody with basic concepts, thanks to the heroic efforts of
Paul Hasler ("Floating Gates"), Shih-Chii Liu, Giacomo Indiveri and Jörg Kramer ("Basic
aVLSI"), and Kwabena Boahen ("Interchip Communication").
Another success story was the emergence of a small mobile robot equipped with a
Motorola 68331 processor as a common platform for neuromorphic engineers. Called
Koala, the robot is produced and marketed by K-Team of Lausanne, Switzerland (the
company also fabricates the smaller microrobot Khepera). Both systems are based on the
Motorola 68331 processor and are supported by a transparent programming interface that
is compatible between the two platforms. Ample possibilities are provided to interface
these robots with external devices using digital or analog ports. The robots can be used
either in a tethered mode or autonomously, by downloading cross-compiled programs over
a serial port.
Under the guidance of Paul Verschure, many of the Telluride participants managed to
implement their systems on one of the four Koalas we had brought with us. Indeed, as
detailed in the reports, we managed to port both visual and auditory systems onto
the same Koala. Many of the laboratories attending the Telluride workshop are planning
to buy these robots for their own use. This will lead to a modest degree of standardization,
which can only benefit the field as a whole.
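To illustrate the flavor of this tethered control, the sketch below formats wheel-speed commands in the style of the K-Team ASCII serial protocol. The "D,left,right" command form and the speed range used here are assumptions for illustration only; the authoritative command set is defined in K-Team's manuals.

```python
def set_speed_command(left, right):
    """Format a set-speed command in the style of the K-Team ASCII
    serial protocol ('D,left,right').  The command letter and the
    speed range are assumptions for illustration -- consult the
    K-Team manual for the real command set."""
    if not (-127 <= left <= 127 and -127 <= right <= 127):
        raise ValueError("wheel speeds out of range")
    return "D,%d,%d\n" % (left, right)

cmd = set_speed_command(5, -5)   # "D,5,-5\n" -- spin in place
```

In tethered mode such strings would be written to the robot's serial port; in autonomous mode the equivalent calls are compiled into the program downloaded to the robot.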
1.2 Future Aims
Our long-term aims for the Telluride workshop are the following:
- To stabilize the field by continuing to nurture cross-disciplinary collaborations. This
means keeping the project-oriented structure of the course.
- To keep growing the community by teaching graduate students, post-doctoral
fellows and others the necessary, inherently interdisciplinary, skills. Thus, we
will keep offering the (labor-intensive) basic tutorials in future years.
- To prevent intellectual stagnation by replacing the directors of the workshop with
younger researchers over the next years.
- On a more practical level, because the field of "neuromorphic engineering" has now
been firmly established, to begin charging participants of the Telluride workshop
a course fee, thereby reducing the expenses of the course.
2 Telluride 1997: The Details
Much of the basic organization for Telluride 1997 was identical to that of previous years,
except that everything was more streamlined.
2.1 Applications to Workshops
We announced the workshop via email to previous workshop participants as well as to
various mailing lists on January 10, 1997. It was sent to:
- a list that includes the Southern California neural network and neuromorphic
engineering community;
- an international mailing list with at least 1,000 subscribers in
the neural network/connectionist area;
- a mailing list for computational neuroscience, primarily the group
that attends the annual CNS Meetings in July.
The text of the announcement is listed in Appendix 2. We also announced the workshop
via our existing Telluride home page on the World Wide Web.
We received 59 applications, of which we selected 35 for the workshop. We also invited
a few key faculty directly to the course. The number of well-qualified applicants was high,
so many applicants who would have made good participants could not be accepted. The
selection of participants for the workshop was made by the three main organizers
(R. Douglas, C. Koch and T. Sejnowski), all of whom received copies of the full applications.
We selected participants who had demonstrated a practical interest in neuromorphic
engineering; had some background in psychophysics and/or neurophysiology; could
contribute to the teaching and the practicals at the workshop; could bring demonstrations
to the meeting; were in a position to influence the course of neuromorphic engineering at
their institutes; or were very promising beginners. Finally, we were very interested in
increasing the participation of women and under-represented minorities in the workshop,
and we actively encouraged applicants from companies. Travel for participants from
companies was paid by the companies.
The final profile of the workshop was satisfactory (see Appendix 1). The majority
of participants were advanced graduate students, post-doctoral fellows or young faculty;
three participants came from national laboratories and two from industry. Of the 52
participants, 34 were from US institutions, with the remainder coming from Switzerland
(5), Germany (4), Australia (3), the United Kingdom (2), Italy (1), Russia (1), Spain (1)
and India (1). Eight of the participants were women and three were African-American.
We reduced the Caltech contingent to 6 participants: the Co-PI, the system
administrator and two technical assistants (graduate students) from the laboratory of the
Co-PI, as well as two graduate students from the laboratory of Prof. Carver Mead who
organized the all-important tutorials.
3 Local Organization
The workshop itself took place in an old but beautifully renovated public school near the
center of town. We had four large rooms available for our workshop: one for the talks
and tutorials; one for the computers, an olfactory system and related equipment, and
the eye-movement tracking setup; one for the circuit testing beds; and one for building,
programming and experimenting with the rovers.
All the housing arrangements were carried out by the staff of the Telluride Summer
Research Center (TSRC). Because of their long-term contracts with local condominiums,
we obtained perfectly adequate housing at reasonable rates. The TSRC also rented the
school and provided us with a very able local assistant for Xeroxing, buying supplies, and
local public relations.
We interacted with the local community by marching under the banner of "Neuromorphic
Engineering" in the local 4th of July parade (we won second place for our imaginative
rendering of "Neuromorphic Engineers" as well as for our celebration of the simultaneous
landing of Pathfinder on Mars during the parade) and by giving three public talks and an
additional presentation aimed exclusively at children (between the ages of 8 and 14).
4 Setup and Computer Laboratory
We extended the setup time for our entire software/hardware environment, from the evening
of Tuesday, June 17 until Saturday, June 21. This gave us one free day (Sunday, June
22). This, and the fact that we brought, at our own cost, the systems administrators of two
of our laboratories with us, proved to be key.

Because we wanted the workshop to be interactive, we assembled a fully functional
internetworked computer laboratory connected to the Internet via a Switched-56
communication line. The computer system consisted of twenty-three computers: seven Sun
SPARC workstations, eight Pentium PCs, one SGI Indy, one PowerPC and six Macintoshes.
All computers were networked with one common server and two laser printers. A Sun
SPARC workstation was used as the server, providing file, print, and backup services,
along with all the general Internet services such as Web, ftp, email, and telnet. The rest
of the computers were divided into three usage areas. The first was a general computer
lab area of nine computers, mainly used for general tasks such as running simulations,
designing circuits, running demos, writing, and general Internet access. The second set of
computers (seven) was used to interface with the robotics equipment; depending on the
application, they generally collected data from the robots, processed it, and then used the
processed information to control the behavior of the robot. The remaining seven computers
were used in the VLSI test stations. We also ran networking cables to interconnect the
four classrooms used by the workshop, as well as networking cable within each of the
rooms, and had computer and networking equipment located in each of the four rooms.
Unlike at previous workshops, gaining Internet access in Telluride was not much of a
challenge. This year we were able to take advantage of the new infrastructure of the school,
which has a T1 connection to the Internet and twisted-pair Ethernet cables connecting the
different rooms of the schoolhouse. Colorado Supernet provided secondary Internet
Domain Name Service for the workshop.
Each participant at the workshop was given an account from which they could read and
send electronic mail and transfer demonstration programs, or operate them from their home
computers through the network. Standard software was also available, including various
simulation and design packages specifically requested before the workshop, such
as NEURON, GENESIS, ANALOG, L-edit, Matlab and LabVIEW.
Throughout the entire course, we supported workstations, robots, oscilloscopes and
other test devices and interfaces brought by all participants.
We also set up a World Wide Web site describing the workshop, its aims, its funding
and its participants (including their home pages). This Web site also includes information
about the 1994, 1995 and 1996 workshops. The information can be accessed at the URL:
http://www.klab.caltech.edu/~harrison/telluride/telluride.html.1
Overall, the computer lab was a success: all participants used the system to demonstrate
the software they work with and to teach others to use it as well.
A large van was rented in Pasadena, loaded with computers from various CNS laboratories
at Caltech, driven to the Salk Institute to pick up additional computers, and then driven
to Telluride. In Telluride, heavy-duty extension cords were strung from six
other rooms to provide enough power for all of the computers. At the end of the course,
these computers were returned via the van to La Jolla and Pasadena. Robots and some
computers were also shipped in from Zurich by air express.
1 We strongly recommend that the interested reader scan this home page. It contains many photos from
the workshop, reports, lists of all participants, etc.

5 Workshop Organization: Lectures
The activities in the workshop were divided into three categories: formal lectures, tutorials
and interest-group meetings.
The lectures were attended by all of the participants. They were presented at a
sufficiently elementary level to be understood by everyone, but were nevertheless
comprehensive. We found that two lectures, rather than three, per morning session were
optimal, giving adequate time for questions. The syllabus of the course was as follows:
Monday 23 June

* Arrive in Telluride
* Check-in to apartments
* Evening:
    o 17:00 Welcome cocktail party at Wendy Brook's house
    o 19:00 Orientation and introduction to workshop facilities (KOCH)

Tuesday 24 June

* Morning lectures:
    o Basic Biophysics and Neuron Models (KOCH)
    o Neurons in Silicon (DOUGLAS)
* Evening:
    o 19:30 Introductions by participants

Wednesday 25 June

* Morning lectures:
    o Neuromorphic analog VLSI for industrial applications (INDIVERI)
    o Neuromorphic Modeling in the Fly Visual System (LIU)
* Afternoon workshops:
    o 14:00 - 16:00 Basic aVLSI Course (VLSI Room)
    o 16:00 - 17:00 Further Participant Introductions
* Evening:
    o 19:30 Workshop/Course Organizational Meeting

Thursday 26 June

* Morning lectures:
    o Circuits in Neocortex (DOUGLAS)
    o Auditory Physiology I (SHAMMA)
* Afternoon workshops:
    o 13:00 - 15:00 Computerless Robots Lab Organizational Meeting (Lecture Room)
    o 14:00 - 16:00 Basic aVLSI Course (VLSI Room)
    o 16:00 - 17:00 Pathfinder Rover Talk (HARRISON) (Lecture Room)
* Evening:
    o 17:00 BBQ at Telluride Lodge
    o 20:00 - 21:00 Public Lecture: Spinal Cord Regeneration (COHEN)
    o 21:30 - 22:00 Auditory Processing Group Organizational Meeting (Lecture Room)

Friday 27 June

* Morning lectures:
    o Central Pattern Generators (COHEN)
    o Open
* Afternoon workshops:
    o 13:00 - 13:30 Interchip Communication Organizational Meeting (Lecture Room)
    o 13:00 - 15:00 Computerless Robots Lab (VLSI Room)
    o 14:00 - 16:00 Basic aVLSI Course (VLSI Room)
    o 14:00 - 16:00 Auditory Processing Lab (Robot Room)
    o 16:00 - 18:00 Interchip Communication Lecture -- Introduction (Lecture Room)
    o 18:00 - 18:30 Sensory-Motor Lab Organizational Meeting (Lecture Room)
    o 18:00 - 18:30 Learning in Neurobots Organizational Meeting (Lecture Room)
* Evening:
    o 19:30 Industrial Applications Discussion Group (Lecture Room)
    o 21:00 Active Vision Lab (Computer Room)

Saturday 28 June

* Morning lectures:
    o Auditory Physiology II (SHAMMA)
    o Audition in aVLSI (ANDREOU)
* Afternoon workshops:
    o 13:00 - 15:00 Computerless Robots Lab (VLSI Room)
    o 14:00 - 16:00 Sensory-Motor Systems Lab (Robot Room)
    o 14:00 - 16:00 aVLSI Learning Systems (Lecture Room)
    o 16:00 - 18:00 Interchip Communication Lecture -- Handshaking/Pipelining (Lecture Room)
* Evening:
    o 20:00 - 22:00 Interchip Communications Discussion Group (Lecture Room)

Sunday 29 June

* Free

Monday 30 June

* Morning lectures:
    o Silicon Sensors (DELBRUECK)
    o Representation of Odorant Molecular Structure by the Olfactory System from
      Biology to an Artificial Nose (KAUER)
* Afternoon workshops:
    o 13:00 - 15:00 Computerless Robots Lab (VLSI Room)
    o 14:00 - 16:00 Basic aVLSI Course (VLSI Room)
    o 14:00 - 16:00 Interchip Communication Design Lab (Computer Room)
    o 16:00 - 18:00 Floating Gate Lab -- Introduction to Floating Gates (Lecture Room)
* Evening:
    o 20:00 - 22:00 Learning in Neurobots (Robot Room)
    o 20:00 - 22:00 Motion Discussion Group (Lecture Room)
    o 20:00 - 22:00 Interchip Communication Design Lab (Computer Room)
    o 21:30 - ??:?? Auditory Processing Lab (Robot Room)

Tuesday 1 July

* Morning lectures:
    o Compact aVLSI Velocity Sensors (KRAMER)
    o Analog VLSI Model of the Fly Elementary Motion Detector (HARRISON)
* Afternoon workshops:
    o 13:00 - 15:00 Computerless Robots Lab (VLSI Room)
    o 14:00 - 16:00 Basic aVLSI Course (VLSI Room)
    o 16:00 - 18:00 Interchip Communication Lecture -- Arbitration (Lecture Room)
    o 16:00 - 18:00 Learning in Neurobots (Robot Room)
* Evening:
    o 20:00 - 22:00 aVLSI Learning Systems (Computer Room)
    o 20:00 - 22:00 Sensory-Motor Systems Lab (Robot Room)
    o 20:00 - 22:00 Locomotion Discussion Group (Lecture Room)

Wednesday 2 July

* Morning lectures:
    o Reinforcement Learning in aVLSI (CAUWENBERGHS)
    o Memory in Silicon Systems (HASLER)
* Afternoon workshops:
    o 13:00 - 15:00 Computerless Robots Lab (VLSI Room)
    o 14:00 - 16:00 Basic aVLSI Course (VLSI Room)
    o 14:00 - 16:00 Auditory Processing Lab (Robot Room)
    o 14:00 - 16:00 Interchip Communication Design Lab (Computer Room)
    o 16:00 - 18:00 Floating Gates Lab -- E-Pots (Lecture Room)
    o 16:00 - 18:00 Learning in Neurobots (Robot Room)
* Evening:
    o 19:00 - 20:00 Locomotion Discussion Group -- aVLSI CPG (PATEL) (VLSI Room)
    o 20:00 - 22:00 aVLSI Learning Systems (Computer Room)
    o 20:00 - 22:00 Motion Discussion Group (Lecture Room)
    o 20:00 - 22:00 Interchip Communication Design Lab (Computer Room)
    o 22:00 - ??:?? Selection, Attention, and Intention Group (meet in front of Lecture Room)

Thursday 3 July

* Morning lectures:
    o Place Cells in the Hippocampus and Neural Coding (SEJNOWSKI)
    o Blind Separation and Analysis of fMRI Data (SEJNOWSKI)
* Afternoon workshops:
    o 14:00 - 16:00 Basic aVLSI Course (Computer Room)
    o 14:00 - 16:00 Interchip Communication Protocol Discussion (Lecture Room)
    o 16:00 - 18:00 Floating Gates Lab (VLSI Room)
    o 16:00 - 18:00 Attention Discussion Group (meet in front of Lecture Room at 16:00)
* Evening:
    o 19:00 - 20:00 Locomotion Discussion Group -- Collective Project Discussion
      (CPG for 4-legged walking robot) (Gennady, Girish, Susanne) (Lecture Room)
    o 20:00 - 21:00 Public Lecture: Towards the Neural Basis of Consciousness (KOCH)
    o 21:00 - 23:00 Interchip Communication Lecture -- System Level (Lecture Room)

Friday 4 July

* 11:07 Pathfinder Lander/Rover Lands on Mars!
* Fun:
    o 11:00 Independence Day Parade & BBQ lunch
* Afternoon workshops:
    o 13:00 - 15:00 Computerless Robots Lab (VLSI Room)
    o 14:00 - 16:00 Basic aVLSI Course (VLSI Room)
    o 14:00 - 16:00 Auditory Processing Lab (Robot Room)
    o 14:00 - 16:00 Interchip Communication Design Lab (Computer Room)
    o 14:00 - 16:00 Learning in Neurobots (Lecture Room)
    o 16:00 - 18:00 Floating Gates Lecture (Lecture and Computer Room)
* Evening:
    o 19:00 - 20:00 Locomotion Discussion Group -- Designing a Robot Lamprey (Lecture Room)
    o 20:00 - 22:00 Learning in Neurobots (Robot Room)
    o 20:00 - 22:00 Interchip Communication Design Lab (Computer Room)
    o After Dark: Independence Day Fireworks

Saturday 5 July

* Morning lectures:
    o Biological Wide Dynamic Range Vision -- What Can We Build in Comparison (YADID-PECHT)
    o Multichip Large-Scale Neuromorphic Systems (BOAHEN)
* Afternoon workshops:
    o 13:00 - 15:00 Computerless Robots Competition
    o 15:00 - 16:00 Silicon Neuron Demo (RASCHE) (Robot Room)
    o 16:00 - 18:00 Sensory-Motor Systems Lab (Robot Room)
    o 16:00 - 18:00 aVLSI Learning Systems (Lecture Room)
* Evening:
    o Optional overnight mountain hike

Sunday 6 July

* Free day
* Evening:
    o 20:00 Review of progress (Lecture Room)

Monday 7 July

* Morning lectures:
    o Oculomotor Control Systems (SEJNOWSKI)
    o Neuromorphic Saccadic Systems (HORIUCHI)
* Afternoon workshops:
    o 14:00 - 16:00 Basic aVLSI Course (Lecture/Computer Room)
    o 14:30 - 16:00 Interchip Communication Protocol Discussion (Lecture Room)
    o 16:00 - 18:00 Learning in Neurobots (Lecture Room)
    o 16:00 - 18:00 Floating Gates Lab (VLSI Room)
* Evening:
    o 19:00 - 20:00 Locomotion Discussion Group -- Designing a Robot Lamprey (Lecture Room)
    o 20:00 - 22:00 Motion Discussion Group (Lecture Room)
    o 22:00 - 22:30 C. elegans Chemotaxis Robot Project Talk and Demo (MORSE) (Lecture Room)

Tuesday 8 July

* Morning:
    o 8:00 River Rafting Trip (am)
* Morning lectures:
    o Audition in aVLSI II (ANDREOU)
    o Translating Olfaction to Artificial Systems (WILSON)
* Afternoon:
    o 13:00 River Rafting Trip (pm)
* Afternoon workshops:
    o 13:00 - 15:00 Computerless Robots Lab (VLSI Room)
    o 14:00 - 16:00 Basic aVLSI Course (VLSI Room)
    o 14:00 - 16:00 aVLSI Learning Systems (Lecture Room)
    o 16:00 - 18:00 Sensory-Motor Systems Lab (Robot Room)
* Evening:
    o 17:00 - 20:00 BBQ at Telluride Lodge
    o 19:00 - 20:00 Locomotion Discussion Group -- Coupled Oscillators (Gennady) (Lecture Room)
    o 20:00 - 22:00 aVLSI Learning Systems (Computer Room)
    o 21:00 - 22:00 Learning in Neurobots (Robot Room)

Wednesday 9 July

* Morning lectures:
    o Selective Visual Attention (KOCH)
    o The Representation of Space in Cortex (ANDERSEN)
* Afternoon workshops:
    o 14:00 - 16:00 Basic aVLSI Course (Lecture/Computer Room)
    o 16:00 - 18:00 Floating Gates Lab (VLSI Room)
* Evening:
    o 19:00 - 20:00 Locomotion Discussion Group -- Final Presentations and Discussion (Lecture Room)
    o 20:00 - 22:00 Motion Discussion Group (Lecture Room)
    o 20:00 - 22:00 aVLSI Learning Systems (Computer Room)
    o 22:00 - 23:00 ICA for Beginners (LEE) (Lecture Room)

Thursday 10 July

* Morning lectures:
    o Attention and Intention in Parietal Cortex (ANDERSEN)
    o Reconfiguration in Field-Programmable System on Chip (MADRENAS)
* Afternoon workshops:
    o 12:00 Synchronization of Spikes in Visual Processing Discussion Group
      (meet in front of Lecture Room)
    o 12:00 The Future of Analog VLSI Motion Processing Discussion Group
      (meet in front of Lecture Room)
    o 14:00 - 16:00 Basic aVLSI Course (Computer Room)
    o 14:00 - 16:00 aVLSI Learning Systems (Lecture Room)
    o 16:00 - 17:00 Olfaction and Sensorimotor System Demo, aka Sniff and Locate
      (WILSON/BLOM) (Lecture/Computer Room)
* Evening:
    o 17:00 - 20:00 BBQ
    o 19:00 - 20:00 Locomotion Discussion Group -- Final Presentations and Discussion (Lecture Room)
    o 20:00 - 21:00 Public Lecture: Mars Pathfinder and the Future of Robots in
      Space Exploration (HARRISON)
    o 21:00 - 22:00 Discussion on Optimization with Neural Networks (JAYADEVA) (Lecture Room)
    o 22:00 - 23:00 Synchronization of Spikes in Visual Processing Discussion (Lecture Room)

Friday 11 July

* Morning:
    o Group presentations/demos
* Afternoon:
    o Close labs, pack equipment
    o 18:00 ALL COMPUTERS IN ROBOT ROOM ARE SHUT DOWN!
* Evening:
    o 21:00 Dinner with Award Ceremony at Rustico

Saturday 12 July

* Morning & Afternoon:
    o 11:00 ALL COMPUTERS ARE SHUT DOWN!!!
    o Individual group reports

Sunday 13 July

* Check-out and departure

6 Tutorials
The three tutorials were an opportunity for participants to acquire hands-on experience with
the fundamental technology upon which neuromorphic engineering is based. They were
crucial for transferring practical knowledge among the participants, especially to newcomers
who lacked basic background but were eager to learn.
6.1 VLSI Basics (Liu, Indiveri and Kramer)
This practical included lectures, experiments on single transistors, and experience with
layout, design rules and analog simulation. Most participants found that the three days
allocated to this practical were insufficient to develop facility with the material, although
they reported learning enough to go off and study on their own.
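Much of this lab material revolves around transistors operated in the subthreshold (weak-inversion) regime, where drain current grows exponentially with gate voltage. As a toy numerical model of that relationship (the parameter values below are illustrative, not measurements from the course chips):

```python
import math

def subthreshold_current(v_g, i_0=1e-15, kappa=0.7, u_t=0.0258):
    """Weak-inversion saturation current, I = I0 * exp(kappa * Vg / UT).
    i_0 (current scale), kappa (gate coupling coefficient) and u_t
    (thermal voltage at room temperature, in volts) are illustrative
    values; real devices must be characterized individually."""
    return i_0 * math.exp(kappa * v_g / u_t)

# Current grows e-fold for every u_t/kappa (about 37 mV) of gate swing:
ratio = subthreshold_current(0.437) / subthreshold_current(0.400)
```

This exponential current-voltage relation is what makes subthreshold circuits such a natural substrate for neural models, and measuring it on single transistors is the first exercise of the practical.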
6.2 Floating Gates (Hasler)
This practical consisted of extensive lectures on the physics of hot-electron injection,
tunneling, and high-voltage circuits for the control of floating gates. An excellent collection
of notes was provided. Experiments were done with floating-gate chips, and the use of
pulse modulation for incremental control of the floating-gate voltage was investigated.
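The pulse-modulation idea can be caricatured in software: each tunneling pulse nudges the floating-gate voltage up and each injection pulse nudges it down by a roughly fixed increment, so a target voltage is approached by counting pulses. This is a deliberately crude sketch (real per-pulse increments depend on oxide fields and drain conditions, none of which are modeled here), with the step size chosen arbitrarily:

```python
def pulse_program(v_fg, v_target, delta_v=0.01, max_pulses=10000):
    """Step a floating-gate voltage toward a target with fixed-size
    pulses: tunneling pulses raise v_fg, injection pulses lower it.
    delta_v is an assumed per-pulse increment; it also sets the
    resolution of the final voltage.  Returns the final voltage and
    the number of pulses applied."""
    pulses = 0
    while abs(v_fg - v_target) > delta_v and pulses < max_pulses:
        v_fg += delta_v if v_fg < v_target else -delta_v
        pulses += 1
    return v_fg, pulses

v, n = pulse_program(v_fg=2.0, v_target=2.5)
```

The practical investigated exactly this trade-off: smaller pulses give finer control of the stored voltage at the cost of more programming pulses.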
6.3 Interchip communication (Boahen)
Kwabena Boahen gave a comprehensive series of lectures on pulse generation and
integration, formalisms for self-timed communication protocols, and intra-chip arbitration.
One-dimensional retinal sender-receiver systems were available for hands-on investigation,
as well as two-dimensional systems for demonstration. Layout and silicon compilers for
one-dimensional scanners and receivers are available by anonymous ftp from Caltech
(hobiecat.pcmp.caltech.edu). Adrian Whatley and Rodney Douglas brought their Silicon
Cortex system, which allows multiple address-event (AER) neuromorphic chips to be
connected. In their demonstration, Whatley connected a 2D retina sender to a multi-neuron
chip and programmed the chip-to-chip connectivity to obtain orientation selectivity.
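The core idea behind these sender-receiver systems, the address-event representation (AER), can be sketched in a few lines of software: a spike is transmitted simply as the address of the neuron that fired, with timing carried implicitly by when the address appears on the bus. The sketch below is a software-only illustration of the encoding, not the asynchronous handshaking and arbitration circuitry covered in the lectures:

```python
def aer_encode(spikes):
    """Encode (time, neuron_address) spikes as a time-ordered stream
    of bare addresses -- spike timing is implicit in when each
    address appears on the shared bus."""
    return [addr for t, addr in sorted(spikes)]

def aer_decode(stream, n_neurons):
    """Rebuild per-neuron event counts from the address stream."""
    counts = [0] * n_neurons
    for addr in stream:
        counts[addr] += 1
    return counts

# Three neurons; neuron 1 fires twice.
stream = aer_encode([(0.1, 1), (0.3, 2), (0.2, 1)])
counts = aer_decode(stream, n_neurons=3)   # [0, 2, 1]
```

On real chips, the arbitration circuits discussed in the lectures decide which address is transmitted when two neurons fire simultaneously; here, sorting by timestamp stands in for that.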
7 Interest Groups
The interest-group meetings gave people with common interests a chance to get together
and discuss their area in detail, to establish the pressing questions in that area, to determine
the state of the art, and to make plans for future developments. We include reports from
these groups below. Some of the groups spent many nights agonizing over technical points
that are only briefly summarized in the following.
7.1 Robot Workgroup
Organizers: Mark Tilden and Susanne Still
Seven solar-powered wheeled robots and 5 reconfigurable walking robots of different
complexity were provided. All these machines were designed such that it was easy for
participants to change their behavior by changing determining parameters such as time
constants, coupling resistances and sensory coupling strengths. The walking robots also
offered the possibility of changing the complexity of the controller by using a variable
number of coupled oscillators. Facilities to build simple walking robots were also available.
7.1.1 Teaching
About 20 to 25 people attended this workshop. The principles of the technique used in
the robots were explained in detail. The participants were given the full layout of the solar
rovers and instructed in how to modify the controllers by changing the parameters mentioned
above. Everyone got a chance to work on the robots, to fully understand the technique by
observing how changes in the controller lead to changes in the behavior of the robot.
About 50% of the attendees progressed to investigation of the walking robots and
were instructed in how to modify these. Here not only could the controllers be changed,
but some machines were also capable of having their mechanics (leg designs) changed. This
gave people the opportunity to learn not only about the types of controllers that can drive
legged machines, but also about the mechanics necessary for legged locomotion.
Charles Higgins built a simple 4-legged robot using the basic controller [1] we offered, and
a simple mechanical design roughly replicating one of Tilden's existing machines. Higgins
quickly became familiar with the control as well as the mechanical problems involved in building
such a machine, and was able to solve most of them during the workshop.
Jayadeva was also instructed in building a 4-legged robot. Although he didn't finish
it during the workshop, he is going to do so in the future and is taking ideas as well as
hardware from the workshop lab with him.
7.1.2 Projects
The projects that evolved this year were partly grounded in work done at the 1996 Telluride
Workshop.
7.1.3 Analysis of controllers
Participants: S. Still and M. W. Tilden
As a continuation of the collaborative work started at the 1996 Telluride Workshop,
more complex controllers recently developed by Tilden were analyzed mathematically by
Still, and the predicted results were compared to data taken from the controllers to test the
validity of the approach. Figure 1 shows a controller consisting of 3 oscillating units, one of
which is called the master oscillator. This unit has an independent resistor (R), while
the others (called slave oscillators) are coupled to the master through resistors ($R_1$ to $R_4$)
that determine not only the phase difference between the oscillators but also the duty cycle
of the slaves. The slave oscillators are coupled in a serial fashion and more can be added
likewise. It is possible to calculate the frequency of the master oscillator, which determines
the frequency of the slave oscillators, from the differential equations for the voltages. The phase
difference between master and slave oscillator depends on the coupling resistance, and the
duty cycle of a slave oscillator on the ratio between the coupling resistances. These values
can also be obtained using the solutions to the differential equations. As an example we
show how the frequency of the master oscillator can easily be calculated and
how the measured data compares to the result.
The differential equations for $V_1$ and $V_2$,
$$\dot V_i + \frac{1}{\tau} V_i = \frac{1}{\tau} V_j + \dot V_i^{in}$$
Figure 1: 'Unicore' controller. Used for control of 2 motor devices with local and distal
sensory input.
Figure 2: Frequency of 'master oscillator' (in Hz, 0-100) versus its resistance (in MOhm, 0.5-2).
(where $\{i, j\} = \{1, 2\}$ and $\tau = RC$) yield the time during which the input voltage of
the inverters is above threshold:
$$T = \frac{1}{2} RC \ln \frac{V_i^{max}}{V_i^{th}}$$
(where $V_i^{max}$ is the maximum voltage at $V_i$ and $V_i^{th}$ the threshold voltage of inverter $i$,
$i = 1, 2$). This time is half the signal period and we therefore obtain the frequency
$$\nu = \frac{1}{RC \ln(V^{max}/V^{th})}$$
(assuming that $V_1^{max} = V_2^{max}$ and $V_1^{th} = V_2^{th}$).
Figure 2 shows the predicted curve (line) and the measured data (circles).
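The frequency formula above can be checked numerically. In the sketch below, the component values $V^{max}$ = 5 V, $V^{th}$ = 2.5 V and C = 0.1 uF are invented for illustration; the report only gives the resistance axis of Figure 2.

```python
# Evaluate nu = 1 / (R C ln(Vmax / Vth)) over the resistance range of
# Figure 2 (Vmax, Vth and C are assumed values, not taken from the report).
import math

def master_frequency(R, C, v_max, v_th):
    """Predicted master-oscillator frequency in Hz."""
    return 1.0 / (R * C * math.log(v_max / v_th))

# Sweep R over the 0.5-2 MOhm range of Figure 2; frequency falls off as 1/R.
C = 0.1e-6
freqs = [master_frequency(R * 1e6, C, 5.0, 2.5) for R in (0.5, 1.0, 1.5, 2.0)]
```

With these assumed values the predicted curve decreases monotonically with R, matching the qualitative shape of Figure 2.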
Since the capacitances are kept constant, the entire behavior of the robot depends
only on the values of the resistances and can also be controlled thereby. R changes the
frequency of the oscillators and this changes the frequency of the leg movements. The
coupling resistances change the phase lags between the oscillators, which changes the phase
differences between the legs and therefore the gait of the robot. The ratio $R_{c_i}/R_{c_{i+1}}$ sets the
duty cycle of slave $i$ and can therefore determine turning behavior. The detailed influence
of the resistive values on the behavior depends on how the controller is used in a particular
machine and also on how many coupled oscillators and how many motors are used. In any
case, though, a minimal number of parameters is sufficient to control and describe a walking
machine created with this technology.
7.1.4 aVLSI-CPG for four legged robot
Participants: G.Cymbaluk, G.Patel and S.Still
Figure 3: Robot driven by an aVLSI implementation of a Central Pattern Generator (CPG)
using Morris-Lecar neuron circuits: (a) lateral view, (b) top view.
Patel brought aVLSI Morris-Lecar neurons to the workshop. We intend to use these
neurons to implement a CPG for an artificial lamprey. Four of these oscillating neurons were
used as a CPG for a four-legged walking robot that was built at the workshop by Still using
Tilden's technology. The robot had one degree of freedom per leg and was mechanically
capable of expressing any desired gait. Figure 3 shows a picture of it. The possible phase
lags between the Morris-Lecar neurons, depending on the coupling strength, were studied
by Cymbaluk and Patel. These phase differences directly control the walking gaits of the
robot. At the workshop basic gaits like trot and pronk were induced. This collaboration
will continue in the future. We want to study gait transitions and the problems involved (for
instance, stability).
7.1.5 aVLSI Learning Walking Robot
Participants: G. Cauwenberghs and S. Still
Since the technology introduced by Tilden uses very few parameters to control direction,
walking gait and speed, a controller in this style could be used as a translator between
parameters (voltages) and behavior. We propose implementing a learning algorithm in
aVLSI that enables the walking robot to optimize its velocity and stability as well as its
battery charge while navigating through a given environment (obstacle and cliff avoidance,
and light-source seeking if solar cells are used for recharging the battery). The plan for this
project, which is to be realized after the workshop, is to first build an aVLSI walking
controller based upon previous work done by Still and Tilden that contains A/D and D/A
converters provided by Cauwenberghs, and to put this controller onto a four-legged walking
robot. The robot can then be interfaced to a computer, and simulations of several learning
algorithms can be systematically tested and optimized. After this procedure the optimal
solution will be implemented in an aVLSI chip (also containing the original walking
controller), which will then be responsible for the robot's ability to learn and its performance,
making it truly autonomous.
7.2 Analog VLSI Learning Systems Workgroup
Organizers: Gert Cauwenberghs and Marwan Jabri
Mechanisms of learning and adaptation are as important to the design of engineered
neuromorphic systems as they are to biological systems, which are known to perform robustly in
variable and unpredictable environments. Furthermore, adaptation in analog VLSI provides
a means to compensate for component mismatches, noise, and other sources of error in the
physical implementation, and thus allows precise computations to be performed with imprecise
components.
The goal of this workshop was to cover both the basics of memory, adaptation and
learning in aVLSI, as well as more advanced aVLSI design projects and live demonstrations
of on-line learning in neuromorphic VLSI systems.
This workgroup met for two weeks at the workshop, in the form of lectures, laboratory
tutorials and cross-disciplinary projects in collaboration with other workgroups. Attendance
of the lectures and tutorials varied between 5 and 15 people. Five of the members
were active in the cross-disciplinary projects.
7.2.1 Teaching
The lectures covered aVLSI learning, adaptation and memory in three afternoon sessions:
- Learning algorithms and scalable architectures
- VLSI technology and implementation
- System examples
There were also morning lectures on "reinforcement learning in aVLSI" and "memory
in silicon systems."
Laboratory tutorials involved comparing the performance of learning algorithms by
computer simulation, physical layout of VLSI learning cells, and experiments with integrated
learning circuits. Experiments were conducted on ACTS, an Analog Circuit Test Station
for multi-channel computer-controlled signal sourcing and acquisition. Chips used included
a reinforcement learning mean-rate coding modulator, and a chip with test structures of
analog memories and adaptive circuit elements.
The layout of the VLSI circuits used in the tutorials is available by ftp from bach.ece.jhu.edu,
pub/gert/chips. Simulation software and other files are available as well (see the home pages
of the organizers).
7.2.2 Projects
In the spirit of the workshop, projects were collaborative and cross-disciplinary. Two types
of projects were undertaken: system-level projects coordinated with other workgroups at the
workshop, and long-term collaborative initiatives of VLSI designs to be fabricated later and
used at the next neuromorphic workshop. The current results of these projects are described
below. All of these projects will be continued by the participants after the workshop.
7.2.3 Adaptive Address-Event Router and Retinal-Cortical Maps
Participants: D. Klein, T. Stanford, G. Cauwenberghs, K. Boahen, A. Andreou
and T. Hinck
(See also the Interchip Communication Workgroup)
Address-event representations are an effective means to communicate neural activity and
implement large-scale connectivity over a digital bus. What has been lacking is a way to
use the communicated addresses to learn connectivity from sender to receiver. This project
concerns the design of adaptive routing algorithms and their implementation in asynchronous
digital hardware. As a proof of concept, we defined the task of learning a one-dimensional
mapping from sender to receiver which reconstructs the topology of the feature space from
a scrambled representation communicated by the sender. This can be thought of as a
(1-D) LGN-to-cortical connectivity map learned so as to preserve retinal topography.
Our approach exploits temporal correlations in the address space to extract spatial
correspondence in the feature space; it assumes that stimuli are continuous.
We have investigated a class of algorithms that operate only on the receiver address domain
and that use spatial and temporal information in various ways. These algorithms successfully
reconstruct the hidden spatial order for the simple task that we targeted, and continued
work will generalize this to tasks with discontinuous and spatially distributed
stimuli, and to higher dimensions.
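A toy version of the temporal-correlation idea can be sketched in a few lines. The stimulus sweep, scrambling permutation, and greedy chaining below are illustrative assumptions, not the algorithms actually implemented by the group (which ran on SRAM plus a microcontroller).

```python
# Recover the hidden 1-D order of scrambled bus addresses from the fact
# that a continuous stimulus activates spatially adjacent senders in
# consecutive time slots.
import random

def cooccurrence(events, n):
    """Count how often two addresses fire in consecutive time slots."""
    C = [[0] * n for _ in range(n)]
    for a, b in zip(events, events[1:]):
        if a != b:
            C[a][b] += 1
            C[b][a] += 1
    return C

def recover_order(C):
    """Greedy chain: start from the weakest-connected address (an endpoint)
    and repeatedly hop to the most co-active unvisited address."""
    n = len(C)
    start = min(range(n), key=lambda i: sum(C[i]))
    order, seen = [start], {start}
    while len(order) < n:
        cur = order[-1]
        nxt = max((j for j in range(n) if j not in seen),
                  key=lambda j: C[cur][j])
        order.append(nxt)
        seen.add(nxt)
    return order

random.seed(0)
n = 8
perm = list(range(n))
random.shuffle(perm)                      # unknown wiring: position -> bus address
sweep = list(range(n)) + list(range(n - 2, 0, -1))   # stimulus sweeps back and forth
events = [perm[p] for p in sweep * 20]    # scrambled address-event stream
order = recover_order(cooccurrence(events, n))
# 'order' lists bus addresses in recovered spatial order (up to reversal).
```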
We have started the hardware design of a prototype adaptive sender-receiver router in
asynchronous digital logic, using a combination of SRAM memory and a microcontroller.
7.2.4 Comparative Study of Reinforcement Learning Algorithms on Khepera
Participants: M. Jabri, G. Cauwenberghs and P. Verschure (See also the Neurobots
Workgroup)
Learning algorithms for sensory-motor integration, such as reinforcement learning, are
seldom tested in real-world environments such as on autonomous robots. The goal of this
workgroup was to benchmark reinforcement-based algorithms on Khepera, a small
commercial navigating robot with a computer interface which offers an ideal test-bed for such
experiments. We defined a simple task of learning (from scratch) to navigate towards a
light-source target while avoiding obstacles. Khepera contains an array of 8 IR proximity
sensors which also provide 8 light-sensor inputs. Two motors control the navigation in
forward, backward, left and right directions.
For state-space encoding of the sensory inputs, we experimented with learning vector
quantization and other unsupervised techniques. A straight linear threshold map from
inputs to action outputs seemed most effective while remaining simple. We then applied
predictive Hebbian learning and Barto-Sutton reinforcement learning algorithms to the task
of obstacle avoidance in forward motion. Wall sensors and backward-motion detectors
implemented the negative reinforcement signal. Initial experiments indicate that the learning
behavior is complex and fails to maintain the task objectives over an extended period of time.
Further experiments at Zurich, Salk and Hopkins will compare other reinforcement-type
and classical conditioning algorithms on Khepera.
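The linear threshold map with a reward-modulated update can be sketched as follows. The sensor values, learning rate and reinforcement signal are invented for illustration; the report does not give the exact rule parameters used on the Khepera.

```python
# Toy sketch: linear threshold sensor-to-motor map plus a generic
# reward-modulated (Barto-Sutton-flavored) weight update.
def motor_output(weights, sensors):
    """Linear threshold map from the 8 sensor inputs to a motor command:
    +1.0 = keep driving forward, -1.0 = back off / turn away."""
    s = sum(w * x for w, x in zip(weights, sensors))
    return 1.0 if s > 0.0 else -1.0

def reinforce(weights, sensors, action, reward, lr=0.2):
    """Strengthen or weaken the sensor->action association in proportion
    to the (positive or negative) reinforcement signal."""
    return [w + lr * reward * action * x for w, x in zip(weights, sensors)]

# Toy episode: a wall on the left (high readings on IR sensors 0-2); the
# robot is initially biased forward, and the collision delivers reward = -1.
sensors = [1.0, 0.8, 0.6, 0.0, 0.0, 0.0, 0.0, 0.0]
w = [0.1] * 8
a = motor_output(w, sensors)              # drives forward into the wall
w2 = reinforce(w, sensors, a, reward=-1.0)
# After the punishment, the same sensor pattern evokes the avoidance command.
```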
7.2.5 On-Chip Learning on Walking Robots
Participants: S. Still and G. Cauwenberghs
(See also the Robots Workgroup)
Simple walking robots (such as Tilden's) exhibit complex walking behavior that can be
modulated through control of synaptic inputs to neural oscillators that model the spinal
cord. The control of walking gaits through adaptation of conductances and timing
parameters in the motor neurons is the objective of this VLSI design project.
In the initial phase of this project, a VLSI chip is being developed which will integrate
all sensory inputs and neural oscillators as well as a digital interface to a host computer
for external programming and adaptation of the parameters. This chip will be mounted on
a walking robot. In a second, future phase, we plan to replace this chip with a
pin-compatible chip which integrates local learning of the parameters, for autonomous operation.
At present, we have designed aVLSI circuits functionally equivalent to the discrete circuits
presently implemented on Mark Tilden's walking robot designs. Modular cells for A/D and
D/A conversion are available and will be integrated in the first chip now being developed.
7.2.6 Adaptive Analog Liquid Crystal SLM Pixel Design
Participant: Xu Shaw Wang
Analog liquid crystal spatial light modulators require local analog storage of the voltage
driving the liquid crystal pixels, and fast write times. VLSI techniques for long-term analog
capacitive storage will be applied in a new pixel design for a digitally programmable, analog
LC SLM.
This project was initiated but could not be completed at the workshop because of the
emergency early departure of the participant.
7.3 Neurobots Workgroup
Organizer: Paul Verschure
The working group focused on the exploration of issues of learning and visuo-motor
integration on mobile robots using traditional or neuromorphic technology. To support
these activities 4 setups of the micro-robot Khepera (including CCD and gripper modules)
and 4 larger mobile platforms, Koala, were brought to the workshop (with the support of
the Swiss producer of these robots: K-team, Lausanne). The Koalas were also the main
platform for the activities in the sensory-motor group. The aims of the group were to
provide a hands-on introduction to the use of robots in the study of neural systems and
the evaluation of neuromorphic devices. Participants were also encouraged to develop their
own projects.
In order to achieve these goals, several afternoons were allocated for participants
to work with Khepera, either using the LabVIEW environment or custom C programs. In
addition, a number of example programs were provided to start up individual projects. In
total 9 individual projects were pursued, in which 14 participants collaborated
individually or in groups.
7.3.1 Learning a task-based representation by error feedback on a mobile platform
Participants: F. Hamker and P. Verschure
In this project we wanted to demonstrate the applicability of learning a task-based
representation by error feedback on a mobile platform, using a learning method which adapts
the number of nodes in a network to the task requirements. It was based on earlier modeling
work by Hamker. This learning method takes into account that representations found in
biological systems often organize according to the task. The exact learning procedure does not
claim to reflect the properties of learning mechanisms found in the brain. The main
components of this learning algorithm are:
- a clustering of the sensor input through the error feedback generated by the task;
- the learning method can in principle be unsupervised, reinforcement based, or
supervised (at this time, however, a supervised task was considered);
- the weights, implementing the receptive fields of the units, as well as the number of
units, change as a result of the combination of sensory input and the feedback generated
by the task;
- the representation formed by the developing network stabilizes itself when the
environment is stable, but remains flexible in order to adapt to changes in the environment.
It is therefore capable of "lifelong" learning (no distinction between learning and
recognition or generation of action needs to be made).

Figure 4: Error versus number of nodes.
The task was to learn to avoid colliding with obstacles. Khepera was put into an
environment and every collision induced the supervised training of an appropriate motor
output. In areas without obstacles the robot was forced to go straight. Figure 4 shows
that with an increasing number of nodes added to the network (up to 2000) the error in
obstacle avoidance steadily decreases. It turns out that the algorithm learns very fast and
stabilizes itself during the task. Further research on this algorithm should concentrate
on a combination with reinforcement learning techniques in order to increase its field of
applicability to many different learning tasks in neuromorphic systems.
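The growth mechanism can be illustrated with a minimal nearest-prototype toy version. Hamker's actual method also adapts receptive-field sizes and ran on Khepera's real sensor readings; the data and labels below are invented.

```python
# Sketch of a network that grows nodes under error feedback: wrong
# predictions recruit new nodes (task-driven clustering), correct ones
# refine the winning node's receptive field.
def nearest(nodes, x):
    """Index of the prototype closest to input x, or None if no nodes yet."""
    if not nodes:
        return None
    return min(range(len(nodes)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(nodes[i][0], x)))

def train_step(nodes, x, target, lr=0.1):
    """Recruit a new node on error; otherwise pull the winner toward the
    input. Returns 1 if an error was committed, else 0."""
    i = nearest(nodes, x)
    if i is None or nodes[i][1] != target:
        nodes.append((list(x), target))
        return 1
    proto, label = nodes[i]
    nodes[i] = ([p + lr * (a - p) for p, a in zip(proto, x)], label)
    return 0

# Toy task: sensor pattern -> motor command (turn at an obstacle, else straight).
data = [([1, 0], "turn"), ([0, 0], "straight")] * 6
nodes, errors = [], []
for x, t in data:
    errors.append(train_step(nodes, x, t))
# Nodes are added early; afterwards the error stays at zero while training
# continues ("lifelong" learning: no separate recognition phase is needed).
```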
7.3.2 Evaluation of a biologically realistic learning rule, BCM, in an active
vision system
Participants: P. Neskovic and P. Verschure
The goal of this project was to test the BCM theory on the development of orientation
selectivity using the images from the camera mounted on a freely moving robot. In 1982
Bienenstock, Cooper and Munro (BCM) proposed a synaptic-modification hypothesis in
which two regions of modification (Hebbian and anti-Hebbian) are stabilized by the addition
of a sliding modification threshold. Modification of synaptic weights in the BCM theory is
given by
$$\dot m_i = \eta\, \phi(c, \theta)\, d_i \qquad (1)$$
where cell activity measured relative to spontaneous activity is
$$c = \sigma(\vec m \cdot \vec d). \qquad (2)$$
The vector $\vec m$ represents the strength of the synapses that connect the input $\vec d$ to the
BCM neuron, and $\sigma$ is a sigmoidal function. The constant $\eta$ determines the learning speed,
and the function $\phi$ is
$$\phi(c, \theta) = c(c - \theta)/\theta. \qquad (3)$$
The sliding threshold $\theta$ is defined as the second moment of the cell activity,
$$\theta = \frac{1}{\tau} \int_{-\infty}^{t} dt'\, c^2(t')\, e^{-(t - t')/\tau}, \qquad (4)$$
where the time constant $\tau$ determines how rapidly the threshold moves.
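A minimal numerical sketch of these update equations follows, with $\sigma$ taken as the identity and invented values for $\eta$, $\tau$ and the inputs; the actual experiments used 13x13 image patches.

```python
# Euler integration of the BCM rule, Eqs. (1)-(4), for a two-input unit
# driven by two orthogonal toy patterns (all constants are assumptions).
import random

def bcm_step(m, d, theta, eta=0.01, tau=100.0, dt=1.0):
    """One Euler step of dm_i/dt = eta*phi(c, theta)*d_i with a sliding
    threshold theta tracking the second moment of the activity."""
    c = sum(mi * di for mi, di in zip(m, d))       # sigma taken as identity
    phi = c * (c - theta) / theta                  # Eq. (3)
    m = [mi + dt * eta * phi * di for mi, di in zip(m, d)]   # Eq. (1)
    theta += dt * (c * c - theta) / tau            # low-pass of c^2, Eq. (4)
    return m, theta

random.seed(1)
patterns = [[1.0, 0.0], [0.0, 1.0]]   # two orthogonal input "patches"
m, theta = [0.5, 0.5], 0.5
for _ in range(5000):
    m, theta = bcm_step(m, patterns[random.randrange(2)], theta)
responses = [sum(mi * di for mi, di in zip(m, p)) for p in patterns]
# 'responses' holds the trained unit's response to each pattern; under the
# BCM theory the unit tends to become selective for one of them.
```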
The BCM theory has been tested on static images taken from realistic environments,
where it produced the development of orientation selectivity. Our idea was to test BCM on
real images from a moving robot in order to better approximate the visual input generated
in a natural environment. The training was done on patches of 13x13 pixels extracted from
the images derived from the camera attached to the Khepera robot. The patches obtained
in this way would reflect additional correlation among them in at least two ways. First,
translational motion of the robot would cause consecutive images to be shift-correlated.
Second, motion of the robot towards or away from the object would cause scale-correlation
among consecutive images. We wanted to investigate whether these correlations would
introduce a new kind of statistics in the inputs to the BCM unit which would affect the
development of its orientation selectivity.
A first test was performed in an environment consisting of a closed space with highly
variable images on the walls. This did not lead to stable receptive field (RF) properties.
Therefore, we decided to work with a more structured environment. The robot was enclosed
within paper walls on which we printed vertical black bars on a white background (see
Figure 5). The training took about 10 hours, and after almost 300,000 iterations no stable
RF properties emerged (Figure 5).
Figure 5: BCM. A: Test environment and an example trajectory of the robot. The starting
position was in the upper right corner. B: Snapshots of the input images projected upon
the BCM unit. C: The development of RF properties. Note the lack of invariance over
subsequent samples.
In these experiments we used parameter settings commonly used in studies of BCM
employing random patches of static images. Our results showed, however, that in the case
of images derived from the camera mounted on a mobile robot no stable RF properties
developed (Figure 5C). To further investigate whether these results are due to parameter
settings or reveal a fundamental problem of the BCM method, a collaboration was initiated
between the Institute for Brain and Neural Systems at Brown University and the Institute
of Neuroinformatics ETH-UZ Zurich.
7.3.3 The construction of a model of Lamprey swimming behavior
Participants: A. Cohen, M. Tilden and P. Verschure
During the previous workshop a walking robot was constructed by Tilden which was
interfaced to the microrobot Khepera. Subsequently this device was interfaced to a model
of a set of coupled Central Pattern Generators (CPG) implemented on the silicon neurons
provided by John Elias. It was shown that stable gaits could be generated and displayed
by this system. This year the aim was to revisit these issues using a more realistic model of
the Lamprey CPG, and a mechanical device that more closely resembles the actual mechanics
of the Lamprey body. In this project several issues were explored.
- Can a model of a Lamprey CPG, derived from the properties of the physiology and
anatomy of the spinal cord segments, generate stable oscillations?
- What is the effect of the long-lasting positive feedback, through the spinal cord stretch
receptors, on the behavior of a single CPG?
- Can a multi-segment model be defined which will induce stable and invariant phase
relationships between the individual segments?
- Can a mechanical device approximating Lamprey movement, Mark Tilden's SnakeBot,
be reliably controlled in order to be interfaced with the multi-segment model?
- Can a model consisting of multiple segments induce stable translational motion in a
mechanical device?
A model of a single segment of the Lamprey spinal cord
Starting with an earlier model (Jung et al., 1996, J. of Neurophys., 75, 1), a
single-segment model was implemented using Xmorph (Verschure, 1997) (Figure 6). As
opposed to this initial model, the present implementation used populations of clipped linear
threshold units. A first evaluation revealed that stable oscillations could be induced over a
relatively large range of parameter settings (Figure 7A).
After this initial success, Avis Cohen proposed to investigate the effect of the positive
feedback through the stretch receptors, local to the spinal cord. In the model the stretch
receptors were driven through the motor cells activated by the CPG (Figure 6). For present
purposes the gain of this positive feedback is defined by the steady-state activity level of the
stretch receptors. The range of positive gains investigated was between 0.0 and 5.0. It was shown
that by including this positive feedback the CPG functions autonomously over a large range
of gain values (Figure 7B and C), where the CPG frequency scales practically linearly with
the gain. These results have interesting implications for the mode in which higher areas can
control these spinal cord CPGs. These are further investigated in a continuing collaboration.
Interfacing a model of CPG dynamics with a mechanical device
Figure 6: Simulated circuit of a single segment of the Lamprey spinal cord. Light gray
connections are excitatory, black connections are inhibitory. Total number of units was
515. Total number of synapses was 28010. Populations R, E, L, and C make up the CPG.
Tilden has previously constructed a SnakeBot, a system consisting of three coupled
segments each providing 1 degree of freedom. Using a bit sequencer and the Khepera
interface developed in last year's project on the KhepTraveler, the possibilities of
controlling this device were mapped out. It was shown that translational motion could be
induced by driving the different segments of the SnakeBot with CPGs with particular phase
relationships. Unfortunately Mark Tilden had to leave the workshop early, so that the next
step of the project, the interface to the CPG model implemented in Xmorph, could not be
completed.
As an alternative, a system was constructed consisting of several coupled microrobots, each
representing one segment of the Lamprey. It was shown that by interfacing this
setup with a model consisting of multiple segments, of the type shown in Figure 6, stable
translational motion could be induced. Note that in this experiment the motor commands
sent to the individual robots were rotational; only through their interaction could translational
motion of the whole arise. This experiment showed that appropriate inter-segment coupling
can be sustained through solely inhibitory couplings between the segments without affecting
the intra-segment coupling (Figure 8). This model too is further explored in a continuing
collaboration.
7.3.4 A model of locust avoidance behavior using an aVLSI retina mounted
on a micro robot
Participants: M. Blanchard, M. Loose, D. Lawrence, and P. Verschure
The aim of this project was to implement a one-dimensional collision avoidance sensor
for a mobile robot, using a retina provided by Loose and a collision avoidance algorithm
designed by Blanchard. The algorithm is based on a model of the locust LGMD neuron,
which responds to objects approaching the animal on a collision course. Our motivation
was to study whether LGMD-like responses would be sufficient to detect obstacles in the
path of the robot. The project was part of the Learning in Neurobots workshop and the
Figure 7: A: Stable oscillations (at 14 Hz) in a model of a spinal cord segment. Channels
reflecting the activity of similar populations placed at the left or right side of the segment
are displayed immediately following each other. Channels reflecting average activity in
the populations at the left side of the segment are displayed in red, right side in blue. B:
Autonomous activity of the CPG through the positive feedback received from the stretch
receptors at a low gain (0.25); CPG frequency is 20 Hz. C: Positive feedback at a medium
gain (1.45); CPG frequency is 45 Hz.
Figure 8: Phase relationships within (A) and between (B) segments. A: Cross-correlation
of two opposing motor cells within one segment. B: Cross-correlation of two aligned motor
cells between two segments. The within-segment components are 180 degrees out of phase;
the between-segment elements show a phase relationship of 135 degrees.
Motion discussion group.
The retina comprised a square array of 64x64 adaptive silicon photoreceptors of the
type developed by Tobi Delbruck. The chip was interfaced via a parallel port, allowing
rapid transmission of the data to the host PC. A Linux driver for the parallel port was
written during the workshop by Lawrence and Loose. A characteristic of the retina that was
addressed in software was the variability in the analog offsets between pixels. A normalizing
matrix was calculated using estimates of the offset in each pixel and applied to each `frame'
read from the retina in order to reduce the error.
We calculated the size of a moving object by using a model of the early visual processing
of the locust combined with non-biological computation. The input from the retina was
spatially and temporally integrated (using lateral and self-inhibition) to reduce redundancy,
and the edges were extracted by thresholding. The size of the object was calculated by finding
the positions of the edges within the resulting matrix.
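The spatial part of this pipeline can be sketched on a 1-D photoreceptor array. The filter constants and threshold below are invented for illustration; the actual system processed 64x64 retina frames and also integrated temporally.

```python
# Size-from-edges on a 1-D luminance profile: self- and lateral inhibition
# zero out uniform regions (redundancy reduction), thresholding marks the
# surviving edge responses, and the outermost edges give the object size.
def lmc_response(photo, self_k=0.5, lateral_k=0.25):
    """LMC-like response: uniform regions map to ~0, edges survive."""
    out = []
    for i, p in enumerate(photo):
        left = photo[i - 1] if i > 0 else p
        right = photo[i + 1] if i < len(photo) - 1 else p
        out.append(p - self_k * p - lateral_k * (left + right))
    return out

def object_size(photo, threshold=0.05):
    """Threshold |LMC| to find ON/OFF edges; size in photoreceptors is the
    distance between the outermost edges (0 if no object is seen)."""
    edges = [i for i, v in enumerate(lmc_response(photo)) if abs(v) > threshold]
    return edges[-1] - edges[0] if len(edges) >= 2 else 0

# A dark object (0.0) on a bright background (1.0), wider when closer.
near = [1.0] * 10 + [0.0] * 20 + [1.0] * 10
far = [1.0] * 17 + [0.0] * 6 + [1.0] * 17
```

As the object approaches, `object_size(near)` exceeds `object_size(far)`, mirroring the growing size trace of Figure 10.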
Figures 9 A, B, and C show the responses of various parts of the system to a stimulus
(a pad of yellow Post-it notes against a cluttered background) approaching the retina
Figure 9: A: Responses of the photoreceptors to the approach of an object. B: Simulated
responses of large monopolar cells (LMCs) to the approach of the stimulus. The background
activity of the units is zero, showing that redundant information has been removed. C:
Simulated responses of medulla ON/OFF cells to the stimulus, showing that the edges of
the object have been detected successfully.
at a steady velocity (moved by hand). The edges can be seen in the graded responses
of both the photoreceptor outputs (Figure 9 A) and the simulated LMC (large monopolar
cell) responses (Figure 9 B). After thresholding, the edges are strongly defined in the
simulated medulla ON/OFF cell responses (Figure 9 C). The calculated size of the object
increases as the object approaches the eye (Figure 10). The units of size are the number of
photoreceptors covered by the object. The system produced similar results for objects of
different contrasts and for objects moving at different velocities and in different directions.
Figure 10: The calculated size of approaching objects (size in photoreceptors versus time).
During this project two major issues emerged.
1. The use of mixed neuromorphic and digital systems is problematic. In the case of
this project, the photoreceptors within the retina adapted significantly between read
cycles due to the low `frame rate' of the system, resulting in a loss of data. For a
digital model, a CCD camera would be a better input device, with adaptation handled
in software. This would allow greater control over the speed of adaptation. To
implement this system using a mixed approach, all processing up to the edge detection
circuit should be combined on-chip.
2. Interfacing a movement-sensitive visual system onto a mobile robot is a non-trivial
task. When the robot turns, its own motion produces significant input over the whole
visual field. This suggests that the visual system should be used to trigger a stereotyped
behavior during which input to the motor system from the visual system should be
inhibited. In the locust, a feedforward inhibitory connection onto the LGMD neuron
performs this task, making this neuron a good model for triggering collision avoidance
behavior.
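The trigger-and-inhibit scheme suggested above can be sketched as a simple control loop; the threshold, turn duration, and activity trace are illustrative assumptions, not measured LGMD parameters.

```python
# While the robot executes its stereotyped avoidance turn, the visual
# input to the motor system is ignored, so that self-motion during the
# turn cannot retrigger the response.

TURN_STEPS = 5            # duration of the stereotyped behavior
COLLISION_THRESHOLD = 0.8

def control_step(visual_activity, turn_counter):
    """One control cycle: returns a motor command and the updated
    count of remaining turn steps (0 means no turn in progress)."""
    if turn_counter > 0:                    # turning: vision inhibited
        return "turn", turn_counter - 1
    if visual_activity > COLLISION_THRESHOLD:
        return "turn", TURN_STEPS - 1       # trigger the behavior
    return "forward", 0

turn_counter = 0
commands = []
# A looming object appears at step 3; the large whole-field signal that
# the robot's own turn produces afterwards is ignored until the turn ends.
for activity in [0.1, 0.2, 0.9, 0.95, 0.95, 0.95, 0.1, 0.1]:
    command, turn_counter = control_step(activity, turn_counter)
    commands.append(command)
```

Without the inhibition, the self-generated activity at steps 4 to 6 would retrigger the turn indefinitely.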
This project indicated that a model of the LGMD neuron could be used to equip a
vehicle with a visual collision avoidance sensor. The ideal implementation is an analog VLSI
integrated circuit using focal-plane processing and producing a single (analog or digital)
output. A compromise approach would be to use an IC performing edge detection and a
digital computer to calculate the LGMD response.
7.3.5 A comprehensive comparison of different Reinforcement Learning meth-
ods using real-world systems
Participants: M. Jabri, G. Cauwenberghs and P. Verschure
During the last workshop we discussed the importance of comparing different learning
methods that are applicable to aVLSI implementation in the context of real-world
tasks. The relevance of this comparison is that, even though many methods for learning are
available, their properties in real-world tasks are not well understood. In collaboration with
the "aVLSI Learning Systems" group, a beginning was therefore made in developing such
a comparison. Choices were made on the different algorithms to compare. An initial set
included: predictive Hebbian learning, Q-learning, and DAC. The microrobot Khepera was
chosen as the platform on which the comparison would be made. It was decided that the
task should consist of several levels of complexity:
1. An empty secluded space where the task would be to learn to avoid the walls.
2. The inclusion of additional objects randomly distributed in the environment, leading
to exceptions.
3. Inclusion of a target (light source) which needs to be found, leading to conflicts.
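As an illustration of one of the candidate algorithms, a tabular Q-learning update for the level-1 wall-avoidance task might look as follows; the state discretization (near/far readings on two infrared sensor groups), the reward, and all constants are assumptions made for this sketch, not the settings used at the workshop.

```python
import random

# Tabular Q-learning sketch for the level-1 task (learning to avoid walls).

ACTIONS = ["left", "forward", "right"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

q = {}   # (state, action) -> estimated value

def choose_action(state):
    """Epsilon-greedy exploration over the three motor actions."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q.get((state, a), 0.0))

def update(state, action, reward, next_state):
    """Standard one-step Q-learning update."""
    best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

# One illustrative experience: wall close on the left, turning right
# leads to open space and earns a positive reward.
update(("near", "far"), "right", 1.0, ("far", "far"))
```

The same interface (a state built from discretized sensor readings, a small set of motor actions, a scalar reward) would also serve the predictive Hebbian and DAC implementations being compared.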
During the workshop Jabri and Cauwenberghs implemented the predictive Hebbian
learning method. In addition, Cauwenberghs and Verschure sampled data from the sensors
in order to estimate the minimum representation required to map sensory events onto motor
actions. Verschure started a further implementation of DAC. This project is still being
pursued.
7.3.6 A control model for a microrobot for surface recognition
Participants: S. Lazzari and P. Verschure
Lazzari wanted to implement a control model for Khepera which would distinguish
surfaces using only the infrared sensors on Khepera. This was mainly intended as a small
project in order to become familiar with the Khepera environment and the LabVIEW
graphical programming environment. This project resulted in a simple control model which
would generate differential responses dependent on the reflectance properties of the
encountered surfaces.
7.3.7 Using aVLSI cochleas on a mobile robot
Participants: A. Andreou, D. Klein, and P. Verschure
The aim of this project was to interface an aVLSI cochlea to the mobile robot
Koala and construct a hybrid system which could be used in the modeling of classical
conditioning. Klein put together the board which would contain the chip and would provide
the output for the digital and analog input ports on Koala. This project was not finished
in Telluride but is being pursued. The main problems encountered were of a practical,
technical nature.
7.3.8 Development of a hardware model of the rat's sensorium
Participants: M. Tilden and P. Verschure
After a successful collaboration during last year's workshop, we developed further ideas
on how to construct a mechanical model which would approximate the sensory abilities of
rats. The motivation behind this project was that a large amount of behavioral and neural
data is available on this species. In order to relate theoretical studies more closely to these
data sources, appropriate technology needs to be developed. In these discussions it was
decided that a first point of departure could be the construction of a whisker system. Prior
to the workshop Mark constructed prototypes of these sensors, which we evaluated during
the present workshop. Due to time limitations this project could not be explored in depth.
Further work on this project is continuing.
7.3.9 Conclusions
The concept of the Neurobots working group proved very successful at the workshop.
The working group made concrete contributions to the development and exploration of
several issues, such as interface standards, the development of hybrid digital-analog systems,
and the evaluation of neuromorphic devices and models in real-world applications. For next
year the working group proposes to take a more thematic approach by defining projects
well in advance of the workshop. Exemplar projects could deal with issues of multi-sensor
fusion, i.e. integration of multiple visual sensors dealing with subaspects of navigation
(edge detection, optical flow, expansion, 2D retina), interfacing both visual and auditory
neuromorphic devices onto a mobile platform, or setting the goal of achieving autonomous
navigation relying fully on neuromorphic sensors.
7.4 Sensory-motor Workgroup
Organizers: Giacomo Indiveri and Ralph Etienne-Cummings
At the Telluride 1996 Workshop on Neuromorphic Engineering, a new trend emerged:
interfacing neuromorphic chips, such as one- and two-dimensional silicon retinas and motion
detection chips, to robots. During that workshop, we saw the first few attempts to
outfit robots with biologically inspired vision input instead of images from CCD cameras.
In addition, these robot/retina systems were used to perform tasks such as line-following
navigation based on line position as well as slip motion. These first attempts spawned a
year of research on neuromorphic motor-sensory systems [2], which resulted in the systems
constructed at this year's workshop.
Last year only a few researchers worked on integrating the latest retina or motion
chips, which they were personally developing, onto mobile platforms. This year we brought
integrated sensory-motor systems, developed in advance specifically for this workshop. We
were thus able to involve more researchers in the workshop and help them use existing
vision chips integrated on mobile platforms. Specifically, twelve participants were actively
involved in the different projects described below. The workshop served as a forum to share
experiences and to learn from each other's approaches. In particular, the motor platform
(the robots or active vision systems) and the integration of neuromorphic chips were the
areas of most diversity. Excluding the walkers with low-level electronic controls, which
served for research in motor control paradigms, the solutions ranged from the commercial-
grade Khepera and Koala robots (see Fig. 11), with complete motor, control and sensor
packages, to user-constructed solutions based on micro-controllers and generic
RC (Radio Controlled) cars as mobile platforms. In contrast to the high costs of the
commercial systems, the RC car/µC solution is very inexpensive, but requires more
construction on the part of the researcher to be successful.
Figure 11: Koala robots used in the workshop. From left to right: a Koala with motion
sensors, a Koala with two versions of silicon retinas, and a Koala with a standard CCD
camera.
Projects using the RC/µC car approach and a neuromorphic pan-tilt tracker exposed
the researchers to a small single-chip 8-bit 20 MHz micro-controller (Microchip PIC16CX
family), with on-chip A/D, PWM, and an RS232 port, which can be applied to many
neuromorphic/digital hybrid systems. These systems interface a neuromorphic chip to the
digital components of the system either through the PIC's ADC or directly through parallel
digital I/O, depending on the chip's output format. AER and RS232 protocols have been
used to control the system's actuator components. Projects using Koala platforms (which
have a Motorola 68C311 micro-controller and 1 MB of RAM), on the other hand, exposed
the researchers to a higher-level programming environment (namely the C or C++ language)
and allowed them to implement more complex types of control algorithms.
The overall increase of interest in robotic applications that use neuromorphic sensors,
and the increase in the number of participants that actively take part in the sensory-motor
workshops each year, evidence the progress that the neuromorphic research community
has made in moving towards system integration of neuromorphic devices. Researchers are
now starting to tackle the problem of building complete "neuromorphic" systems, both
using hybrid approaches (see the projects described below) and multi-chip communication
tools (see the projects involving the Address Event Representation protocol). The projects
carried out in this workshop have made a significant contribution to the neuromorphic
research community: many of the results obtained in these 3 weeks of work will be used in
the course of the following year to further develop these projects.
7.4.1 Line Tracking Using a Low-Cost RC Car and a PIC16C74A µC
Participants: A. Moini, R. Etienne-Cummings
One project involved interfacing an abstract model of the superior colliculus
with the RC/µC system for fast line following. The model of the superior colliculus was
implemented on a 2 µm NWELL tiny chip. The chip provides the analog coordinates of
a moving target. Hence, the chip performs photo-transduction, edge detection, motion
detection and 2D centroid computation. For the line following task, coordinates of the line
are presented to the micro-controller, which performs A/D conversion and provides a PWM
signal directly to the steering servo of the RC car (see Fig. 12). The x-coordinate of the
centroid is used for steering. For speed control, both coordinates can be used to determine
the location and orientation of the line relative to the car. Should there be a sharp curve
in the line, the car can be slowed, and subsequently accelerated on straight segments of the
line. Moini designed a camera for this system, learned PIC16C74A (µC) programming and
generated the control signals for the RC car.
Figure 12: RC car interfaced to a neuromorphic vision chip.
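The control path (digitizing the chip's analog centroid coordinates, then mapping them to a steering-servo pulse and a speed command) can be sketched as follows. The servo pulse range (1.0-2.0 ms, with 1.5 ms for straight ahead), the 8-bit coordinate scale, and the gains are illustrative assumptions, not the values programmed into the PIC.

```python
# Sketch of the micro-controller's control path: the chip's analog x
# centroid is digitized (here an 8-bit value, 128 = image center) and
# mapped to a steering-servo pulse width, while the line's offset from
# center throttles the speed.

def steering_pulse_ms(x_centroid, x_center=128, gain=0.004):
    """Map the digitized centroid x-coordinate to a servo pulse width
    in milliseconds (1.5 ms = straight ahead)."""
    pulse = 1.5 + gain * (x_centroid - x_center)
    return min(2.0, max(1.0, pulse))   # clamp to the servo's valid range

def speed_command(x_centroid, x_center=128, max_speed=100):
    """Slow the car when the line is far off-center, as on a sharp
    curve, and let it accelerate again on straight segments."""
    offset = abs(x_centroid - x_center) / x_center
    return int(max_speed * (1.0 - 0.7 * offset))
```

A line centered in the image (x = 128) yields the neutral 1.5 ms pulse at full speed; a line at the image edge saturates the steering and drops the speed to 30% of maximum.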
7.4.2 Neuromorphic Tracker Using a Pan-Tilt System and a PIC16C74A µC
Participants: M. Lades, R. Etienne-Cummings
Our neuromorphic tracker is a component within a pilot platform aimed at
research in advanced recognition and motion analysis. It has applications on robot platforms
and in the automated security and surveillance sector. The system was conceived in a
collaboration between SIU and LLNL and implemented at the Telluride workshop.
Our two-chip tracker solution consists of a foveated silicon retina tracker chip, a micro-
controller (PIC16C74A) and a pan-tilt system (see Fig. 13). The control loop of the system
maps the coordinates of moving objects, in two resolution stages, from the retina chip
through the A/D of the micro-controller into motor commands sent to the pan-tilt unit
through the controller's serial port.
Initially the system redirects its view through saccadic updates towards a detected
moving object. It then switches to smooth pursuit once it captures the object within its
foveal region. If the object is lost, the system returns from smooth pursuit to the saccadic
motion detection mode. The pan-tilt system also carries a 3-axis computer-controlled lens
system and a CCD camera or high-resolution neuromorphic silicon retina, which will be
used in further research directed towards the analysis and recognition of the tracked objects.
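The mode switching described above amounts to a small state machine; the event names and transition logic below are an illustrative reading of that description, not the micro-controller's actual code.

```python
# Minimal state machine for the tracker: saccade towards newly detected
# motion, smooth pursuit once the target is in the fovea, and back to
# saccadic detection when the target is lost.

TRANSITIONS = {
    ("saccadic", "target_in_fovea"): "pursuit",   # captured: pursue
    ("pursuit", "target_lost"): "saccadic",       # lost: detect again
}

def step(mode, event):
    """Stay in the current mode unless a transition is defined."""
    return TRANSITIONS.get((mode, event), mode)

mode = "saccadic"
history = []
for event in ["motion_detected", "target_in_fovea",
              "target_in_fovea", "target_lost"]:
    mode = step(mode, event)
    history.append(mode)
```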
7.4.3 Line Tracking Using Position and Motion Measurements on Koala
Participants: T. Horiuchi, G. Indiveri
Figure 13: Pan-tilt tracker system using a neuromorphic vision chip.
We interfaced the analog VLSI-based visual tracking chip designed by Tim Horiuchi
to the Koala mobile robot. This chip was configured to provide a single analog voltage
which combined both position and direction-of-motion information to guide the robot. We
then used a simple algorithm that uses the edge position measured by the chip to drive the
motors (e.g. if the edge is to the left, turn left; else if the edge is to the right, turn right;
else go straight). Koala was able to reliably track a pink power cable laid on the floor (blue
carpet).
Given the simplicity of the algorithm, and in spite of the system having been hastily
set up, its ability to track edges demonstrated the high potential of neuromorphic sensors
interfaced with mobile platforms.
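The steering rule quoted in parentheses above can be written out directly; the signed edge-position encoding and the dead band are illustrative assumptions about how the chip's single analog output is read.

```python
# The steering rule written out directly.  Negative values are assumed
# to mean the edge lies left of center; the dead band suppresses
# steering jitter when the edge is roughly centered.

def steer(edge_position, dead_band=0.1):
    if edge_position < -dead_band:
        return "turn_left"
    if edge_position > dead_band:
        return "turn_right"
    return "go_straight"
```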
7.4.4 Optical Flow Algorithm for Controlling Koala
Participants: J. Kramer, T. Stanford, S. Liu, G. Indiveri, T. Neumann, P.
Verschure
This project was inspired by experiments performed on bees, which have been shown to
use motion cues to keep centered when flying through narrow passages.
In order to perform a similar experiment on a Koala robot driving between two walls,
we mounted two FS velocity sensors [3] onto its front part, facing towards the left and right,
respectively. The idea was to keep the robot centered between the walls by matching the
optical flow seen by the two sensors. The mismatch in the two voltage readings for left and
right forward motion (corresponding to the logarithm of the velocity ratio) was used as an
error term that was directly converted into a motor command to modulate the speed of
the left and right sets of wheels. The robot was thus controlled purely by two optical flow
measurements, without the use of any further sensors.
In order to test the system, we built a round track consisting of an inner and an outer
paper wall with vertical black and white stripes printed on them (see Fig. 14).
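The centering controller reduces to a few lines: the difference between the two sensor readings (the logarithm of the velocity ratio) directly modulates the left and right wheel speeds. The base speed and gain below are illustrative assumptions, not the values used on the robot.

```python
import math

# Sketch of the centering controller.  Each sensor reading corresponds
# to the logarithm of the image velocity it sees, so their difference
# is log(v_left / v_right); this error term directly modulates the
# wheel speeds.

def wheel_speeds(left_reading, right_reading, base=50.0, gain=20.0):
    """Larger left-side flow means the left wall is closer, so the left
    wheel speeds up and the robot steers away from that wall."""
    error = left_reading - right_reading
    return base + gain * error, base - gain * error

# Robot nearer the left wall: left flow is twice the right flow.
left_speed, right_speed = wheel_speeds(math.log(2.0), math.log(1.0))
```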
Figure 14: Optical flow experiment setup: Koala, equipped with motion-sensing circuits,
was placed in a corridor made of concentric circles delimited by high-contrast bar stimuli.
The trajectory of the robot was recorded for several laps in both directions using a
camera suspended from the ceiling (see Fig. 15). The trajectory was usually closer to the
outer wall of the track. This was due to the large curvature and the fact that it was not
possible to get a fast and reliable control feedback loop, owing to the wiggling of the robot
after each correction of the motor speeds. Geometrical effects also limited the robustness
of this simple algorithm. Depending on the curvature of the track, the angle of the sensors
with respect to the robot and the angle of the robot with respect to the walls, the velocity
readings would be distorted, or both sensors would even look at the same wall. For tracks
with lower curvatures the performance was significantly better.
The robot was also tested in the hallway of the schoolhouse, where it was able to
avoid the walls in narrow (less than about a meter wide) passages under rather dim
incandescent illumination, without the addition of high-contrast stimuli. However, it would
not work reliably under artificial illumination in the classrooms, because the large flicker of
the lamps would corrupt the velocity measurements.
Titus suggested using a genetic algorithm he developed for simulating fly behavior
to make the robot learn the best motor correction parameters for a given environment.
Arrangements have been made to investigate this suggestion further, even after the end of
the workshop, by starting a collaboration between the MPI for Biological Cybernetics
(Tübingen) and the Institute of Neuroinformatics (Zurich).
7.4.5 Silicon Retina to Koala interface using the AER protocol
Participants: T. Delbruck, S. Liu, G. Indiveri
In this project we interfaced a 1D silicon retina to Koala using the Address-Event
Representation protocol. We first built a board for the silicon retina (used also in the
AER workshop) in order to mount it on the Koala platform. Most of the time was spent
building the board and finding the right bias values for the retina to work. Once that
problem was solved, we wrote a program for the Koala microprocessor implementing the
handshaking communication protocol. This piece of code will also be used in the future
(by many of the workshop participants) for connecting AER devices to Koalas or to PCs.
We then wrote a routine to monitor (on a PC connected to Koala through a serial
cable) the addresses being sent and received. This allowed us to plot the output of the
retina in real time on the PC monitor (using Matlab) and thus tune the retina and the
Koala motor-control code to perform different types of tasks (e.g. tracking, obstacle
avoidance, etc.).
Figure 15: Trajectories of Koala controlled by an optical flow algorithm, recorded by a
ceiling-mounted camera and a tracking algorithm. Koala was tracked as it traveled multiple
loops in the corridor. Light dots show Koala's trajectory; neighboring dots represent
multiple passes of Koala in nearby positions.
7.4.6 Line Tracking and Obstacle Avoidance using Koalas
Participants: T. Morse, G. Indiveri
We used the multitasking environment provided with the Koala micro-controller cross-
compiler to implement a control algorithm that allows Koala to simultaneously track
edges and avoid obstacles. The line-tracking task was carried out using a 1D silicon retina
and software code previously developed in preparation for the workshop. The obstacle
avoidance task was implemented using the readings from 14 infrared sensors mounted on
Koala and software based on an implementation of a Braitenberg vehicle 3. Both software
algorithms were modified and integrated to obtain a coherent control algorithm that
arbitrates between the two independent tasks. The line tracking code contains a search
strategy that is selected when the line to be tracked is not found. If the avoidance
algorithm causes Koala to steer away from the line, the new retinal input state (loss of the line)
switches the program into its search mode. We tried to implement a "memory" that would
record in left_bias and right_bias variables the intention to steer back towards the line
when obstacles caused Koala to steer away from it. Once the line was detected again, the
left_bias and right_bias variables were reset.
Using the control algorithm described above, we obtained the following qualitative
behavior: the Koala robot will follow a line (we used an extension cord laid on a carpeted
floor). If an obstacle presents itself in Koala's path, it will avoid the object at the cost
of losing track of the line. The Koala will then try to recover the line, eventually switching
its control algorithm into a search mode (implemented using a random number generator)
until it picks up a line to track again. In the search mode Koala will also continue to avoid
obstacles.
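The arbitration between the two tasks, including the bias memory, can be sketched as follows; apart from the left_bias and right_bias variables named in the description above, all names and the exact policy are illustrative.

```python
import random

# Sketch of the task arbitration.  Obstacle avoidance pre-empts line
# tracking; left_bias and right_bias (the "memory" described above)
# record the intention to steer back towards the line; a random search
# takes over when the line is lost and no bias is recorded.

left_bias = right_bias = 0

def control(line_visible, obstacle_side, tracker_command):
    """One arbitration step.  obstacle_side is 'left', 'right' or None;
    tracker_command is the line-tracker's output when the line is seen."""
    global left_bias, right_bias
    if obstacle_side == "left":        # forced right turn: line drifts left
        left_bias += 1
        return "turn_right"
    if obstacle_side == "right":       # forced left turn: line drifts right
        right_bias += 1
        return "turn_left"
    if line_visible:
        left_bias = right_bias = 0     # line reacquired: reset the memory
        return tracker_command
    if left_bias > right_bias:         # steer back towards the lost line
        return "turn_left"
    if right_bias > left_bias:
        return "turn_right"
    return random.choice(["turn_left", "turn_right"])   # search mode

commands = [control(False, "left", None),        # obstacle on the left
            control(False, None, None),          # line lost: use the bias
            control(True, None, "go_straight")]  # line found: track it
```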
7.5 Interchip Communication for Multichip Neuromorphic Systems Work-
group
Organizer: Kwabena Boahen
Inspired by the success of Time-Division Multiple-Access (TDMA) communication
protocols, neuromorphic engineers are using adaptations of these well-studied protocols to
build multichip neuromorphic systems. This new generation of neuromorphic systems
promises larger numbers of neurons and higher degrees of connectivity than are presently
possible on a single chip. To build these large-scale systems, however, neuromorphic
architects must address a big difference between the requirements of computer networks and
those of neuromorphic systems: whereas computer networks connect thousands of computers
at the building or campus level, neuromorphic systems need to connect thousands of neurons
at the chip or circuit-board level. Hence, we must improve the efficiency of traditional
computer communication protocols and architectures by several orders of magnitude in
order to make them viable for neuromorphic systems.
Our goals for this workshop were: to expose people to existing solutions for
communication between neuromorphic chips and boards; to teach them how to design such
analog-digital hybrid (mixed-mode) systems; to give them the opportunity to interface these
systems to robotic or computational platforms of their choice; to adopt common standards,
and develop the required interfaces, for interchip communication; and to discuss the
shortcomings of the existing protocols and explore viable modifications and extensions.
7.5.1 Background
In their initial efforts, neuromorphic architects designed interchip communication
networks that provided simple unidirectional, point-to-point connectivity between arrays of
neuromimes on two neuromorphic chips. They sent spikes from neuromimes in the sending
chip to neuromimes at the corresponding location in the receiving chip using a random-
access scheme. In this scheme, an address-encoder generates a unique binary address for
each neuron. These addresses are transmitted over a shared bus to the receiving chip, where
an address-decoder selects the corresponding location. Two versions of this random-access
scheme have been proposed.
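The random-access scheme can be illustrated in a few lines: a spiking neuron's index is encoded as a binary address on the shared bus and decoded at the receiver. The 4-bit address width is an arbitrary choice for the sketch.

```python
# Sketch of the random-access scheme described above: a spike from
# neuron i appears on the shared bus as its unique binary address, and
# the receiver decodes the address to select the corresponding location.

ADDRESS_BITS = 4   # 16 neurons per chip, an arbitrary choice

def encode(neuron_index):
    """Address-encoder: spiking neuron -> binary address on the bus."""
    return format(neuron_index, "0{}b".format(ADDRESS_BITS))

def decode(address):
    """Address-decoder: bus address -> location on the receiving chip."""
    return int(address, 2)

# A spike train on the sender reappears, in order, on the receiver.
spike_train = [3, 7, 3, 12]
received = [decode(encode(i)) for i in spike_train]
```

Note that spike timing is carried implicitly by when each address appears on the bus; only the identity of the spiking neuron is transmitted explicitly.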
In the hardwired version of the random-access scheme, neurons have direct access to
the input lines of the address-encoder, and activate the encoder as soon as they spike. This
scheme has the virtue of simplicity (which allows high-speed operation), but the addresses
are garbled when two or more neurons fire simultaneously. For random (Poisson) firing
times, these collisions increase exponentially as the spiking activity increases; the collision
rates are even more prohibitive when neurons fire in synchrony.
In the arbitered version of the random-access scheme, an arbiter is interposed between
the neurons and the address-encoder. The arbiter ensures that only one neuron gains
access at a time; the other neurons wait until they are selected by the arbiter. This
scheme has the virtue of preserving the integrity of the addresses (which allows reliable
communication), but spikes are delayed when the channel is occupied. For random (Poisson)
firing rates, the queuing time is inversely proportional to the rate at which empty slots occur.
Thus, the queuing time decreases as technological improvements reduce the cycle time,
even when channel utilization remains the same. And for synchronous bursts, the delay is
proportional to the activity level.
The upshot is that when spike timing is random and high error rates can be tolerated,
the hard-wired version provides the highest throughput. On the other hand, when spikes
occur in synchrony and low error rates are desired, the arbitered version provides the highest
throughput.
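This trade-off admits a short worked comparison under standard channel-access approximations (an unslotted-ALOHA collision estimate for the hardwired scheme and an M/M/1-style queue for the arbitered scheme); these are textbook approximations, not measurements of the systems described here.

```python
import math

# Worked comparison of the two schemes under Poisson spiking.
# G is the offered load in spikes per cycle.

def hardwired_collision_fraction(G):
    """A spike survives only if no other spike starts within one cycle
    on either side, so losses grow exponentially with activity."""
    return 1.0 - math.exp(-2.0 * G)

def arbitered_mean_wait(G):
    """Mean queuing delay in cycles: inversely proportional to the rate
    at which empty slots occur (1 - G); diverges as the bus saturates."""
    assert G < 1.0, "arbitered channel saturates at full utilization"
    return G / (1.0 - G)

# At 10% load the hardwired channel already garbles about 18% of the
# spikes, while the arbitered channel delays them by about 0.11 cycles.
loss_at_10pct = hardwired_collision_fraction(0.1)
wait_at_10pct = arbitered_mean_wait(0.1)
```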
7.5.2 Format
Members of the interchip communication group participated in lectures, lab projects, and
discussions over a two-week period. We devoted the first week almost entirely to the
lecture series, to bring people up to speed. In the second week, we undertook discussions,
laboratory demonstrations, and cross-disciplinary projects. About 10 people, give or take
a few, attended the lectures and discussions. Six of the 10 project proposals were actively
pursued, with 3 to 5 people in each project. Good progress was made in 4 projects, but
the other 2 got off to a slow start due to technical difficulties, lack of resources, and lack of
time.
7.5.3 Teaching
We conducted a series of four afternoon lectures that covered the basics of neuromorphic
interchip communication:
1. Overview: The Address-Event Representation
2. Handshaking and Pipelining
3. Arbitration
4. System Level Issues
Students learned a formal (correct-by-construction) methodology for designing
asynchronous digital systems, developed by Alain Martin at Caltech, and they worked through
several design examples. They expressed a lot of interest in tools for automating the design
process, but unfortunately such tools were not available at the workshop. Given the latent
interest, we plan to invite someone from Alain Martin's group to install and demonstrate
these tools at future workshops.
The design of multichip, large-scale neuromorphic systems was also presented in a
morning seminar. The speaker showed how to extend the communication system to handle
divergent and convergent connections to or from several neurons on several chips using
single-sender/single-receiver links; a discussion session followed up on this issue.
7.5.4 Chip-Design Labs
We conducted a series of three evening chip-design labs on basic layout, compilation, and
verification techniques for neuromorphic communication interfaces:
1. One-dimensional senders and receivers
2. Two-dimensional senders
3. Two-dimensional receivers
We made chip layouts and schematics available to the students so they could familiarize
themselves with these designs at their own convenience.
These design labs were only marginally successful, due to the difficulty of interacting
with a large group over a single 17-inch monitor. To make the design labs more instructive,
we plan to have students do some layout themselves and discuss their efforts with the
instructor, instead of cursorily looking over the instructor's finished designs.
Another shortcoming of this aspect of the workshop was the lack of a silicon compiler.
Several students expressed interest in learning how to use this tool and, in particular, would
have liked to familiarize themselves with Tanner Research's User Programmable Interface
(UPI) for L-Edit. We will make arrangements with Tanner Research to make this tool
available at future workshops.
7.5.5 Discussions
We met for three evening discussion sessions. In the first meeting people tabled
the issues they were concerned about, and we discussed these briefly. We identified the key
issues in this first meeting and dealt with them in the two subsequent meetings.
A broad range of issues was raised in the first meeting, including:
1. Random-Access versus Polling
2. Arbitration versus Collision Detection
3. Multichip-Modules and Packaging Techniques
4. Packet Switching versus Circuit Switching
5. Spikes versus Analog Encoding
6. Trade-offs in Spike Timing and Latency
7. Self-Timed, Free-Running, or Clocked Communication
8. Relevance of Computer Network Protocols (7-Layer OSI)
In the second meeting, we discussed the various existing interchip communication
protocols, primarily the single-sender/single-receiver link (Point-to-Point or P2P) and the
multiple-sender/multiple-receiver bus (Local Address-Event Bus or LAEB). We agreed on
standard specifications for both the logical and physical attributes of these communication
channels, which are given in the table below. Rather than merge these approaches into
one monster channel, we agreed to develop interfaces between them. Adrian Whatley, from
Rodney Douglas' group, provided a design for an interface between P2P and LAEB.
             Point to point (P2P)           SCX Local Address-Event Bus (LAEB)

Data         Bit parallel - arbitrary       Bit parallel - 16 bits
             no. of bits
Handshake    4 phase handshake              4 phase bus arbitration + data strobe
Polarity     Active high Req and Ack        Active low Req, Ack and Strobe
Voltages     Low = 0V, high = 5V            Low = 0V, high = 5V
Timing       Asynchronous                   Timing specified - see
                                            http://www.ini.unizh.ch/~amw/scx/aeprotocol.html
Impedance    Unspecified - should expect    250 Ohms
             to interface to TTL logic
Connector    50 pin male IDC as per         26 pin male IDC as per
             National Instruments           http://www.ini.unizh.ch/~amw/scx/daughter.html#26-wayIDC
             Lab-NB/PC board

Web pages:
http://www.pcmp.caltech.edu/aer
http://www.ini.unizh.ch/~amw/scx/scx.html
In the third meeting, we discussed the directions in which the communication
infrastructure will grow and evolve in the future. We identified the need to accommodate
people modeling at different levels of abstraction.
At one extreme is the silicon cortex project, which studies small networks of complex
neuron models with detailed circuit models of ion channels and dendritic compartments. At
the other extreme is the 2D stereo-vision project, which explores large networks of simple
neuron models with single-transistor channel models and no dendritic compartments. With
the small number of complex neuromimes that fit on a chip (typically fewer than 100),
we can service a board full of these chips with a single bus, whereas the large number of
simple neuromimes that fit on a chip (typically a few thousand) can easily consume the
bandwidth of a single bus.
Thus, although both projects are working on interchip communication, they are working
at different scales: one effort seeks to achieve at the board level what the other effort seeks
to achieve at the chip level. Hence, the overhead in power dissipation and real estate that
the two approaches can afford differs by an order of magnitude or more. We believe that
it is important to continue to explore and develop communication infrastructure suited to
these different scales, since a complementary set of questions is addressed by studying
isolated cortical microcircuits, as might be found in a single column, or studying mosaics
of these circuits, as might be found in a hypercolumn.
7.5.6 Projects
Judging from the number of projects proposed, Interchip Communication was one of the
most popular activities at this workshop. This conclusion was confirmed by an informal
poll of people's interests: Interchip Communication tied for first place with Computerless
Robots. Interchip communication was popular because it provides the nexus for
accomplishing a specific task by putting together several pieces. Given the workshop's
collaborative spirit, there was huge demand for this service. Unfortunately, the demand
overwhelmed the few interchip communication experts around, who were themselves actively
involved in other groups!
Nevertheless, we embarked on several collaborative and multi-disciplinary interchip-
communication projects. These projects fell into two broad groups: applications and
design. The applications projects generally involved interfacing a sensor that had address-
event outputs (such as a retina or a cochlea) to a robot or a computer. People pursued
these projects if they wanted to use existing chips for a specific task or wanted to
characterize these chips. The design projects generally involved adding new capabilities to
the existing interchip communication infrastructure, either by building some hardware using
off-the-shelf components or by programming a microcontroller. People pursued these
projects if the existing solutions did not meet their needs or they wanted to try out
innovative ideas.
At the beginning, the group proposed the following applications projects and assigned
experts to lead these efforts:
1. Computer-AER Interface (T. Delbruck, A. Andreou)
2. Neuronal Networks on Silicon Cortex (A. Whatley)
3. On/Off Retina to Silicon Cortex (J. Kramer/A. Whatley)
4. Retina Boards to Receiver Chips (K. Boahen, A. Andreou)
5. 1D Retina to 1D Receiver (T. Horiuchi)
6. Cochlea to 1D Receiver (A. Andreou and D. Klein).
We also proposed the following design projects, led by chosen experts:
7. 1D Retina to Koala Robot (G. Indiveri)
8. 1D Cochlea to Koala Robot (A. Andreou)
9. Router Design (G. Patel, K. Boahen)
10. Self-Organizing Connectivity Map (K. Boahen, G. Cauwenberghs)
Thus, we had a total of 10 project proposals. Although not all of these projects could be
carried through, they are listed here to show the broad range of applications for interchip
communication. Naturally, the scope narrowed according to the availability of time,
resources, and expertise as people sorted out their priorities. In the end, some progress was
made on six of these projects, with significant achievements in four of them.
7.5.7 Adaptive Address-Event Router and Retinal-Cortical Maps
Participants: D. Klein, T. Stanford, G. Cauwenberghs, K. Boahen, A. Andreou
and T. Hinck
(See also the Analog VLSI Learning Systems workgroup)
Address-event representations are an effective means to communicate neural activity and
implement large-scale connectivity over a digital bus. What has been lacking is a way to
use the communicated addresses to learn the connectivity from sender to receiver. This
project concerns the design of adaptive routing algorithms and their implementation in
asynchronous digital hardware. As a proof of concept, we defined the task of learning a
one-dimensional mapping from sender to receiver which reconstructs the topology of the
feature space from a scrambled representation communicated by the sender. This could be
thought of as a (1-D) LGN-to-cortical connectivity map learned to preserve retinal
topography.
Our approach exploits temporal correlations in the address space to extract spatial
correspondence in the feature space, under the assumption that stimuli are continuous.
We investigated a class of algorithms that operate only on the receiver address domain
and that use spatial and temporal information in various ways. These algorithms
successfully reconstructed the hidden spatial order for the simple task that we targeted;
continued work will generalize this to tasks with discontinuous and spatially distributed
stimuli, and to higher dimensions.
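The core idea, recovering a hidden 1-D ordering from temporal co-activation of addresses, can be sketched as follows. This is an illustrative reconstruction rather than the group's actual algorithm: it assumes a bar-like stimulus that activates adjacent feature positions together, and it recovers the scrambled map (up to reversal) by chaining the most strongly co-activated addresses.

```python
import random

def learn_topology(n=8, sweeps=20, seed=1):
    """Recover a hidden 1-D ordering of addresses from temporal co-activation."""
    rng = random.Random(seed)
    addr = list(range(n))
    rng.shuffle(addr)                     # hidden map: feature position -> scrambled address
    cooc = [[0] * n for _ in range(n)]    # co-occurrence counts between addresses
    for _ in range(sweeps):               # a bar sweeps the feature space,
        for p in range(n - 1):            # activating two adjacent positions at once
            a, b = addr[p], addr[p + 1]
            cooc[a][b] += 1
            cooc[b][a] += 1
    nbrs = {a: [b for b in range(n) if cooc[a][b] > 0] for a in range(n)}
    start = min(range(n), key=lambda a: len(nbrs[a]))   # an endpoint has one neighbour
    chain, prev = [start], None
    while len(chain) < n:                 # follow the chain of co-active addresses
        nxt = [b for b in nbrs[chain[-1]] if b != prev]
        prev = chain[-1]
        chain.append(nxt[0])
    return addr, chain
```

With a continuous sweeping stimulus, the recovered chain equals the hidden ordering or its reverse, which is exactly the topology-preserving map described above.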
We have started the hardware design of a prototype adaptive sender-receiver router in
asynchronous digital logic, using a combination of SRAM memory and a microcontroller.
7.5.8 1D Retina to 1D Receiver and to Koala Robot
Participants: G. Indiveri, S. Liu, T. Delbruck and T. Horiuchi
See Section 7.4.5
7.5.9 On/Off Retina to Silicon Cortex
Participants: A. Whatley, J. Kramer, and R. Douglas
The Telluride 1997 workshop was the first occasion on which external sensory input was
processed on the SCX. The workshop provided the opportunity (a) to complete the testing
of the hardware interface developed at the Institute of Neuroinformatics (INI) in Zurich,
and to demonstrate reliable communication to the SCX from a silicon retina, (b) to develop
a program to specify the receptive fields of silicon neurons in the SCX, and (c) to set up
and demonstrate orientation-selective neurons receiving their input from the silicon retina.
The silicon retina which we used was a 32 x 32 pixel on/off address-event (AE) generating
retina provided by Kramer. This was a modification of an earlier retina developed by
another workshop participant, Liu. In this retina, each pixel can produce two distinct AE
signals: an on signal for positive-going light intensity changes at the pixel, and an off signal
for negative-going intensity changes. These outputs correspond roughly to the outputs of
the bipolar cells in a mammalian retina.
The hardware interface between retina and silicon cortex was developed by Whatley
and Kramer at INI, but had not been tested in its final form before it was brought to
Telluride. It is an example of and prototype for the Point-to-Point (P2P) sender to Local
Address-Event Bus (LAEB) interface referred to in section 7.5.5 above. At the workshop,
the interface was shown to be reliable, running for many hours, or a few million events,
without failure. Presentation of a moving, broad-striped stimulus to the retina produced an
AE stream from which an image of the stimulus could easily be reconstructed (using
an HP 1660 series logic analyzer).
Orientation selectivity was selected as the first type of computation to be performed on
the AE stream from the retina by the SCX. For this purpose we needed to specify receptive
fields of neurons in the SCX taking the form of a line of pixels across the retina. In the
present version of the retina, the on and off responses from the pixels perform unequally at
the same bias settings, so the retina was tuned for the best performance of the on response,
and only the on signals appeared in the receptive fields of the neurons. A future on/off
retina that can be tuned such that both on and off responses perform well at the same bias
settings might be used to provide input to neurons with on-centre, off-surround receptive
fields.
The SCX requires that the connectivity between neuromimes be supplied in terms of
projective rather than receptive fields. (This is because of the forward-directed nature of
the table lookup operations it must perform in mapping from source to destination AEs.)
Therefore a program was written by Whatley during the workshop to generate the
appropriate projective field from a specification of the desired receptive field for a neuron.
This program takes as inputs the dimensions of the retina in pixels, the base address and
bit shifts of the x and y fields of the retinal addresses in the AE address space, the address
of a synapse on a neuron (all inputs are assumed to have the same weight, so the same
physical synapse can be re-used for all inputs), an angle in the visual field in degrees, and
the length of the receptive field in pixels. It emits a series of connection commands,
suitable for direct input to the SCX host software, that specify the required projective field.
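A minimal sketch of such a receptive-to-projective field inversion is shown below. The `connect <source AE> <synapse>` command syntax is hypothetical (the actual SCX host command format is not reproduced in this report), and centring the line on the retina is an assumption of the sketch.

```python
import math

def projective_field(width, height, x_shift, y_shift, synapse, angle_deg, length):
    """Emit one connection command per retinal pixel on a line-shaped
    receptive field, centred on the retina (centring is an assumption)."""
    cx, cy = width // 2, height // 2
    th = math.radians(angle_deg)
    cmds = []
    for i in range(length):
        d = i - length // 2
        x = cx + round(d * math.cos(th))
        y = cy + round(d * math.sin(th))
        if 0 <= x < width and 0 <= y < height:
            src = (x << x_shift) | (y << y_shift)   # pixel's address-event
            cmds.append("connect %d %d" % (src, synapse))
    return cmds
```

Because every source pixel maps to the same synapse, the inversion is simply one command per pixel on the line; a real projective field table would group these commands by source address.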
Using the above software, a series of command files were generated that specified
receptive field orientations at 0, 30, 45, 60, 90 and 135 degrees. When one of these
command files was used to establish the orientation selectivity of a neuron, the increased
firing rate of the (integrate-and-fire) neuron concerned could clearly be distinguished when
the broad-striped stimulus was moved across the visual field of the silicon retina with the
stripes in the preferred orientation, as compared with the firing rate observed when the
same stimulus was moved with the stripes oriented orthogonally to the preferred orientation.
This project will be pursued back at INI in Zurich, where more time and a more
controllable stimulus are available. (All stimulus presentations used during the workshop
were manually driven.) Quantitative data will then be obtained. The project will be used
as the basis for further work in which silicon retinae are used to provide input for silicon
cortical computation.
7.5.10 Computation- and Memory-Based Projective Field Processors
Participants: K. Boahen, A. Andreou, T. Hinck, J. Kramer, A. Whatley
At this workshop we investigated two ways to extend the point-to-point address-event
communication channel to include divergent (point-to-many) and convergent
(many-to-point) connections. The first approach uses integer arithmetic to compute the
destination addresses. This computation-intensive approach is tailored towards arrays
with translation-invariant receptive fields, where corresponding projective fields may be
computed simply by adding the same set of offsets to each incoming address-event. The
second approach uses look-up tables to obtain the destination addresses. This
memory-intensive approach is tailored towards arrays with space-variant receptive fields,
where the connectivity changes in a way that is difficult to compute. We had prototype
systems that used both of these
approaches at the workshop; we experimented with both systems and compared their
performance and ease of use.
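The computation-based approach amounts to adding a fixed set of offsets to each incoming address, as the following sketch illustrates. Clipping events that fall off the array is an assumption; the report does not say how the microcontrollers treated edge events.

```python
def fan_out(event, offsets, width=64, height=64):
    """Translation-invariant projective field: add a fixed set of offsets
    to each incoming address-event, clipping events that fall off the array."""
    x, y = event
    out = []
    for dx, dy in offsets:
        nx, ny = x + dx, y + dy
        if 0 <= nx < width and 0 <= ny < height:
            out.append((nx, ny))
    return out
```

For example, a cross-shaped offset set (centre plus four neighbours) yields the fan-out of 5 mentioned below: each incoming event generates up to five destination addresses.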
The first system had three 64x64-neuron AER receiver chips and six 8-bit
microcontrollers (Microchip PIC16C55). These microcontrollers work in parallel, with one
pair computing the X and Y addresses for each receiver chip. The throughput was 3M
addresses per second; this rate supports an incoming address-event rate of 83K with a
fan-out of 5. This system was rather slow because the microcontrollers operated at only
5 MIPS, and about 10 instructions were executed to perform the handshaking protocol and
to add the offset to the address. The system was also difficult to use, as we had to modify
programs written in assembly language to implement new projective fields.
The second system had four 64x64-neuron AER receiver chips and five 64K x 16 EPROM
chips. However, this prototype only supported a single outgoing address per receiver chip
for each incoming address. At the workshop we designed and started building a
memory-based system that supports projective fields. The new design uses an EEPROM
to store the projective field mapping and a microcontroller to perform the handshake
protocols and to sequence through several locations in the EEPROM for each incoming
address. We completed the program for the microcontroller as well as the address mapping
for the EEPROM. Further debugging of the hardware and software will continue between
Hopkins and Boston University.
7.5.11 Cochlea to 1D Receiver and to Koala Robot
Participants: A. Andreou, D. Klein, P. Verschure
(See also the Neurobots workgroup)
The goal of this project was to interface a silicon cochlea to a 1D address-event receiver
chip, and to a small mobile robot (Koala). In spite of our late start, we managed to build
and debug a circuit board. Unfortunately, it took longer than any of us expected to connect
one of the available microphones to the chip. As a result, we could not test the board on
Koala. The project will be continued and completed in the Baltimore/Washington area
between the Hopkins and the University of Maryland groups.
In conjunction with the 1D-retina to Koala interface, this project provides an interesting
opportunity to perform the coordinate transformations required for multimodal sensor
fusion by rerouting address-events. These opportunities will be explored in the coming year
and we will have more to report and evaluate at next year's workshop.
7.5.12 Neuronal Networks on Silicon Cortex (SCX)
Participants: E. Niebur and A. Whatley
Niebur proposed using the SCX to emulate a small neuronal circuit in which relative
timing of neuronal events is important.
The underlying idea is that the time structure of neuronal spike trains on the scale of a
few milliseconds can be used to transmit information which is complementary to the more
commonly used average spike rate code. This technique significantly increases the used
bandwidth of the neural information transmission channels. In particular, E. Niebur and C.
Koch suggested that the location of the focus of selective attention might be coded in the
temporal structure. In two papers (Niebur, Koch and Rosin, 1993; Niebur and Koch, 1994)
they showed that a simple neuronal circuit consisting of mutually inhibitory interneurons
will suppress unattended stimuli in the presence of attended stimuli. The interneurons are
capable of decoding the incoming temporal information (either oscillations or synchronicity
in their input). All neurons are implemented as biologically plausible variants of simple
integrate-and-fire neurons.
The SCX project was planned to implement two groups of neurons on two different chips.
On chip 1, a simple time structure was imposed on a subset of the neurons, which were
designated to code for the attended stimulus. The other neurons on this chip responded to
"unattended" stimuli, coded by the same average spike rate they would produce in the
presence of an attended stimulus but without the additional modulation. On chip 2, the
above-mentioned circuit is used to separate attended from unattended stimuli.
After a brief period of familiarization with SCX host commands (aided by the
documentation previously written by Whatley) and with the architecture of the particular
Multi-Neuron Chips (MNCs) available for use at the workshop, Niebur was able to generate
a command file that could be read by the SCX host software to configure the SCX in the
required way to emulate a small, prototype version of the circuit to be studied. In doing
so, he became the first user of the SCX system from outside its originating lab. A. Whatley
assisted in taking data from the neuronal circuit implemented, and this data was then
analyzed off-line by Niebur. This analysis showed no significant correlation between the
firing of the neurons that were expected to be correlated. This may have been because no
attempt had at that point been made to tune the synaptic weights in the circuit to obtain
the desired behavior. Further investigation was not attempted during the workshop, but
could be pursued if either Niebur visits the Institute of Neuroinformatics in Zurich, where
the SCX project is based, or the SCX host software is developed to the point at which the
SCX can be controlled, and data gathered, across the internet (in a manner analogous to
the way in which Jonghan Shin, based at Caltech, is presently able to use the single silicon
neuron test setup in Zurich). This is a possible future development of the SCX project,
though considerable development effort will be required.
Even if no further progress is made on studying the particular neuronal circuit used in
this project on the SCX system, the experience of having an independent end-user, Niebur,
work with the SCX has been a valuable one. The lessons learned about the level and type
of documentation required, and about the kinds of functions an end-user wants to perform
should help towards Whatley's stated interest in "Making the SCX system into a usable
platform for others to use...".
7.6 Active Vision Workgroup
Organizers: Brian Scassellati and Matt Marjanovic
Scassellati and Marjanovic brought an active vision system developed as part of the Cog
Project at the MIT AI Lab (see figure 16). The system consists of a four degree-of-freedom
head with four color cameras for input and output, and a network of eight TI C40 DSPs
for computation.
The intent of the lab was to give participants some hands-on experience with a functional
and capable active vision system. Brian and Matt developed a number of basic visuomotor
tasks as examples of system capability and algorithmic possibility, and participants were
encouraged to expand upon these examples and develop tasks of their own. Sample tasks
included subtraction-based motion detection, saccading to motion, smooth pursuit tracking,
vergence, and pixel-level visual adaptation.
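Subtraction-based motion detection, the simplest of these tasks, reduces to thresholding the absolute difference of successive frames. A minimal sketch (the threshold value is arbitrary, and the real DSP implementation is not reproduced here):

```python
def motion_mask(prev, curr, thresh=10):
    """Flag pixels whose intensity changed by more than `thresh`
    between two successive frames."""
    return [[abs(c - p) > thresh for p, c in zip(pr, cr)]
            for pr, cr in zip(prev, curr)]
```

The centroid of the resulting mask can then serve as a saccade target, which is how the subtraction-based detector feeds the saccading-to-motion task.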
With the powerful, general computational hardware available, some participants
discovered that the vision system was a useful testbed for digital simulation of algorithms
which they intended to later implement using more specific, but cheaper, analog VLSI
technology.

Figure 16: Active vision system hardware

The following projects were implemented on the active vision system while at Telluride:
7.6.1 Local Motion Detection
Participant: R. Beare
Beare implemented a simulation of a new local motion detection architecture that
exploits nonlinear interactions to encode the sign of the change in contrast and to compress
the system's dynamic range requirements. These detectors only indicate the direction of
motion.
Beare also completed some of the postprocessing stages that he has been developing
to perform motion-based segmentation. This scheme does not require explicit velocity
estimation, as is typical. Instead, the technique attempts to maintain a representation of
the relationships between simple primitives (in this case, motion detector responses). The
following preprocessing steps were implemented during the workshop:
a) Voronoi thresholding - a scheme that performs thresholding using only local
information and does not require any global image statistics. It also creates a useful
representation of local geometric structure.
b) Delaunay graph formation - the (approximate) Delaunay graph of the thresholded
image is created from the Voronoi diagram. The graph is used to represent the relationships
between important pixels. Tracking of graph elements is the basis for the segmentation
scheme.
7.6.2 Orientation Selectivity
Participant: C. Rasche
Goal: To test a dendrite model for computing orientation selectivity.
Method: The adaptation algorithm leaves an image of the moving objects which decays
according to a selectable time constant. This adaptation image is the input to the dendrite
model. Clusters of synaptic input are oriented in straight lines. Activation of a cluster
occurs by simultaneous stimulation of a preponderance of the synapses. Serial activation
of several clusters belonging to the same dendrite integrates in the soma and contributes
to cell firing (see figure 17).

Figure 17: Architecture of the dendrite model for computing orientation selectivity
Results: Orientation selectivity for horizontal bars moving to left or right worked for
motion of moderate speeds. Further improvements would involve tuning cluster thresholding
and cluster size.
7.6.3 Smooth Optical Flow
Participant: A. Stocker
Stocker created a software implementation of a smooth optic flow algorithm which he plans
to implement in analog VLSI. The algorithm is based on the standard optic flow constraint
equation, which assumes that overall image intensity does not change during the
computational time window. The constraint is solved for each pixel, with influence from
surrounding pixels, which helps to solve the aperture problem. The pixel influences are
weighted according to a Gaussian distribution, incorporating the assumption that
neighbouring pixels are more likely to receive their visual input from the same object than
spatially separated ones. The highly iterative algorithm processed 5 frames per second.
The algorithm was shown to work qualitatively, and is expected to benefit from the
parallelism inherent in the forthcoming analog VLSI implementation.
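A sketch of the per-pixel computation under these assumptions: the brightness-constancy constraint Ix*u + Iy*v + It = 0 is solved in the least-squares sense over a Gaussian-weighted neighbourhood, which resolves the aperture problem wherever the local gradients span both directions. This is a plain least-squares reading of the description; Stocker's actual formulation may differ.

```python
import math

def flow_at(Ix, Iy, It, cx, cy, radius=2, sigma=1.0):
    """Least-squares solution of Ix*u + Iy*v + It = 0 over a
    Gaussian-weighted neighbourhood around pixel (cx, cy)."""
    a = b = c = p = q = 0.0
    for y in range(cy - radius, cy + radius + 1):
        for x in range(cx - radius, cx + radius + 1):
            w = math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
            a += w * Ix[y][x] ** 2        # accumulate the normal equations of
            b += w * Ix[y][x] * Iy[y][x]  # the weighted least-squares problem
            c += w * Iy[y][x] ** 2
            p -= w * Ix[y][x] * It[y][x]
            q -= w * Iy[y][x] * It[y][x]
    det = a * c - b * b
    if abs(det) < 1e-12:   # aperture problem: local gradients do not span 2-D
        return 0.0, 0.0
    return (c * p - b * q) / det, (a * q - b * p) / det

# Synthetic gradient pattern translating with velocity (u, v) = (1, -1):
n = 9
Ix = [[(x % 2) + 1 for x in range(n)] for y in range(n)]
Iy = [[(y % 2) + 1 for x in range(n)] for y in range(n)]
It = [[Iy[y][x] - Ix[y][x] for x in range(n)] for y in range(n)]
u, v = flow_at(Ix, Iy, It, 4, 4)
```

Because the synthetic gradients vary in direction across the neighbourhood, the system is well-conditioned and the recovered flow matches the imposed translation.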
7.6.4 Future Projects
Two future projects were also discussed during the workshop. Giacomo Indiveri and Matt
Marjanovic suggested an implementation of optic flow techniques for motion estimation.
Krishna Shenoy and Brian Scassellati discussed the implementation of a model of eye
movements that incorporates compensation for ego-motion through vestibular feedback,
retinal
slip information, and efference copy. The implementation of this model requires a vestibular
system, which was not available at the workshop, but does exist on the Cog robot at MIT.
The implementation of this model will occur as a post-workshop collaboration.
7.7 Attention and Selection Discussion Group
Organizer: Ernst Niebur
During the introduction of participants at the workshop it became clear that a significant
fraction of them have an interest in selective attention. Therefore, it was decided to organize
an informal discussion group on "Selection and Attention." Some of the group members
had experience with modeling selective attention (mostly in the visual domain); one
(Horiuchi) had experience with hardware (aVLSI) implementations of attentive selection
mechanisms. The group had two "official" meetings (with about 12 and 8 participants,
respectively) and a number of informal discussions between members.
7.7.1 Computational Role of Selective Attention
Much of what is called "information processing" can, in fact, be understood as selection
of a part of the available data. For instance, in the visual domain, edge detection is a
procedure in which only a small (essentially one-dimensional) set of pixels is selected and
all other pixels (a 2-dimensional set) are discarded. While this and many other
center-surround operations are clearly of eminent importance for image understanding, it
seemed advisable to limit discussion to a smaller subset of selection processes. As an
operational definition, it was suggested to limit discussion to those selection processes
which lead to sequential processing.
Nevertheless, differing interpretations of the concept of "attention" turned out to be a
major problem. The following points seem to form a common denominator:
- selective attention has bottom-up ("data-driven") as well as top-down
("task-dependent") components;
- at least one of the important functions of selective attention is the limitation of the
stream of sensory data, which is too large to be processed in detail (filtering).
7.7.2 Neuromorphic Implementation
Although there have been numerous models of attentive mechanisms at multiple levels of
biological realism and complexity, there has apparently been only one VLSI implementation
of covert selective attention, by Morris in collaboration with Horiuchi. One of the
interesting experiences with this model was that the implemented WTA mechanism had to
be "slowed down" to avoid oscillations between multiple candidate targets.
Topics for future discussion include the question of which data sets are suitable for
aVLSI implementations of attentive selection mechanisms. The work of Morris and
Horiuchi is so far limited to the selection of one-dimensional edges. While this should be
suitable for demonstrating the principles, it seems highly desirable to include richer data
sets. In particular, it seems desirable to extend the input data sets to two dimensions in
order to make the selection of shapes possible.
Nevertheless, the type of output from today's VLSI designs seems to present a challenge
to selection processes. For instance, it is probably difficult to implement the behaviorally
extremely important (in primates) selection for color.2 Likewise, selection for shape seems
difficult, since current aVLSI designs are close to the sensors while biological shape
recognition is quite far removed from the sensory surfaces.
7.7.3 Limits are good for you
One of the insights developed during the discussions was that filtering the data may have
more functions than limiting the amount of data presented to more central processors
for the sake of avoiding information overload. Instead, it was pointed out by Andreou
that preselection may be extremely useful even in cases where availability of processing
power is not a limiting factor. Indeed, he found that speech-understanding models worked
considerably better when they worked on pre-selected data sets. The explanation of this
result is that the model used for speech works better in a smaller part of the state space
than in an unrestricted space. This insight fits well with the observed modularity of
biological sensory systems (best studied in vision but certainly present also in other sensory
domains) in which specialized subsystems transform their output into a form which is
suitable as input for the next stage (note that this does not imply a strictly hierarchical
structure).
7.7.4 Winner Take All and Company
A Winner-Take-All (WTA) mechanism is often a suitable description of attentional
selection, at least functionally. The reason is that, at least if motor action is the final
"output" of the selection, a unique solution has to be found, since eventually a single
configuration of motor actuators has to be achieved. Whether the standard models for
WTA are suitable for modeling, and what the alternatives to them are, was discussed but
not resolved.
We also discussed possible alternatives to WTA but did not come to a clear conclusion.
It seems that the conceptual simplicity and clear functional role of WTA led all those
participants who did any modeling work to implement one form or another of WTA
mechanism.
There is, however, a certain number of subspecies of WTA implementations. In
particular, it turned out that the aVLSI implementation by Morris and Horiuchi required
"slowing it down" in order to avoid oscillations between multiple potential targets. More
specifically, these authors introduced a hysteresis function superimposed on the basic WTA
algorithm.
The notion of "winner" was also discussed, and it was emphasized that it is usually not
a single pixel (in vision); instead, the size (and possibly shape) of the selected region will
in general depend on the input and the task. Most WTA algorithms do take this into
account.
A variant is the "self-extinguishing" WTA, used e.g. in the attentional model of Niebur
and Koch. Based on psychophysical evidence ("inhibition of return") and functional
considerations, these authors provided the WTA mechanism with an inhibitory feedback to
the source of its input and thereby generated an internal dynamics which leads to
autonomous scanning of the visual input scene.
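Functionally, the self-extinguishing WTA can be sketched in a few lines: pick the most salient location, then inhibit it so that the next winner is elsewhere, yielding an autonomous scan. This is a toy illustration of the mechanism, not the Niebur-Koch model itself.

```python
def scan_saliency(saliency, steps=3, inhibition=1e9):
    """Self-extinguishing WTA: select the most salient location, then
    inhibit it ('inhibition of return') so the next winner differs."""
    s = list(saliency)
    visited = []
    for _ in range(steps):
        w = max(range(len(s)), key=lambda i: s[i])
        visited.append(w)
        s[w] -= inhibition        # feedback inhibition to the winner's input
    return visited
```

With a saliency map of [1, 5, 3, 4], the scan visits locations in decreasing order of saliency, which is the autonomous scanning behavior described above.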
2 Some work has been done on color processing with VLSI by Perez and Koch.
7.7.5 Selective Attention and Beyond
One heatedly discussed question was that of "what comes after attentional selection?"
Unfortunately, this discussion was essentially held before Andersen's talk; a different order
might have injected more data. As it was, we focused mainly on the role of memory. It was
suggested that the attentional selection process has only very limited capabilities (much
along the lines of Koch and Ullman, 1985) and is only able to "cut out" spatially defined
regions of the visual field. Selected items are then "deposited" into short-term memory.
This STM may be endowed with task-dependent mechanisms which operate on the selected
regions, much in the spirit of Ullman's "Visual Routines," but possibly also including object
recognition, determination of spatial relations between selected items, etc. These more
sophisticated algorithms (too complex or expensive to be applied to the 'raw' scene before
attentional selection) are then applied to the selected regions.
A related question concerns communication of the attended region (in the case of
spatially defined attention) to the object-recognition pathway. In other words, assuming
that a certain part of the visual field has been selected, what are efficient mechanisms to
communicate this result to the pattern recognition mechanisms? A straightforward
possibility is to simply suppress, very early on, all input except that coming from the
selected region. This is not the biological solution (it is observed that attentional selection
leads to minimal changes in average firing rates at the level of primary visual cortex) and,
indeed, it has significant drawbacks. For instance, to avoid "blindness" outside the selected
region,
one would need a second visual system, not affected by attentional selection. Niebur and
Koch have recently suggested an alternative which de-correlates attentional selection and
coding of sensory input and allows simultaneous preservation of the latter (in the average
firing rate) and the former (in the details of the temporal structure of spike trains). It is
unclear whether this or similar coding schemes could be used for aVLSI implementations of
attentive selection mechanisms. While the Address-Event Representation is promising, not
only because it allows the construction of larger systems than are possible on individual
chips, but also because it explicitly allows computation with spikes and thus replaces (or,
rather, complements) the analog computation used so far, the suggested synchronicity code
may run into problems since it may lead to frequent collisions. Whether this is a serious
limitation or not will have to be decided on the basis of detailed simulations or
implementations.
7.7.6 Critique
Overall, a more structured organizational framework would have been more efficient. In
particular, it would have been preferable if the interests of the participants had been known
in advance. This would have allowed the organization of short (e.g. 20-minute)
presentations of the work the participants had done previously (this was done to some
extent, but only on an informal basis between individual members).
Also, it was unfortunate that both of the two "official" presentations on attention (by
Koch and Andersen) fell in the second half of the last week. Of course, some things have
to come last, but (and this relates to the previous paragraph), if it had been known that
there was considerable interest in selective attention, a more formal discussion group could
have been organized with a brief tutorial at the beginning. That way, at least a common
language would have been available from the beginning.
7.8 Auditory Processing Discussion Group
Organizer: Te-Won Lee
The project combined two lines of research: auditory scene analysis (Shamma) and
blind source separation, related to the cocktail party effect. The workshop included
lectures on Auditory Physiology I & II (Shamma) and Audition in aVLSI (Andreou), and
Matlab demonstrations of auditory scene analysis.
The new idea was inspired by the auditory physiology discussions and the auditory
spectrum analysis toolkit developed in Shamma's laboratory. The spectrum is computed
with a bank of cochlear filters whose properties have been derived empirically. However,
given this representation, one open question is how the information in the auditory
spectrum is further processed. We know that the auditory system is highly efficient and
that a person is capable of focusing on one signal source while several noise signals interfere
(the cocktail party effect). Hence, is there an efficient coding algorithm that might be
revealed in the auditory system? A possible answer that relates to the separation of
unwanted signals is redundancy reduction via information maximization, which leads to
Blind Source Separation (BSS) by Independent Component Analysis (ICA). Therefore, the
basic idea is to perform ICA on the auditory spectrum. This approach may give us insights
into the computational efficiency of the cochlear filters and the neural coding strategy for
the auditory spectrum.
Traditionally, ICA algorithms have been developed on the time-domain representation.
The reasons why ICA in the frequency domain has not been used so far are as follows:
1) the frequency representation involves complex numbers, and a simple neural activation
function for complex numbers is undefined; 2) the FFT introduces a nonlinear
transformation that the linear ICA algorithm may not be able to solve.
Problem 1 can be avoided by using a Complex Neural Network (CNN). The CNN, with
two sigmoidal activation functions (one for the real and one for the imaginary part), is an
alternative equivalent to the sigmoidal activation function in the time domain. For problem
2, since the FFT is applied to the whole dataset x, we observe that the nonlinear transfer
function affects every data point x_i, so that the linear mixing ratio remains valid in the
FFT representation of the data. Therefore, we can apply a slightly modified learning rule
for ICA to the frequency representation of the data.
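The key observation here, that a linear mixture of sources in the time domain carries the same mixing coefficients in the frequency domain, follows from the linearity of the Fourier transform and can be checked directly. A naive DFT on toy signals is used for illustration:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (O(n^2), for illustration only)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

# A mixture formed in the time domain...
x1 = [0.0, 1.0, 0.0, -1.0]
x2 = [1.0, 1.0, -1.0, -1.0]
a, b = 0.7, 0.3
mix_time = [a * u + b * v for u, v in zip(x1, x2)]

# ...carries the same mixing coefficients in the frequency domain,
# so an unmixing matrix found there is valid for the sources.
lhs = dft(mix_time)
rhs = [a * u + b * v for u, v in zip(dft(x1), dft(x2))]
```

This is why an unmixing matrix learned on the frequency representation can be applied before the inverse FFT, as in the cello/piano experiment described below.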
Figure 18 shows an example of a computer simulation experiment in which we mixed two
signals from two different instruments (cello and piano). After one iteration through the
data, the unmixing matrix is found and the signals are separated in the frequency domain
and reconstructed using the inverse FFT.
To perform ICA on the auditory spectrum we have to confront the following problem:
the cochlear filters are ordered such that the frequency axis of the frequency-time
representation (auditory spectrum) is not linear but logarithmic. The log-scale
representation, however, introduces a new nonlinearity for the ICA analysis that changes
the statistics of the signals and does not allow a simple solution with the modified ICA
algorithm. This point remains an open question and requires further investigation before
we can speculate about efficient coding strategies using cochlear filters. However, my
assumption is that adaptive nonlinearities in the ICA formulation can match the imposed
(log-scale) nonlinearity so that maximum information flow is guaranteed. Then we will be
able to perform ICA on the standard auditory spectrogram. Further progress will be
reported when new results are available.
Apart from the auditory processing workshop, there were diverse discussions with
other researchers, with the following conclusions about future research:
Figure 18: Reconstruction of two separate auditory signals (cello and piano) from a mixed
input signal
1. Optimization of the ICA algorithm to deal with time delays and convolved signals
(Jayadeva).
2. Building an ICA chip in aVLSI for separating room recordings (Cauwenberghs).
3. Separation of olfactory sensor data into independent components (Kauer).
7.9 Locomotion Discussion Group
Organizer: Avis Cohen
This group consisted of roughly 10 people, most of whom were highly active. The group
met nightly for the last two weeks of the workshop. The goal was to share the problems and
solutions involved in building locomoting robots. We began with presentations of work done
prior to the meeting, and continued with discussions of how one would model a complete
system from neurons to the mechanically implemented system. We also had presentations
and discussions of work begun at the meeting. Several projects both at the meeting and for
future work emerged as a result of the interaction among the participants:
7.9.1 Walking Robot
Participant: C. Higgins
(See also robots workgroup)
Higgins built a robot using Mark Tilden and Susanne Still's oscillator circuit. It had
both photo and tactile sensors, but lacked the ability to turn in response to them. He hopes
to add a motion chip to the robot.
7.9.2 Walking Robot with aVLSI CPG controller
Participants: G. Cymbalyuk, G. Patel, and S. Still
(See also robots workgroup)
This group combined Patel's Morris-Lecar aVLSI oscillatory neurons with Still's robots.
They achieved a walking robot in which the phase differences among the legs could be
changed to produce different gaits. This work will be expanded by Still and combined with
floating-gate technology to achieve an autonomous robot with the capacity to learn its
coupling and spontaneously alter its gaits.
7.9.3 Theory demonstrated by aVLSI hardware
Participants: G. Cymbalyuk, G. Patel, and A. Cohen
This group tested the claim that weak inhibitory coupling could produce both in-phase
and out-of-phase oscillations in a physical system. Using Girish's aVLSI oscillatory neurons,
they demonstrated that this claim is true. This is resulting in a short note submitted to
Neural Computation.
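The bistability the group demonstrated in silicon can also be illustrated in software with a reduced phase model. This is only a sketch under an assumed coupling function (first plus second harmonic), not the Morris-Lecar circuit they actually used: when the second-harmonic term is strong enough, both the in-phase (phi = 0) and anti-phase (phi = pi) states of the phase difference are stable, and the initial condition selects between them.

```python
import numpy as np

def phase_diff_trajectory(phi0, a=1.0, b=1.0, dt=0.01, steps=5000):
    """Integrate d(phi)/dt = -a*sin(phi) - b*sin(2*phi) with forward Euler.

    phi is the phase difference between two weakly coupled oscillators.
    With b > a/2, both phi = 0 (in-phase) and phi = pi (anti-phase) are
    stable fixed points; the basin boundary sits at cos(phi) = -1/2.
    """
    phi = phi0
    for _ in range(steps):
        phi += dt * (-a * np.sin(phi) - b * np.sin(2 * phi))
    return phi % (2 * np.pi)

# A small initial mismatch settles in-phase; a large one settles anti-phase.
in_phase = phase_diff_trajectory(0.3)
anti_phase = phase_diff_trajectory(2.5)
```

Sweeping `phi0` across (0, 2*pi) shows the two basins of attraction directly, which is the software analogue of the experiment with the aVLSI oscillatory neurons.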
7.9.4 System Implementation using Khepera Robots
Participants: P. Verschure and A. Cohen
This group used the Khepera system to test the effect of the slowly decaying excitation
that is elicited by stretch receptors in the lamprey spinal cord. They demonstrated that
this positive feedback, when connected to a standard six-cell model of the spinal oscillator,
does not lead to runaway. The system stabilizes at a higher frequency than baseline,
but can tolerate a full range of feedback strengths without losing stability.
7.9.5 Future Projects
Tilden is planning to build a six-segment lamprey model. He will interact with Cohen to
achieve biological plausibility and efficiency. Tilden had to leave the workshop prior to
beginning this work, but hopes to complete it this fall.
7.10 Motion Discussion Group
Organizer: Chuck Higgins
The purpose of this discussion group was to learn in more detail about motion research
in progress, exchange research ideas and problems, and discuss issues in motion processing.
With 5 meetings and about 20 people in attendance, we were one of the largest groups.
Seven people talked about their work and got feedback from a wide range of viewpoints.
Heated discussion on motion chip design strategies and the future role of neuromorphic
motion processing raised issues we weren't completely able to resolve, but brought a diverse
range of backgrounds into the argument. In collaboration with other groups, at least 6
motion projects were completed at the workshop by group attendees.
7.10.1 Talks
At our first meeting, the first talk was given by Chuck Higgins on a recently fabricated
direction-of-motion array. The speaker's intention to integrate the local optical flow to
solve the aperture problem led to discussion of the Fourier analysis of 2-D motion stimuli, 2-D versus
PROS                              CONS
fast (not subject to clocking)    low fill factor
continuous-time                   high complexity
no temporal aliasing              high mismatches
random access                     lower resolution
biologically plausible            uses more area
may be scalable                   harder to do long-range connections
requires less area for storage    high offsets
Table 1: Parallel versus serial motion processing in focal plane arrays
1-D motion detection, and motion segmentation. A demo of the speaker's chip followed.
The second talk was given by Marwan Jabri, who presented an early physiologically based
model of direction selectivity in the deep layers of the superior colliculus. It was pointed
out to the author that a reversal of contrast reversed the output direction of motion of his model.
Between our first and second meetings, Jorg Kramer and Reid Harrison gave morning
talks about their respective motion chips, describing feature-based motion processing chips
and a Reichardt-based model.
At our second meeting, Ralph Etienne-Cummings began by describing his past work on
feature-based velocity chips, culminating with a large foveated tracker for which he showed
an impressive demo video. He then continued with his most recent work, which involves
serial processing of motion on the periphery of a chip to obtain a high resolution and fill
factor for imaging. This led to a discussion of serial versus parallel motion processing,
which continued in our fourth meeting. Our second speaker was Alan Stocker, who described
his plans to extend the original Tanner-Mead chip by adding a resistive network and possibly
contrast-mediated resistive fuses in an attempt to obtain motion integration only over a
segmented object.
At our third meeting, Titus Neumann told us about his work on egomotion detection for
a flying helicopter platform. Using an array of Reichardt detectors on certain 1-D subregions
of the image, his algorithm detects rotation, translation, and expansion/contraction of the
optical flow field. He showed an impressive software demo of the algorithm using a CCD
camera. Our second speaker was Richard Beare, who suggested a new shunting-inhibition
model for direction selectivity. This led to a discussion of first- versus second-order motion
stimuli. Richard also showed a software demo (written at the workshop) of his algorithm
running on the Cog head system.
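For readers unfamiliar with the detector family behind these talks, a single correlation-based (Reichardt) elementary motion detector can be sketched as follows. The first-order low-pass filter used as the delay stage and the drifting-sine test signals are illustrative assumptions, not any particular chip's circuit.

```python
import numpy as np

def lowpass(x, alpha=0.1):
    """First-order low-pass filter; serves as the delay stage."""
    y = np.zeros_like(x, dtype=float)
    for i in range(1, len(x)):
        y[i] = y[i - 1] + alpha * (x[i] - y[i - 1])
    return y

def reichardt(a, b, alpha=0.1):
    """Elementary motion detector over two neighboring photoreceptors a, b.

    Each half-detector correlates one receptor with the delayed signal of
    its neighbor; subtracting the mirror-symmetric half-detector gives an
    output whose sign indicates direction (a-to-b motion is positive).
    """
    return lowpass(a, alpha) * b - a * lowpass(b, alpha)

# A drifting sinusoidal pattern: receptor b sees the same signal as a,
# but later in time, i.e. the pattern moves from a toward b.
t = np.arange(4000) * 0.01
a = np.sin(t)
b = np.sin(t - 0.5)
out = reichardt(a, b)   # time-averaged response is positive
```

Averaging such detectors over 1-D subregions of the image, as in Neumann's algorithm, yields local direction estimates from which global rotation, translation, and expansion patterns of the flow field can be read off.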
At our fourth meeting, our discussion of serial versus parallel motion processing (led
by Ralph Etienne-Cummings) continued. The result of this discussion is outlined in Table
1. Additionally, Ali Moini told us about the series of insect-inspired motion chips he has
worked on in the "Bugeye" group at Adelaide University.
Our final meeting was held at lunch on the subject of "The Future of Neuromorphic
Motion Processing." This was a lively discussion covering options for hardware and software
motion processing and the benefits and deficits of analog neuromorphic chips. We basically
decided that, for modeling purposes, there has yet to be a motion system which could not
be simulated in real time on a DSP chip. For applications, we concluded that DSPs can
be displaced only where the low-power, low-weight advantages of neuromorphic analog chips
are important.
7.10.2 Projects
In collaboration with the Sensorimotor workshop:
* Mark Blanchard interfaced a 1-D retina chip to a Khepera robot and a Matlab simulation to implement his locust LGMD model.
* Titus Neumann helped interface Jorg Kramer's 1-D velocity sensors to a Koala robot for line tracking.
* Ralph Etienne-Cummings and Martin Lades worked on interfacing Ralph's foveated tracking chip to a pan/tilt head they brought.
* Ali Moini interfaced one of Ralph's motion detector chips to a wheeled vehicle for line tracking.
In collaboration with the Active Vision workshop:
* Richard Beare implemented the shunting-inhibition motion algorithm he talked to us about on the Cog head in real time.
* Alan Stocker implemented the constraint-solving motion algorithm he talked to us about on the Cog head.
7.10.3 Goals for Next Year
Since we concluded that there is not yet a motion modelling system which could not be
simulated on a DSP chip, we decided that we must commit ourselves to producing motion
modelling systems which do more than low-level optical flow. Some form of motion
integration product must be produced.
On the application side, we decided that we must commit ourselves to producing systems
which clearly demonstrate the singular low-power, low-weight advantages of neuromorphic
motion processing. Autonomous vehicles and robots are the suggested application areas.
Also on the application side, a more complete knowledge of the commercially available
digital motion chips is necessary for a true evaluation of practical neuromorphic motion
chips. Andreas Andreou has volunteered to do this research in time for next year's workshop.
7.11 \Moving Towards Industry" Discussion Group
Organizer: Orly Yadid-Pecht
In this session we tried to bring together people covering the spectrum from industry,
government labs, and academia to talk about their experience with neuromorphic engineering.
We tried to understand what kinds of research projects would be appealing to industry and
what the best setup for collaboration with industry is. We wanted to analyze successes and
failures and see what could be learned from this experience. The areas covered were olfaction
and vision, with emphasis on vision since this is the area of interest of the majority of
researchers in this workshop. All researchers invited to talk have had some experience
with industry. At one extreme were active industry participants (Morris), while at
the other were "neurobiologists of silicon" (Koch, Indiveri and Kramer). The list of
speakers included: Wilson (Univ. Kentucky), Morris (Intel), Liu (Caltech), Andreou (Johns
Hopkins), Yadid-Pecht (JPL), Koch (Caltech), Indiveri (INI/Zurich), and Kramer (INI/Zurich).
7.11.1 Summing up the main points of the discussion
Wilson talked about applications of neuromorphic engineering to olfaction and artificial
chemical sensing systems. Commercial markets have a large deficit in chemical sensing
systems for low- to medium-cost consumer and manufacturing applications.
This deficit is caused in large part by the inability to extract sufficient accuracy from
low-cost chemical sensing technologies to adapt them to these applications. Many
of these sensing technology issues can be overcome by using large arrays and redundancy
of sensors, an approach that requires significant signal processing and careful
attention to architecture. Neuromorphic engineering is well suited to approaching the
chemical sensing problem in this manner, since biology also works with inherently
inaccurate and variable sensors, yet, in a well-designed signal processing architecture
and sensor organization scheme, overcomes these inaccuracies to achieve a high level
of discrimination capability and sensitivity.
Morris Intel has specific goals, since its main market is the PC. If Intel is to support
neuromorphic engineering, the work has to be something that ties into the PC, cannot be
done by the PC, and must be competitive with PC capabilities on a five-year horizon. There
are other issues that are not covered in academia but are of importance to industry
in general: performance measures, inexpensive testing, and high yield (it has, of course, to
be manufactured in the company's process). Her personal view is that there is a chance
of success for certain focal-plane processing.
Liu talked about her experience with Rockwell. She worked on a project which included
a neuromorphic retina, whose purpose was to count cars at traffic lights. The project
funded more than 10 people a year and was an engineering success, but a marketing
failure that had nothing to do with the nature of the project. Her bottom line:
a product's success does not rely merely on the engineers; marketing is as important,
perhaps even more so.
Andreou believes that toys are a good application to work on! He wants to emphasize his
view that this is an exciting technology with a bright future!
Yadid-Pecht talked about her experience at JPL. There are system constraints that
have to be taken into account; only then will the research solution you propose have
a higher success rate. You have to be creative working within the system
requirements! An example of success is the wide-dynamic-range active pixel sensor,
which will be presented in a special talk.
Koch emphasized that, from his experience, neuromorphic engineering will have a chance
of success in a wide variety of special-purpose applications where extraction of
information, as opposed to very dense visual arrays, is required. He believes that analog
VLSI will be able to match 512x512 CCD technology for imaging in the next 10 years,
when neighborhood processing abilities will be placed on chip. He also reported on
new models for funding that are probably best suited to neuromorphic engineering:
technology transfer, for general technology development.
Indiveri talked about his experience with the automotive industry. The share of
electronics in cars is increasing exponentially, and there is room for many small, local
control devices for various tasks in the car. For success, he believes it is important
to have a close connection between the research and the product; a good example
is the CSEM and Logitech mouse.
Kramer reported his ideas on how to improve the success rate, based on his experience
working with Prof. Peter Seitz: have a 1:1 collaboration with small or medium-sized
companies (as opposed to projects with several partners), present a "cool"
demonstration, and work in a "consulting" mode, i.e., show how to do it and let them
figure out the rest! The main issues are that the product has to be cheap to produce
and reliable under adverse conditions.
7.11.2 Message to take home
Involvement with industry has substantial overheads and some dangers (e.g., being drawn
into a tight development cycle not appropriate for universities), but also potential as an
alternative funding source and a mechanism for transferring ideas into the real world. This
research is still in its embryonic phase, and perhaps is not yet ready for wide-scale
industrial development.
References
[1] S. Still and M.W. Tilden, "Biologically analogous controller for 4-legged robot
locomotion", submitted to NIPS 1997, Denver, Colorado.
[2] Indiveri, G. and Verschure, P., "Autonomous vehicle guidance using analog VLSI
neuromorphic sensors", International Conference on Artificial Neural Networks
(ICANN97), 8-10 October 1997, Lausanne, Switzerland.
[3] Kramer, J., Sarpeshkar, R. and Koch, C., "Pulse-based analog VLSI velocity
sensors", IEEE Trans. Circuits and Systems II, vol. 44 (1997) 86-101.
[4] Niebur, E., Koch, C. and Rosin, C., "An oscillation-based model for the neural
basis of attention", Vision Research, vol. 33 (1993) 2.
[5] Niebur, E. and Koch, C., "A model for the neuronal implementation of selective
visual attention based on temporal correlation among neurons", Journal of
Computational Neuroscience, vol. 1 (1994) 141-158.
8 Appendix 1: Participants of the 1997 Workshop
Alphabetic list of everybody present at the workshop.
----------------------------------------------------------------------------
Organizers:
* Rodney Douglas(INI/Zurich) -
* Christof Koch (Klab/Caltech) -
* Terry Sejnowski (CNL/Salk Institute,UCSD) -
----------------------------------------------------------------------------
Our Telluride Summer Research Center Liaison
* Dan Whittet -
----------------------------------------------------------------------------
Technical Personnel:
* Dave Flowers, (Klab/Caltech) -
* Reid Harrison, (Klab/Caltech) -
* Chuck Higgins, (Klab/Caltech) -
* Giacomo Indiveri, (INI/Zurich) -
* Joerg Kramer, (INI/Zurich) -
* Dave Lawrence, (INI/Zurich) -
* Theron Stanford, (Klab/Caltech) -
----------------------------------------------------------------------------
Participants
* Richard Andersen , Caltech -
* John Anderson, Univ. Sussex -
* Andreas Andreou, Johns Hopkins -
* Richard Beare, Univ. Adelaide -
* Mark Blanchard, Newcastle -
* Eric Blom, Univ. Kentucky -
* Kwabena Boahen, Univ. Pennsylvania -
* Gert Cauwenberghs - ECE/Johns Hopkins -
* Avis Cohen, CNACS/Univ. Maryland, College Park -
* Jordi Cohen, Univ. Maryland -
* Gennady Cymbalyuk, Russian Academy of Sciences -
* Tobi Delbruck, Arithmos -
* Itiel Dror, Miami University, Ohio -
* Thomas Edwards, Univ. Maryland -
* Ralph Etienne-Cummings, EE/SIU -
* Fred Hamker, Tech. Univ. Ilmenau -
* Paul Hasler, Carverland/Caltech -
* Todd Hinck, Boston Univ. -
* Tim Horiuchi, Johns Hopkins -
* Marwan Jabri, SEDAL/Univ. Sydney -
* Jayadeva, Indian Inst. of Tech. -
* John Kauer, Tufts -
* Garrett Kenyon, Univ. Texas Medical School -
* David Klein, Univ. Maryland -
* Martin Lades, ISCR/LLNL -
* Stefano Lazzari, Univ. Pavia -
* Te-Won Lee, Salk Institute -
* Shih-Chii Liu, Carverland/Caltech/Rockwell -
* Markus Loose, Univ. Heidelberg -
* Jordi Madrenas, Univ. Politecnica de Catalunya -
* Matt Marjanovic, AI Lab/MIT -
* Alireza Moini, Univ. Adelaide -
* Tonia Morris, Intel -
* Tom Morse, Univ. Oregon -
* Predrag Neskovic, Brown Univ. -
* Titus Neumann, MPI Biol. Cybernetics, Tuebingen -
* Ernst Niebur, Johns Hopkins -
* Girish Patel, NMS/Georgia Tech -
* Adrienne Raglin, ARL/Georgia Tech -
* Christoph Rasche, INI/Zurich -
* Brian Scassellati, AI Lab/MIT -
* Tim Schoenauer, Tech. Univ. Berlin -
* Shihab Shamma, Univ. Maryland -
* Krishna Shenoy, Caltech -
* Susanne Still, INI/Zurich -
* Alan Stocker, INI/Zurich -
* Mark Tilden, Los Alamos National Labs -
* Paul Verschure -
* Xu (Shaw) Wang, Caltech -
* Adrian Whatley, INI/Zurich -
* Denise Wilson, Univ. Kentucky -
* Orly Yadid-Pecht, JPL/Caltech -
9 Appendix 2: Workshop Announcement
This announcement was posted on 1/22/97 to various mailing lists and to our dedicated
Web-Site.
"NEUROMORPHIC ENGINEERING WORKSHOP"
JUNE 23 - JULY 13, 1997
TELLURIDE, COLORADO
Deadline for application is April 1, 1997.
Christof Koch (Caltech), Terry Sejnowski (Salk Institute/UCSD), and Rodney Douglas
(Zurich, Switzerland) invite applications for a three-week summer workshop that will
be held in Telluride, Colorado in 1997.
The 1996 summer workshop on "Neuromorphic Engineering", sponsored by the National
Science Foundation, the Gatsby Foundation and the "Center for Neuromorphic Systems
Engineering" at Caltech, was an exciting event and a great success. A detailed report
on the workshop is available at http://www.klab.caltech.edu/~timmer/telluride.html
GOALS:
Carver Mead introduced the term "Neuromorphic Engineering" for a new field based on
the design and fabrication of artificial neural systems, such as vision systems,
head-eye systems, and roving robots, whose architecture and design principles are based
on those of biological nervous systems. The goal of this workshop is to bring together
young investigators and more established researchers from academia with their
counterparts in industry and national laboratories, working on both neurobiological as
well as engineering aspects of sensory systems and sensory-motor integration. The focus
of the workshop will be on "active" participation, with demonstration systems and
hands-on experience for all participants.
Neuromorphic engineering has a wide range of applications, from nonlinear adaptive
control of complex systems to the design of smart sensors. Many of the fundamental
principles in this field, such as the use of learning methods and the design of parallel
hardware, are inspired by biological systems. However, existing applications are modest,
and the challenge of scaling up from small artificial neural networks and designing
completely autonomous systems at the levels achieved by biological systems lies ahead.
The assumption underlying this three week workshop is that the next generation of
neuromorphic systems would benefit from closer attention to the principles found through
experimental and theoretical studies of brain systems.
FORMAT:
The three week summer workshop will include background lectures,
practical tutorials on aVLSI design, hands-on projects, and special
interest groups. Participants are encouraged to get involved in as
many of these activities as interest and time allow.
There will be two lectures in the morning that cover issues that are important to the
community in general. Because of the diverse range of backgrounds among the
participants, the majority of these lectures will be tutorials, rather than detailed
reports of current research. These lectures will be given by invited speakers.
Participants will be free to explore and play with whatever they choose in the
afternoon. Projects and interest groups meet in the late afternoons, and after dinner.
The aVLSI practical tutorials will cover all aspects of aVLSI design, simulation,
layout, and testing over the course of the three weeks. The first week covers basics of
transistors, simple circuit design and simulation. This material is intended for
participants who have no experience with aVLSI. The second week will focus on design
frames for silicon retinas, from the silicon compilation and layout of on-chip video
scanners, to building the peripheral boards necessary for interfacing aVLSI retinas to
video output monitors. Retina chips will be provided. The third week will feature a
session on floating gates, including lectures on the physics of tunneling and injection,
and experimentation with test chips.
Projects that are carried out during the workshop will be centered in a number of
groups, including active vision, audition, olfaction, motor control, central pattern
generator, robotics, multichip communication, analog VLSI and learning.
The "active perception" project group will emphasize vision and human sensory-motor
coordination. Issues to be covered will include spatial localization and constancy,
attention, motor planning, eye movements, and the use of visual motion information for
motor control. Demonstrations will include a robot head active vision system consisting
of a three degree-of-freedom binocular camera system that is fully programmable.
The "central pattern generator" group will focus on small walking robots. It will look
at characteristics and sources of parts for building robots, play with working examples
of legged robots, and discuss CPGs and theories of nonlinear oscillators for locomotion.
It will also explore the use of simple aVLSI sensors for autonomous robots.
The "robotics" group will use robot arms and working digital vision boards to
investigate issues of sensory-motor integration, passive compliance of the limb, and
learning of inverse kinematics and inverse dynamics.
The "multichip communication" project group will use existing interchip communication
interfaces to program small networks of artificial neurons to exhibit particular
behaviors such as amplification, oscillation, and associative memory. Issues in
multichip communication will be discussed.
PARTIAL LIST OF INVITED LECTURERS:
Andreas Andreou, Johns Hopkins.
Richard Andersen, Caltech.
Dana Ballard, Rochester.
Avis Cohen, Maryland.
Tobi Delbruck, Arithmos.
Steve DeWeerth, Georgia Tech.
Rodney Douglas, Zurich.
Paul Hasler, Caltech.
Christof Koch, Caltech.
John Kauer, Tufts.
Shih-Chii Liu, Caltech and Rockwell.
Stefan Schaal, Georgia Tech.
Terrence Sejnowski, UCSD and Salk.
Shihab Shamma, Maryland.
Mark Tilden, Los Alamos.
Paul Verschure, Zurich.
Paul Viola, MIT.
LOCATION AND ARRANGEMENTS:
The workshop will take place at the "Telluride Summer Research Center," located in the
small town of Telluride, 9000 feet high in Southwest Colorado, about 6 hours away from
Denver (350 miles) and 5 hours from Aspen. Continental and United Airlines provide many
daily flights directly into Telluride. Participants will be housed in shared
condominiums, within walking distance of the Center. Bring hiking boots and a backpack,
since Telluride is surrounded by beautiful mountains (several mountains are in the
14,000-foot range).
The workshop is intended to be very informal and hands-on. Participants are not required
to have had previous experience in analog VLSI circuit design, computational or machine
vision, systems-level neurophysiology or modeling the brain at the systems level.
However, we strongly encourage active researchers with relevant backgrounds from
academia, industry and national laboratories to apply, in particular if they are
prepared to talk about their work or to bring demonstrations to Telluride (e.g. robots,
chips, software). Internet access will be provided. Technical staff present throughout
the workshops will assist with software and hardware issues. We will have a network of
SUN workstations running UNIX, and Macs and PCs running Linux (and Windows).
We have funds to reimburse some participants for up to $500 of domestic travel and for
all housing expenses. Please specify on the application whether such financial help is
needed.
Unless otherwise arranged with one of the organizers, we expect participants to stay for
the duration of this three week workshop.
HOW TO APPLY:
The deadline for receipt of applications is April 1, 1997.
Applicants should be at the level of graduate students or above (i.e. post-doctoral
fellows, faculty, research and engineering staff, and the equivalent positions in
industry and national laboratories). We actively encourage qualified women and minority
candidates to apply.
Applications should include:
1. Name, address, telephone, e-mail, FAX, and minority status (optional).
2. Curriculum Vitae.
3. One page summary of background and interests relevant to the workshop.
4. Description of special equipment needed for demonstrations that could be
brought to the workshop.
5. Two letters of recommendation.
Complete applications should be sent to:
Prof. Terrence Sejnowski
The Salk Institute
10010 North Torrey Pines Road
San Diego, CA 92037
email:
FAX:
Applicants will be notified around May 1, 1997.