
EDITORIAL
What to feed the senses?


Sunny Bains

1 March 2007

A search for specific bandwidth numbers reveals more questions than answers.

I've been thinking about getting information into the human brain. A feature1 for Wired magazine that I finished recently discusses ways to feed in new (and potentially strange) kinds of information through the periphery: so, for instance, I got to try a ‘tongue display’ where images acquired by a camera on my forehead were fed into an array of pixels passing small currents through my tongue. I also got to use a haptic vest designed to help pilots understand their spatial orientation. I wanted to understand how much information the senses could handle (no point in supplying more), how much the brain could handle (without confusion), and how signals should be encoded.

I won't go into the details: you can check out the article or look at my blog2 if you're interested. But a number of interesting questions came up during my research: questions that the neuromorphic community may either be interested in or be able to help me with.

The first thing I found fascinating was how sketchy knowledge of sensory bandwidth seems to be, even though it should be crucial to all engineering in this area. For instance, one recent paper3 says the retina has about the same bandwidth as Ethernet, roughly 10Mb/s. But it doesn't seem to relate the image coming in through the eye to the signal being passed on to the brain. In particular, nowhere does the paper suggest that the retina might be doing some kind of compression, which seemed to me an important issue (even if only to address and dismiss). Also, I practically begged a researcher in tactile displays at the University of Wisconsin-Madison (who has worked on both tongue and fingertip displays) to tell me where I could find figures for tactile bandwidth. In his opinion, it was a meaningless question. I certainly couldn't find any useful literature on the subject myself: not for this or any of the other senses I was looking at.
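For what it's worth, a figure like 10Mb/s is the kind of number you get from a back-of-envelope calculation over the optic nerve. The sketch below (in Python) shows the arithmetic; the cell count and per-cell information rate are my own order-of-magnitude assumptions for illustration, not figures taken from the paper.

# Back-of-envelope estimate of retinal output bandwidth.
# Assumed (illustrative) numbers, not taken from the cited paper:
#   ~1e6 ganglion cells carrying the retina's output to the brain
#   ~10 bits/s of information per ganglion cell
ganglion_cells = 1_000_000       # assumed order-of-magnitude cell count
bits_per_cell_per_s = 10         # assumed per-cell information rate

retina_bandwidth_bps = ganglion_cells * bits_per_cell_per_s
print(f"Estimated retinal bandwidth: {retina_bandwidth_bps / 1e6:.0f} Mb/s")
# Prints roughly 10 Mb/s: the same order as classic 10Mb/s Ethernet.

Of course, an estimate like this says nothing about what the retina discards or compresses on the way, which is exactly the question I couldn't find addressed.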

The other main issue that started to intrigue me was attention. I know this makes me slow on the uptake, since this has been a ‘hot topic’ for a long time, but I had no particular reason to be deeply interested until now.

Specifically, I've become intrigued by two things: how attention is split up within a particular sense, and how it is split among the senses. My interest came from the fact that the tongue-display system I used, which was intended to help people with macular degeneration, felt very ‘visual’. My memories of using the device (blindfolded) are not of feeling sensations in the tongue, but of seeing a black-and-white, low-resolution world. I'm told that this ‘visual’ feeling is probably due to the fact that the information from the tongue is feeding into the visual part of the extrastriate cortex, the bit involved with mental imagery. Which raises a question: just how much imagery can a person handle, and does it matter where that imagery comes from?

An interesting development that relates a little to this is the work of some military researchers investigating the advisability of feeding different images (say, a close-up of a sniper and a view of the whole building or scene) into the left and right eyes. You can read the work yourself, but the bottom line is that it doesn't work well: performance drops and reaction times increase because the brain doesn't seem to be able to take it all in.4

There are many theories of attention, of course, but one presented recently by a colleague of mine here at Imperial College London seemed very persuasive.5 (Even though he used the ‘C’ word, consciousness, when he described it.) He presented new work on neural modelling of the global workspace theory, showing how different sensory inputs can compete with each other to produce psychological phenomena that we know take place, in a way that seems biologically plausible. Of course, it's still far from answering the engineering question, ‘Exactly how much can we usefully put in?’

One last thing I've been wondering about (to no avail so far) is whether the form of an incoming signal matters as much as its source. Specifically, if ‘visual’ information (images of remote objects, rather than sensations on the 2D periphery of the body) comes in through the tongue, how is the bit of the brain that deals with the tongue equipped to decipher it? Does it get help from some bit of the visual cortex (perhaps a higher-level part), or does it figure out how to process images on its own?

Any answers or leads would be much appreciated: and I promise to share them!




Author

Sunny Bains
Editor, The Neuromorphic Engineer
http://www.sunnybains.com


References
  1. Sunny Bains, Mixed feelings, Wired 15 (4), April 2007. http://www.wired.com/wired/archive/15.04/esp.html

  2. http://www.sunnybains.com/blog

  3. Kristin Koch et al., How Much the Eye Tells the Brain, Current Biology 16 (14), 2006.

  4. David Curry, Dichoptic image fusion in human vision system, Proc. SPIE 6224, 2006.

  5. Murray Shanahan, A Spiking Neuron Model of Cortical Broadcast and Competition, Consciousness and Cognition, 2007. In press.


 
DOI:  10.2417/1200703.0044




Tell us what to cover!

If you'd like to write an article or know of someone else who is doing relevant and interesting work, let us know. E-mail the editor to suggest the subject for the article and, if you're suggesting someone else's work, tell us their name, affiliation, and e-mail.