Welcome to The Neuromorphic Engineer
Adaptive, brain-like systems give robots complex behaviors

Despite recent advances in computational power and memory capacity, realizing brain functions that allow for perception, cognition, and learning on biological temporal and spatial scales remains out of reach for even the fastest computers. By contrast, these functions are easily achieved by mammalian brains. For example, a rodent placed in a water pool can find its way to a submerged platform, using visual cues to self-localize and reach a learned safe location. Even a best-case extrapolation for implementing such behavior at a functional level in an artificial brain based on conventional technology would consume several orders of magnitude more power and space than its biological counterpart. Clearly, the computational principles employed by a mammalian brain are radically different from those used by today's computers.

Classical implementations of large-scale neural systems in computers use resources such as central processing unit (CPU) and graphics processing unit (GPU) cores, mass memory storage, and parallelization algorithms. Designs for such systems must cope with power dissipation from data transmission between processing and memory units. By some estimates, this loss is millions of times the power required to actually compute, in the sense of creating meaningful new register contents. Such a high transmission loss is unavoidable as long as memory and computation are physically distant. Creating an electronic brain that fits in the volume of a mammalian brain is thus impossible with conventional technology.

The Defense Advanced Research Projects Agency (DARPA)-sponsored Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project is looking for hardware solutions that reduce the power consumed by electronic synapses and achieve a memory density of 10¹⁵ bits per square centimeter. One approach is based on memristive devices.
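To make the device's behavior concrete, here is a minimal sketch of the widely cited linear ion-drift memristor model (the first-order model HP Labs used to describe its device). The parameter values are illustrative only and do not describe any specific fabricated device.

```python
# Linear ion-drift memristor model (illustrative parameters, not a real device).
# The internal state w is the width of the doped region; resistance interpolates
# between R_ON (fully doped) and R_OFF (undoped) as w varies.
R_ON, R_OFF = 100.0, 16000.0   # ohms: fully doped / undoped resistance
D = 10e-9                      # m: device thickness
MU_V = 1e-14                   # m^2 s^-1 V^-1: dopant mobility

def step(w, i, dt):
    """Advance the doped-region width w under current i for time dt."""
    w = w + MU_V * R_ON / D * i * dt
    return min(max(w, 0.0), D)          # state is physically bounded by the device

def resistance(w):
    x = w / D
    return R_ON * x + R_OFF * (1.0 - x)

w = 0.1 * D
r_before = resistance(w)
for _ in range(1000):                   # positive current drives resistance down
    w = step(w, i=1e-4, dt=1e-6)
r_after_write = resistance(w)
for _ in range(1000):                   # zero current: the stored state persists
    w = step(w, i=0.0, dt=1e-6)
assert resistance(w) == r_after_write   # non-volatile: no power needed to hold state
```

The last loop is the property that matters for synapses: with no current applied, the state (and hence the stored "weight") does not change, so holding a memory costs no power.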
The memristor, initially theorized by University of California, Berkeley Professor Leon Chua1 and later discovered by HP Labs,2 has the unique property of remembering its stimulation history in its resistive state. It requires no power to maintain this memory, making it ideal for implementing the dense, low-power synapses needed by large-scale neural models. The challenge is to build a software platform able to exploit the memristor's capabilities. This platform, named Cog ex Machina3 (Cog), is being developed at Hewlett-Packard by Greg Snider. Cog abstracts away the underlying hardware and allocates computations to available processing resources such as CPU and GPU cores. It exposes a programming interface that enforces synchronous parallel processing of neural data encoded as multidimensional arrays (tensors).

Our Modular Neural Exploring Traveling Agent (MoNETA) project,4 supported by DARPA/SyNAPSE via a subcontract with HP, uses Cog to progressively implement complex, whole-brain systems able to leverage the power of memristive hardware that is yet to be designed. MoNETA is the brain of an animat, a neuromorphic agent that autonomously learns to perform complex behaviors in a virtual environment. It combines visual scene analysis, spatial navigation, and plasticity. The system is intended to replicate a rodent's learning to swim to a submerged platform in the Morris water maze task4 (see Figures 1a and 1b), a behavior that involves cooperation among several brain areas. The MoNETA brain will eventually implement many cortical and subcortical areas, allowing an animat or robot to engage with a virtual or real environment.

Figure 1. a) Morris water maze and b) Modular Neural Exploring Traveling Agent (MoNETA) virtual environment.

We prepared a proof of concept on a robotic platform (iRobot Create) controlled by a simplified Cog-based MoNETA brain (see Figure 2). The robot learned to avoid green objects and approach red ones based on associated reward values.
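The reward-association step just described can be sketched with a simple delta-rule learner. This is an illustrative stand-in, not the actual MoNETA circuitry: each object cue accumulates a learned value from observed rewards, and the sign of that value drives approach or avoidance.

```python
# Hedged sketch of cue-reward association (delta rule), not MoNETA's real model.
LEARNING_RATE = 0.2

def update_value(values, cue, reward):
    """Move the stored value for this cue toward the observed reward."""
    values[cue] += LEARNING_RATE * (reward - values[cue])

def action(values, cue):
    """Approach cues with positive learned value, avoid the rest."""
    return "approach" if values[cue] > 0 else "avoid"

values = {"red": 0.0, "green": 0.0}
for _ in range(20):                      # red objects rewarded, green penalized
    update_value(values, "red", reward=+1.0)
    update_value(values, "green", reward=-1.0)

assert action(values, "red") == "approach"
assert action(values, "green") == "avoid"
```

After repeated pairings the learned values converge toward the delivered rewards, so the robot's behavior flips from its initial default to the reward-appropriate response.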
The robot also learned not to revisit objects, even when they were attractive. This seemingly simple task involved modeling orientation towards a goal, navigation, object avoidance, sensory processing, motor control, and adaptive learning. The use of parallelizable computational threads and tensor data representations resulted in solutions similar to those found in biological brains, such as a layered architecture, parallel processing pathways (for example, what and where pathways), visual-image segmentation, and attentional drive (see Figure 3).

Figure 2. The iRobot Create platform.

Figure 3. Conceptual diagram of the iRobot Brain.

Cog is a scalable, powerful platform for neuromorphic computation that will soon make possible the implementation of brain models such as MoNETA, comparable in size, power, and behavioral complexity with biological brains. In the context of the SyNAPSE project, we will continue developing large-scale, multi-system neural models to be executed on high-density, low-power neuromorphic hardware. We will test these models in increasingly complex virtual environments as well as on robots, with the target of replicating classic experimental results from the rodent behavioral neuroscience literature.