Welcome to The Neuromorphic Engineer
Embodied cognition: the other morphology

The very term 'neuromorphic engineering' implies that the structural details of neural circuits cannot be abstracted away in digital simulations: in other words, morphology (the form and function of biological neurons) matters. Embodied cognition proclaims the same message:1 it is now recognized that an agent's physical body plays a major role in intelligent action, and therefore cannot be abstracted away.

One of the major challenges for both fields is to identify which physical aspects of the biological system under study, such as a neural circuit, support adaptation in the animal, and which are simply the result of historical accident or serve some purpose other than intelligent action. For example, when considering the physical morphology of animals, the color of their fur may allow them to evade predators, but it is less relevant to intelligent action than the tendon system that allows for complex locomotion. As has been argued, self-motion is one of the foundations of intelligent action in particular, and of intelligence in general.2

In the same way that an appreciation of the physical behavior of neurons is driving the creation of novel hardware in neuromorphic engineering, an appreciation of the role of the body in intelligent action is driving the development of novel robot hardware. In recent work3 we demonstrated an autonomous robot that exploits one of the fundamental properties of embodied agents: current actions influence future sensor values, an insight explored in more depth elsewhere.4 By moving and then recording the resulting sensor data, a robot can use such motor-sensor data pairs as training data to infer a model of its environment; the same data can also be used to create internal simulations of the robot's own morphology.

Figure 1. An example of an autonomous robot that integrates self-modeling and internal behavior generation.
The robot executes an action (A) and then creates a set of models to describe the result of that action (B). It then uses the models to find a new action (C) that, when executed, will reduce the uncertainty in the models. By alternating between modeling and testing, it eventually generates an accurate model, and uses it to internally optimize a behavior (D) before executing that behavior in reality (E, F). (Courtesy of AAAS/Science.)

We have shown that by creating such simulations, or self-models, a robot may rehearse potentially dangerous actions in simulation before attempting them in reality. In addition, if there is an unexpected change to the robot's morphology, such as body damage, the modeling process automatically restructures the self-models and again uses them to generate a novel controller that compensates for the inferred injury. This competency is not simply a result of the robot's internal cognitive algorithm, but of a close interaction between body, brain, and environment.

Figure 1 outlines the methodology: the robot continuously executes three computational components. The first component (Figure 1B) uses the motor-sensor data pairs generated by the physical robot and a stochastic optimization process to synthesize a population of self-models that reflect the robot's best estimate of its own morphology. At the outset of an experiment the scarcity of data admits a wide range of possible models (four such hypothetical models are shown in Figure 1B). The second component uses a second stochastic optimization process to search the space of possible actions the physical robot could perform: candidate actions are supplied to the current set of self-models, and the disagreement among the resulting motions of the self-models determines the quality of each action (Figure 1C).
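The idea of scoring actions by model disagreement can be illustrated with a minimal sketch. Here each self-model is reduced to a function that predicts a scalar sensor value from a scalar action, and disagreement is measured as the variance of the models' predictions; all names are illustrative, not taken from the original system, which evolves full physical simulations of the robot.

```python
# Sketch of disagreement-driven action selection (as in Figure 1C).
# Self-models are reduced to toy functions mapping an action to a
# predicted sensor value; the real system uses 3D physical simulations.
import statistics

def disagreement(models, action):
    """Variance of the models' predicted sensor readings for one action."""
    predictions = [m(action) for m in models]
    return statistics.pvariance(predictions)

def most_informative_action(models, candidate_actions):
    """Pick the action the current self-models disagree on most."""
    return max(candidate_actions, key=lambda a: disagreement(models, a))

# Four hypothetical self-models, each predicting sensation = gain * action.
models = [lambda a, k=k: k * a for k in (0.8, 1.0, 1.2, 1.5)]
actions = [0.0, 0.5, 1.0]
best = most_informative_action(models, actions)  # the largest action wins,
                                                 # since disagreement grows with it
```

Executing the most informative action, rather than a random one, means each physical trial yields the data point that best discriminates among the competing self-models.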
Once an action is found that induces sufficient disagreement among the models, it is executed by the physical robot (Figure 1A), generating a new motor-sensor data pair. The models are then re-optimized against this enlarged training set (Figure 1B). This process continues until no further disagreement can be induced among the models, or until some other termination criterion is reached. At that point the most accurate model is used by a third stochastic optimization process to search for a behavior, or sequence of actions, that allows the robot to perform some desired task (Figure 1D). In the work described here, that task was locomotion. Once the model indicates that a successful behavior has been found, the behavior is executed on the physical robot, allowing it to locomote (Figure 1E, F).

One of the principal limitations of the current work is that the self-models are explicit representations of the robot. In future work it would be exciting to introduce neuromorphic hardware into a robot so that these explicit models could be replaced with implicit ones, in which the dynamics of the robot, and of its interaction with the environment, are represented as dynamics within a neuromorphic circuit. This would be a first step toward reconciling two fields that each recognize morphology as a fundamental property of both biological and artificial intelligent agents.

http://www.cs.uvm.edu/~jbongard/
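The overall alternation between modeling and testing can be sketched in miniature. In this toy version the robot's hidden "morphology" is a single linear gain, candidate self-models are candidate gains, and each stochastic optimizer is replaced by a simple exhaustive search; every name here is a placeholder, and the original work instead evolved three-dimensional physical self-models.

```python
# Toy version of the modeling/testing loop (Figures 1A-1C), assuming the
# robot's unknown motor-to-sensor mapping is a linear gain. All names and
# simplifications are illustrative, not the original algorithm.

TRUE_GAIN = 1.3                       # hidden morphology parameter

def robot_execute(action):
    """The physical robot: action in, sensor reading out (Figure 1A)."""
    return TRUE_GAIN * action

def optimize_models(data, candidates):
    """Keep the candidate gains most consistent with the data (Figure 1B)."""
    if not data:
        return list(candidates)       # no data yet: all models are possible
    def error(k):
        return sum((k * a - s) ** 2 for a, s in data)
    return sorted(candidates, key=error)[:4]

def select_action(models, actions):
    """Choose the action the models disagree on most (Figure 1C)."""
    def spread(a):
        preds = [k * a for k in models]
        return max(preds) - min(preds)
    return max(actions, key=spread)

candidates = [0.5, 0.9, 1.1, 1.3, 1.7, 2.0]
actions = [0.1, 0.5, 1.0]
data = []
models = optimize_models(data, candidates)
for _ in range(3):                    # alternate testing and modeling
    a = select_action(models, actions)
    data.append((a, robot_execute(a)))
    models = optimize_models(data, candidates)
best = models[0]                      # most accurate self-model, ready for
                                      # behavior optimization (Figure 1D)
```

In the full system, `best` would then be handed to a third optimizer that searches for a locomotion behavior in simulation before executing it on the physical robot.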
Tell us what to cover!
If you'd like to write an article, or know of someone else who is doing relevant and interesting work, let us know. E-mail the editor to suggest the subject for the article and, if you're suggesting someone else's work, tell us their name, affiliation, and e-mail address.