Flight control in a flapping-wing fruit fly simulator

Flying a small autonomous aerial robot across the room without hitting a wall or lampshade is an example of an ‘information-rich’ control problem. Control-system design and analysis tools are well suited to high-speed dynamics and small numbers of sensors. An example is a robot arm, which might need only a single angle sensor in each joint. But when the task is both dynamic and data-intensive, like our fast-flying robot that might navigate using a video stream from a camera, the solution is not as clear. Technologies like radar, ladar, or the global positioning system (GPS) can help with avoiding obstacles, but they are too heavy or use too much power, and GPS does not even work indoors. The robot must carry lightweight, efficient sensors. What remains is a suite of sensors quite similar to what flies already use: cameras for vision (eyes); rate gyros (halteres, the beating vestigial wings that sense rotation); anemometers for wind speed (antennae); and perhaps a microphone or a smell sensor (also the antennae). We know flies primarily use vision for navigation (most of their brains are devoted to that task), yet they are still quick and agile, so it seems reasonable to find out how they have solved the problem.

Figure 1. The simulator environment with the fly (top) and the world as seen by the faceted view of the fly (bottom).

Much has been learned about fly vision, aerodynamics, and so on by studying the separate systems in isolation. But understanding how they all work together requires being able to close the feedback loop. Will Dickson and Andrew Straw have created a fruit fly simulator that models the fly's aerodynamics and vision in a virtual world so that controllers can be hypothesized and tested.1 The simulator's flapping-wing kinematics are based on data from high-speed video sequences of flies in flight.2 Aerodynamic forces and moments are verified against a dynamically scaled robotic model in a tow tank. Vision is rendered in 3D and projected onto a faceted visual sphere like the fly's eye (Figure 1). Here we stabilize forward flight using a controller that takes optic-flow patterns as input and produces control commands that are perturbations of wing motions from the baseline kinematics.3 Our approach is traditional for controls: a sensor estimates the state of the system, an error is calculated, and a control signal is produced.

Figure 2. The response of the elementary motion detectors (EMDs) to the forward (top) and vertical (bottom) motion of the fly. Each arrow represents the response of an EMD, drawn along the line connecting adjacent visual elements. The front and rear hemispheres appear different in the forward-velocity case because the arrows generally point upward versus downward.

Emulating what we know about the fly's visual processing, velocity estimation is performed using Hassenstein-Reichardt elementary motion detectors (EMDs) and experience-derived matched filters. Behavioral studies on flies suggest that EMDs, which perform a delay-and-correlate operation between pairs of visual elements, measure optic flow.4 The velocity vector is estimated by correlating the current EMD response with the EMD responses to known velocity vectors. These known response fields, called matched filters, are shown in Figure 2 and resemble the optic-flow sensitivity patterns observed in the tangential cells of flies.5
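The delay-and-correlate operation and the matched-filter projection can be sketched in a few lines of Python. This is only an illustration of the idea described above, not code from the simulator; the first-order low-pass delay, the time constant, the normalization, and all names are assumptions of the sketch.

import numpy as np

def lowpass(signal, tau, dt):
    """First-order low-pass filter, used here as the EMD delay element (assumed form)."""
    alpha = dt / (tau + dt)
    out = np.zeros_like(signal, dtype=float)
    for i in range(1, len(signal)):
        out[i] = out[i - 1] + alpha * (signal[i] - out[i - 1])
    return out

def emd_response(s_a, s_b, tau=0.035, dt=0.001):
    """Hassenstein-Reichardt correlator for one pair of adjacent visual
    elements: delay each arm, multiply it with the undelayed neighbour, and
    subtract the two half-detectors to give a direction-selective output."""
    return lowpass(s_a, tau, dt) * s_b - lowpass(s_b, tau, dt) * s_a

def matched_filter_estimate(emd_field, templates):
    """Estimate velocity components by projecting the instantaneous EMD
    response field onto stored templates, i.e. the EMD responses to known
    velocity vectors (the matched filters of Figure 2)."""
    return {name: float(np.dot(emd_field, tpl) / np.dot(tpl, tpl))
            for name, tpl in templates.items()}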
We focus on controlling the fly's longitudinal motion: forward speed, vertical speed and position, pitch angle, and pitch rate. Pitch torque is induced by changing the mean stroke position, and thrust is varied by changing the stroke frequency. Forward motion is induced by pitching forward, like a helicopter. Wing forces and moments are averaged over a complete wing stroke and linearized around an operating point of 0.25 m/s. A fast inner loop controls the pitch angle using a proportional-derivative (PD) controller, and an outer integral controller regulates forward velocity. Implemented in the simulator, the fly's response oscillates briefly before settling near the commanded steady-state forward velocity (see Figure 3), as desired.

Figure 3. The step response of the fly's controller oscillates for about a second before stabilizing near the desired forward velocity.
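The cascaded loop just described can be written compactly. The sketch below is a schematic of that structure rather than the simulator's implementation: the gains, time step, and variable names are illustrative assumptions, and the velocity estimate would come from the matched-filter output sketched earlier.

class LongitudinalController:
    """Outer integral loop on forward velocity commands a pitch angle;
    a fast inner proportional-derivative (PD) loop tracks that pitch by
    shifting the mean stroke position (which produces pitch torque)."""

    def __init__(self, ki_vel=0.8, kp_pitch=2.0, kd_pitch=0.3, dt=0.001):
        self.ki_vel = ki_vel        # integral gain, velocity loop (assumed value)
        self.kp_pitch = kp_pitch    # proportional gain, pitch loop (assumed value)
        self.kd_pitch = kd_pitch    # derivative gain, pitch loop (assumed value)
        self.dt = dt
        self.vel_error_integral = 0.0

    def update(self, v_cmd, v_est, pitch, pitch_rate):
        # Outer loop: integrate the forward-velocity error into a pitch command;
        # pitching forward, like a helicopter, drives the fly forward.
        self.vel_error_integral += (v_cmd - v_est) * self.dt
        pitch_cmd = self.ki_vel * self.vel_error_integral

        # Inner loop: PD control of the pitch angle, acting through a shift of
        # the mean stroke position away from the baseline kinematics.
        delta_stroke_position = (self.kp_pitch * (pitch_cmd - pitch)
                                 - self.kd_pitch * pitch_rate)
        return delta_stroke_position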
Controlling insect-sized robots through unknown environments presents a number of engineering challenges that will require new technology and finesse. Among these is how to extract useful information from the visual stream to control the rapid dynamics of the robot in a computationally and energy-efficient manner. Flies are case studies in how this can be accomplished, and they use the same sensors that might be technologically possible on flying robots. We have designed a controller that emulates what is known about flies and makes guesses where there are gaps. The controller uses internal estimates of tangible quantities such as ‘forward velocity,’ but the fly may not encode things in this way, and future work will attempt to determine the validity of this assumption. Our controller is an example of how simulation enables sophisticated hypotheses to be generated and compared with actual fly behavior. Ultimately, we expect the effort to lead to an understanding of how the fly implements its controller on the neuronal substrate, and to an implementation as a low-power, parallel circuit on a flying robot.

http://web.mit.edu/minster/www/