Elegant insect solutions can inspire Artificial Intelligence (AI) for tiny robots
Flying insects perform marvellous feats such as landing on a flower in gusty wind conditions. Moreover, they do this with very little processing – honeybees, for instance, have only ~960,000 neurons. Biologists have shown that flying insects rely heavily on optical flow for flight control. Optical flow captures the way in which objects move over an animal’s (or robot’s) image. For example, when we humans look out of a train, nearby trees will move very quickly over our image (large optical flow), while a mountain in the distance will move slowly (small optical flow).
Insects employ elegant optical flow strategies to solve difficult tasks. A telling example is that honeybees keep the optical flow constant in order to perform smooth landings. Since optical flow captures the ratio between the honeybee’s velocity and its distance to the landing surface (v/d), keeping the flow constant means that the honeybee goes slower and slower as it approaches the landing surface. This strategy is elegant, since it allows the honeybee to map optical flow directly to control commands, requiring very little processing. Moreover, it foregoes the need for extra sensors to measure distance, as are often used in human-engineered solutions.
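For the mathematically inclined, the slowing-down follows directly from the definitions above: write the distance to the surface as d(t) and the descent velocity as v(t); holding the flow at a constant setpoint c then gives

```latex
\frac{v(t)}{d(t)} = c, \qquad v(t) = -\dot{d}(t)
\;\Rightarrow\; \dot{d}(t) = -c\,d(t)
\;\Rightarrow\; d(t) = d(0)\,e^{-c t},
```

so distance and velocity both decay exponentially, and the honeybee (or drone) approaches touchdown with vanishing speed.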
I have always been fascinated by such elegant solutions, mainly because I would like to harness similar principles for creating tiny, autonomous robots. This is the reason that I started to study optical flow control in 2010. At that time, I was working for the European Space Agency, which was interested in creating ultra-lightweight, autonomous landing systems. Little did I know that my investigation into optical flow landing would reveal deeper, hidden layers to the above story.
Robotic experiments reveal fundamental problem of optical flow control
Although the strategy of keeping the optical flow constant sounds simple enough, it turns out that executing that strategy with a control law is no easy matter. When I implemented a commonly used control law to keep the optical flow constant during landing, the drone would always start to oscillate close to the landing surface, actually never landing at all!
It soon became apparent that directly using optical flow for control has a fundamental problem [1]. If the distance to the objects in view becomes very small, the optical flow becomes huge (think of the formula v/d with d approaching 0). At such a small distance, a tiny change in velocity, e.g., from positive to negative, can make the optical flow signal swing from large positive to large negative values. This makes the optical flow control loop unstable. Since an optical flow landing has the drone continuously reduce its distance to the landing surface, there will always be a point at which the control becomes unstable…
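To make this concrete, below is a minimal simulation of the effect. This is my own toy model, not the controller from [1]; the time step, gain, noise level, and setpoint are arbitrary illustrative choices.

```python
import numpy as np

dt = 0.05        # control period [s] (assumed value)
P = 20.0         # proportional gain on the flow error (assumed value)
flow_sp = 0.5    # optical-flow setpoint v/d [1/s]
rng = np.random.default_rng(0)

d, v = 10.0, 0.0  # height [m] and descent velocity [m/s] (positive = down)
for k in range(400):
    flow = v / d + rng.normal(0.0, 0.01)  # noisy flow measurement
    v += P * (flow_sp - flow) * dt        # P-control on the flow error
    d -= v * dt
    if d < 0.05:                          # just above the ground: stop
        break
    if v < 0:  # the drone moves up again: a self-induced oscillation
        print(f"oscillation sets in at d = {d:.2f} m (t = {k * dt:.1f} s)")
        break
```

In this toy model the loop goes unstable once d drops below roughly P·dt/2 (here ~0.5 m): the very same gain that is perfectly stable at 10 m is hopeless close to the surface.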
Perceiving distance by means of instabilities is important for successful optical flow control
At first, this realization seemed like quite a blow to the success of optical flow control. However, in [1] I performed a theoretical analysis of optical flow control, which showed that the distance at which instabilities start to arise is linearly related to the control gain used in the control loop. I then realized that if the drone was able to detect that its control was getting unstable, it would be able to perceive distance!
Why would it be useful for drones to perceive distances? Well, firstly, detecting self-induced oscillations to determine distance is very useful for landing. As soon as the drone starts to oscillate, it can detect this and trigger a final landing procedure. Secondly, the drone perceives distance not in terms of meters but in terms of a control gain that – when oscillating – is slightly too high. This means that the detection of self-induced oscillations is the key to finding the right control gain for high-performance optical flow control. We investigated this in [2], in which we had the drone first hover and then increase its gain till it oscillates. In this way, it can identify the optimal control gain for optical flow control. Subsequently, the drone can land quickly and smoothly while continuously lowering the gain.
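In pseudo-code, the tuning step could look like the sketch below. This is my own reconstruction, not the method from [2]: read_flow and set_gain are hypothetical interfaces to the drone, and the variance-based oscillation detector with its thresholds is an illustrative choice.

```python
import numpy as np

def find_oscillation_gain(read_flow, set_gain, dt=0.05, ramp=0.2,
                          window=40, var_threshold=0.01, max_gain=50.0):
    """While hovering, slowly raise the flow-control gain until the
    windowed variance of the flow signal betrays self-induced
    oscillations; return a gain slightly below that point."""
    gain, history = 0.5, []
    while gain < max_gain:
        set_gain(gain)
        history.append(read_flow())   # one sample per control period
        history = history[-window:]
        if len(history) == window and np.var(history) > var_threshold:
            return 0.9 * gain         # back off from the oscillating gain
        gain += ramp * dt             # ramp slowly so oscillations can build
    raise RuntimeError("no oscillation detected below max_gain")
```

The returned gain doubles as a distance measurement: in the linear theory of [1], the gain at which oscillations set in is proportional to the current distance.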
Beyond oscillations and optical flow
So it turned out that distance perception is highly relevant for successful optical flow control – albeit in the form of detecting control instabilities. Still, we couldn’t shake the feeling that the frequent need for oscillations would be a disadvantage for drones and insects alike.
In terms of landing, a drone would each time have to find the right gain for starting the landing, which is time-consuming and requires many oscillations. Thinking of actual honeybees, we know that they perceive more than optical flow alone; they can recognize complex visual patterns, colors, and textures when identifying food sources. Hence, it seemed not unthinkable to us that after landing for the 10th time on the same kind of flower, a honeybee would actually be able to recognize the flower, evaluate its image size, and immediately set its control gains accordingly.
This is exactly what we investigated in [3]: we devised a self-supervised learning process in which the drone learns to map the visual appearance of its environment to the distances it perceives in terms of control gains. For landing, this meant that the drone first landed a few times while oscillating, learning to associate the landing surface’s appearance with the oscillating control gain. After learning, the drone can immediately select the optimal control gain based on the landing surface’s appearance – foregoing the need to oscillate. This led to faster, smoother optical flow landings than ever before.
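A minimal sketch of this idea – assuming a trivial histogram descriptor and a linear least-squares fit, whereas [3] uses a learned visual model – could look as follows:

```python
import numpy as np

def features(img):
    """Hypothetical appearance descriptor: a coarse intensity histogram."""
    hist, _ = np.histogram(img, bins=16, range=(0, 255), density=True)
    return hist

X, y = [], []  # training pairs gathered during oscillating landings

def record_landing(img, oscillating_gain):
    """Self-supervised label: the gain at which oscillations set in."""
    X.append(features(img))
    y.append(oscillating_gain)

def predict_gain(img, ridge=1e-3):
    """Fit appearance -> gain on the recorded pairs; predict for a new view."""
    A, b = np.asarray(X), np.asarray(y)
    w = np.linalg.solve(A.T @ A + ridge * np.eye(A.shape[1]), A.T @ b)
    return float(features(img) @ w)
```

After a handful of recorded landings, predict_gain gives a usable gain from a single camera image, without any oscillation.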
Moreover, in [3] we developed a variant of the same learning process, this time aimed at perceiving dense distances when looking ahead. In this case, the drone had to fly forward while avoiding obstacles. During learning, the drone tried to keep the vertical optical flow at zero (in order to maintain the same height). When approaching an obstacle, this control would lead to oscillations. When oscillating, the dense optical flow could be used to obtain a dense distance map of the 3D environment. After learning, this dense distance map – based on visual appearance alone – enables the drone to detect obstacles. This approach solved a second long-standing problem in optical flow control, namely that obstacle detection in the flight direction is very difficult: the flow is so small there that noise starts to dominate. In contrast, distance perception based on visual appearance works equally well in any direction. The proposed self-supervised learning process enabled drones to fly faster and more safely in the presence of obstacles.
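The dense variant follows the same recipe as the sketch above, but per image patch: during an oscillation every patch receives its own distance label from the dense optical flow, and a regressor from patch appearance to distance then turns a single still image into a distance map. Again a sketch under strong assumptions (hand-crafted patch statistics instead of the learned model of [3]):

```python
import numpy as np

PATCH = 16  # patch size in pixels (illustrative)

def patch_features(img):
    """Describe each PATCH x PATCH block by its mean and standard deviation
    (a stand-in for a learned per-patch descriptor)."""
    h, w = img.shape[0] // PATCH, img.shape[1] // PATCH
    blocks = img[:h * PATCH, :w * PATCH].reshape(h, PATCH, w, PATCH)
    feats = np.stack([blocks.mean(axis=(1, 3)), blocks.std(axis=(1, 3))], axis=-1)
    return feats.reshape(-1, 2)

def fit_distance_model(patch_feats, patch_distances, ridge=1e-3):
    """Linear map from patch features to oscillation-derived distance labels."""
    A = np.c_[patch_feats, np.ones(len(patch_feats))]  # add a bias column
    return np.linalg.solve(A.T @ A + ridge * np.eye(A.shape[1]),
                           A.T @ patch_distances)

def dense_distance_map(img, w):
    """One distance estimate per patch – equally reliable in every direction,
    including straight ahead where optical flow itself is uninformative."""
    feats = patch_features(img)
    return np.c_[feats, np.ones(len(feats))] @ w
```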
Biological relevance
This research started out with observations from biology on how insects use optical flow. There is of course a big difference between studying an animal and a robot. Namely, the animal is already a fully working “system”, so if your hypothesis on its inner workings has some blanks, this will not affect the performance of the animal. In order to make a robot work, however, the researcher has to fill in all the blanks. This makes robots such interesting “animal models”. Implementing a biological hypothesis has the potential to reveal blind spots pertaining to the inner workings of animals.
The oscillation theory I proposed in [1] forms a parsimonious explanation for the following tough questions on optical flow landings by insects:
- How does a honeybee know that it has arrived at the landing spot if optical flow itself contains no information on distance? (Honeybees are known to hover consistently at a given distance from the landing platform.) – Answer of the theory: by detecting oncoming control instability.
- Why do honeybees always come to hover just before landing? – Answer of the theory: this is not an active choice, but a side-effect of oncoming control instability. The oscillations in optical flow naturally cause the honeybee to hover or even oscillate.
Additionally, the work in [2] suggests that honeybees may use oscillations to tune their optical flow control gains. Finally, the self-supervised learning process I proposed with my co-authors in [3] provides an interesting hypothesis on how honeybees can improve their flight performance over their lifetime. Typically, such improvements are attributed to reinforcement learning. However, reinforcement learning requires many trials for learning (typically in the thousands), while self-supervised learning requires only a few. Indeed, in our landing experiments in [3], we needed only ~3 landings to learn a new landing surface. This is much closer to the kind of quick learning observed in honeybees.
Currently, I am working with biologists in order to verify the hypotheses that derive from this work.
Below, I list our articles on this topic so far, with a short summary of each.
1. Monocular distance estimation with optical flow maneuvers and efference copies: a stability-based strategy. GCHE de Croon. Bioinspiration & Biomimetics 11 (1), 016004 (2016).
In this article, I put forward the idea that optical flow control always leads to instability when the distance becomes small enough. I made two main contributions.
Firstly, I proved theoretically that the commonly used discretized, linearized P-control system becomes unstable at a distance that is linearly related to the control gain P. This was confirmed by both simulated and real-world drone experiments: despite these systems being nonlinear and more complex than the theoretical model, the relation between the gain and the distance at which instabilities started to arise was still approximately linear.
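A stripped-down, discrete-time version of the argument (far simpler than the analysis in the article, but showing the same scaling): control the descent acceleration with gain P on the flow error, freeze the distance at d̄, and follow a small velocity perturbation δv through one control period Δt:

```latex
\delta v_{k+1} = \Big(1 - \frac{P\,\Delta t}{\bar{d}}\Big)\,\delta v_k,
\qquad
\Big|1 - \frac{P\,\Delta t}{\bar{d}}\Big| > 1
\;\Leftrightarrow\;
\bar{d} < \frac{P\,\Delta t}{2},
```

so below a height that is linear in the gain P, the perturbation flips sign and grows each step – precisely the self-induced oscillation.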
Secondly, I came up with the idea that the drone could detect the self-induced oscillations that precede control instability. In this manner, it would be able to perceive distance by means of active optical flow control! This was shown to work for an optical flow landing with a fixed gain, which is useful for triggering the final landing. It was also shown in hover, in which case the drone tried to keep the optical flow at zero (to achieve hover), while adapting the control gain P to fly at the “edge of oscillation”.
2. Adaptive gain control strategy for constant optical flow divergence landing. HW Ho, GCHE de Croon, E Van Kampen, QP Chu, and M Mulder. IEEE Transactions on Robotics 34 (2), 508–516 (2018).
One may wonder why it would actually be useful for a drone (or insect, for that matter) to know distance. In this article, we showed that the utility lies in high-performance optical flow control. Whereas a distance expressed in meters may have little relevance to a drone, a distance expressed in a control gain that is slightly too high (leading to oscillations) is highly useful: setting the control gain slightly lower yields a high-performance, stable gain for the optical flow control loop!
Hence, we proposed to start an optical flow landing by first hovering while increasing the gain till the point of oscillation. Subsequently, we would slightly lower the gain, while starting the optical flow landing. Of course, as was evident from the first article, the drone would have to lower the gain as it was going down. For this we used the knowledge that a well-executed constant optical flow divergence landing has the height decreasing exponentially. We had the gain follow the same curve over time, so that the drone could perform fast, smooth landings, and know when it had arrived (the gain value approaching zero at that point).
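Combining the two ingredients from above: a well-tracked constant-divergence landing has the height decaying exponentially, and the oscillation threshold is proportional to the distance, so the gain can simply be lowered along the same exponential:

```latex
d(t) = d_0\,e^{-c t}
\quad\Rightarrow\quad
P(t) = P_0\,e^{-c t},
```

with P_0 the gain found during the initial hover; P(t) approaching zero then doubles as the touchdown signal.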
3. Enhancing optical-flow-based control by learning visual appearance cues for flying robots. GCHE de Croon, C De Wagter, and T Seidl. Nature Machine Intelligence 3 (1) (2021).
The previous article left the impression that oscillations would always remain necessary for high-performance optical flow control. As explained above, it seemed not unthinkable to us that honeybees sidestep this: after landing for the 10th time on the same kind of flower, a honeybee may well recognize the flower, evaluate its image size, and immediately set its control gains accordingly.
This is exactly what we tested in this article. We had drones perform optical flow control in an environment. Initially, this requires the drone to oscillate. Each time the drone oscillated, though, it would look at its environment and associate the visual appearance of its environment with the perceived distances. After learning, the drones were able to control their flight much better. For optical flow landing, this allowed drones to immediately start a fast, smooth optical flow landing. For obstacle avoidance, it allowed drones to recognize obstacles straight ahead and speed up.
4. Stability-based scale estimation for monocular SLAM. SH Lee and GCHE de Croon. IEEE Robotics and Automation Letters 3 (2), 780–787 (2018).
In this article, we showed that the theory from article 1 is also relevant to monocular SLAM. In particular, we used the adaptation of the control gain to scale the map in SLAM. Whereas in optical flow control the optimal gain has to be found continuously, SLAM keeps track of world points in sight, which means that the drone needs to oscillate much less frequently (in the ideal case only once). The results of our scaling method were rather accurate, comparable to those of methods that use additional sensors for scaling. Interestingly, our scaling was less accurate when objects were far away, as instability arises sooner in such scenes. In our eyes, this actually means that monocular SLAM systems should scale their gains adaptively, depending on scene depth. For control, it is better to scale the map in our manner; for accurate metric reconstruction, additional sensors are to be preferred.
5. Optical-flow-based stabilization of micro air vehicles without scaling sensors. TI Braber, C De Wagter, GCHE de Croon, and R Babuska. 10th International Micro-Air Vehicles Conference (2018).
In this article, we generalized the results from articles 1 and 2 to three-axis control. A drone would take off, adjust the gain until it oscillated, and then set the control gains for all axes accordingly. The drone was able to hover and fly without scaling sensors.
6. Distance and velocity estimation using optical flow from a monocular camera. HW Ho, GCHE de Croon, and Q Chu. International Journal of Micro Air Vehicles 9 (3), 198–208 (2017).
This article is actually an outlier, since it does not use the oscillation theory from article 1 to perceive distances. Instead, it uses a model of the drone’s thrust to scale optical flow. This worked very well, even in outdoor conditions.
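The underlying idea can be sketched as follows (my paraphrase; the estimator in the article is more elaborate). With optical flow ω = v/d, a descending drone has ḋ = −v, and the acceleration a = v̇ is predicted by the thrust model, so

```latex
\dot{\omega} = \frac{\dot{v}\,d - v\,\dot{d}}{d^2} = \frac{a}{d} + \omega^2
\quad\Rightarrow\quad
d = \frac{a}{\dot{\omega} - \omega^2}, \qquad v = \omega\,d,
```

turning the scaleless flow signal into metric distance and velocity from a single known acceleration.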