Projects

DelFly

DelFly in Cyberzoo

The DelFly is a bio-inspired drone that flies by flapping its wings. Flapping wing drones are currently much less well understood than rotorcraft or fixed wing drones. A main reason for this is that the aerodynamic principles underlying the generation of lift and thrust are very different: flapping wing drones rely on unsteady aerodynamic effects in order to fly. In the DelFly project, we study various aspects of flapping wing flight: the unsteady aerodynamics, a control model that maps control commands to the resulting motion dynamics (system identification), different designs and materials, and an artificial intelligence that will make these lightweight drones fly completely by themselves.


Selected publications:
(2014), De Wagter, C., Tijmons, S., Remes, B.D.W., and de Croon, G.C.H.E., “Autonomous Flight of a 20-gram Flapping Wing MAV with a 4-gram Onboard Stereo Vision System”, at the 2014 IEEE International Conference on Robotics and Automation (ICRA 2014). (draft – pdf)
(2012), de Croon, G.C.H.E., Groen, M.A., De Wagter, C., Remes, B.D.W., Ruijsink, R., and van Oudheusden, B.W., “Design, Aerodynamics, and Autonomy of the DelFly”, in Bioinspiration and Biomimetics, Volume 7, Issue 2. (draft – pdf) (Bibtex)

Swarms

Swarming

The main objective of this project is to study how a swarm of small “pocket” drones can explore an unstructured, indoor environment (e.g., disaster areas). The pocket drones are small quadrotors, with a weight of ~20 grams and a diameter of ~10 cm. Their low weight and relatively low speed make the pocket drones inherently safe for people, while their small size allows them to navigate even in very narrow indoor spaces.

The fundamental scientific challenge in this project derives from the small size of the pocket drones. It entails strict limitations on onboard energy, sensing, and processing, ruling out state-of-the-art approaches to autonomous control and navigation. Instead, an efficient, nature-inspired solution is required that covers: (1) low-level navigation, such as obstacle avoidance or flying through narrow corridors, (2) high-level navigation to reach places of interest, and (3) coordination with the other drones to explore the environment efficiently.

This project is financed by the Netherlands Organisation for Scientific Research (NWO). Please have a look at the project website for the latest news.

Optical Flow

Optical Flow Control

Roboticists look with envy at small, flying insects, such as honeybees or fruit flies. These small animals perform marvelous feats such as landing on a flower in gusty wind conditions. Biologists have shown that flying insects rely heavily on optical flow for such tasks. Optical flow captures the way in which objects move over an animal’s (or robot’s) image. For instance, when looking out of a train, trees close by will move very quickly over the image (large optical flow), while a mountain in the distance will move slowly (small optical flow). Insects are believed to follow very straightforward optical flow control strategies. For instance, when landing they keep the flow and the expansion of the flow constant, so that they gradually slow down as they descend. Successfully introducing similar strategies in spacecraft or drones holds huge potential, since optical flow can be measured with tiny, energy-efficient sensors and the straightforward control strategies hardly require any computation.

My goal is not just to transfer findings from biology to robotics, but also to generate new hypotheses on how insects navigate in their environment. In particular, I investigate the algorithms involved in:
Optical flow determination: on a “normal” processor available on larger drones (500g), a standard optical flow algorithm can be run (e.g., FAST corner detection plus Lucas-Kanade optical flow tracking); a minimal sketch of such a pipeline follows this list. On the tiny processors aboard our pocket drones (50g), new algorithms have to be developed in order to reach a satisfactory update frequency.
Flow field interpretation: Given the optical flow in the image, how can the robot extract useful information such as the flow divergence (expansion), the slope of the surface it is flying over, or the flatness of the terrain beneath the robot? Different approaches are possible here, with different trade-offs between accuracy and processing cost.
Control: I analyze the nonlinear optical flow control laws theoretically in order to find better control strategies.
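To make the first item more concrete, the snippet below sketches such a pipeline with OpenCV: FAST corners are detected in the previous camera frame and tracked into the current frame with pyramidal Lucas-Kanade. It is only a minimal illustration; the function name and parameter values are placeholders and not taken from our onboard software.

```python
# Minimal FAST + Lucas-Kanade optical flow pipeline (illustrative only).
import cv2
import numpy as np

def track_flow(prev_gray, curr_gray, max_corners=100):
    """Return (points, flow_vectors) tracked from prev_gray to curr_gray."""
    detector = cv2.FastFeatureDetector_create(threshold=20)
    keypoints = detector.detect(prev_gray)
    if not keypoints:
        return np.empty((0, 2)), np.empty((0, 2))
    # Keep the strongest corners and convert to the shape calcOpticalFlowPyrLK expects.
    keypoints = sorted(keypoints, key=lambda k: k.response, reverse=True)[:max_corners]
    prev_pts = cv2.KeyPoint_convert(keypoints).reshape(-1, 1, 2).astype(np.float32)
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, prev_pts, None)
    ok = status.ravel() == 1
    prev_ok = prev_pts[ok].reshape(-1, 2)
    flow = next_pts[ok].reshape(-1, 2) - prev_ok
    return prev_ok, flow
```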

A central belief regarding optical flow control, both in biology and in robotics, is that it does not require the robot to know its height. The reason for this belief is that optical flow as a visual cue does not provide information on height itself, but on the ratio of velocity to height. In contrast, I argue that height still plays an important role in optical flow control. Recently, I have analyzed constant optical flow divergence landings and found that robots that do not change their “control gains” (the strength of their reactions) will always start to oscillate at a specific height above the landing surface. This self-induced control instability seems troublesome at first sight, but it can actually be detected by the robot. It turns out that this allows the robot to know its height! What is more, for high-performance optical flow control, the robot should adapt its control gains to its height all along the landing. This finding is not only relevant to robots, but also to biology, generating novel hypotheses on how flying insects perform optical flow control. More information can be found here.
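To see the mechanism at work, the toy simulation below regulates the optical flow divergence (vertical velocity divided by height) to a constant setpoint with a fixed-gain controller and a small sensing delay. All numbers, and the simple delay model itself, are illustrative assumptions and not the setup analyzed in the publications below; they merely show that the effective loop gain grows as the height shrinks.

```python
# Toy vertical-landing simulation with constant-divergence control.
# Divergence D = v / h is what a downward-looking optical flow sensor provides;
# the controller regulates D to a constant setpoint. Parameters are illustrative.
h, v = 10.0, 0.0                   # height [m], descent velocity [m/s]
D_ref, K = 0.3, 5.0                # divergence setpoint [1/s], fixed control gain
dt, delay_steps = 0.02, 3          # 50 Hz control loop, small sensing delay
obs_buffer = [0.0] * delay_steps   # delayed divergence observations

for _ in range(2000):
    D = v / max(h, 1e-3)           # true divergence
    obs_buffer.append(D)
    D_obs = obs_buffer.pop(0)      # controller only sees a delayed divergence
    a = K * (D_ref - D_obs)        # acceleration command from a simple P-controller
    v += a * dt
    h -= v * dt
    if h <= 0.05:
        break
```

High above the surface the loop is well damped, but as h shrinks the same gain K produces ever stronger reactions relative to the remaining height, and with the delay the descent eventually starts to oscillate; scaling the gain down with height keeps the response uniform during the landing.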

Selected publications:
(2016), de Croon, G.C.H.E., “Monocular distance estimation with optical flow maneuvers and efference copies: a stability-based strategy”, in Bioinspiration and Biomimetics, vol. 11, number 1. (pdf)
(2015), de Croon, G.C.H.E., Alazard, D., Izzo, D., “Controlling spacecraft landings with constantly and exponentially decreasing time-to-contact”, IEEE Transactions on Aerospace and Electronic Systems, April 2015, 51(2), pages 1241 – 1252, (original).
(2013), de Croon, G.C.H.E., and Ho, H.W., and De Wagter, C., and van Kampen, E., and Remes B., and Chu, Q.P., “Optic-flow based slope estimation for autonomous landing”, in the International Journal of Micro Air Vehicles, Volume 5, Number 4, pages 287 – 297. (draft – pdf) (Bibtex)

Self-supervised learning

Self-learning robot on the International Space Station

SPHERES robot navigating with self-supervised learning

Learning is very important for animals, and it is commonly accepted that it will be very important for future robots as well. However, most current autonomous robots are completely pre-programmed. There are multiple reasons for this. For example, reinforcement learning methods work by trial and error, using the reward signals from the different trials as learning signals. The problem is that learning typically requires many trials (on the order of thousands), something which is undesirable on most real robots. Robots such as the DelFly would certainly not be able to learn in this way. A different learning method is imitation learning, in which a robot typically learns from how a human performs a task. This in turn requires quite some effort on the part of the human supervisor. Wouldn’t it be great if a robot could learn completely by itself?

In self-supervised learning (SSL), a robot teaches itself how to do something it already knows. This sounds strange of course, because why would a robot want to learn something it already knows? An example may help here. The picture shows the SPHERES VERTIGO robot, which is equipped with a stereo vision system (two cameras) with which it can see distances. The robot uses the stereo vision distances to teach itself to also see distances from a single still image. After learning, it can then navigate around the International Space Station with one camera – just in case things go really wrong. We performed the experiment with the SPHERES robot together with ESA, MIT, and NASA on the International Space Station in 2015, making it the first-ever learning robot in space!
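The sketch below captures the idea in its simplest form: distances obtained from the stereo pipeline serve as free training labels for a model that only sees single-image features. The feature extractor and regressor are deliberately simple placeholders and are not the method used on the SPHERES robot; they only illustrate the self-supervised training loop.

```python
# Conceptual sketch of self-supervised learning: stereo distances act as the
# supervisory signal for a monocular distance estimator (illustrative only).
import numpy as np
from sklearn.linear_model import Ridge

def monocular_features(image):
    """Toy appearance features from a single grayscale image (texture statistics)."""
    return np.array([image.mean(), image.std(),
                     np.abs(np.diff(image, axis=0)).mean(),
                     np.abs(np.diff(image, axis=1)).mean()])

def train_ssl(left_images, stereo_distances):
    """Stereo distances are the labels; no human labelling is needed."""
    X = np.stack([monocular_features(img) for img in left_images])
    return Ridge(alpha=1.0).fit(X, stereo_distances)

# After training, the robot can fall back to a single camera:
# distance = model.predict(monocular_features(new_image)[None, :])
```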

Over the last few years, I have worked on several case studies showing how SSL can extend a robot’s capabilities without risky trial-and-error learning. Learning is typically very fast, as it is supervised and can build on ample amounts of data coming from the robot’s sensors. This is especially important for data-hungry methods such as deep neural networks.

Selected publications:

(2016), van Hecke, K., de Croon, G.C.H.E., Hennes, D., Setterfield, T.P., Saenz-Otero, A., and Izzo, D., “Self-supervised learning as an enabling technology for future space exploration robots: ISS experiments”, at the International Astronautical Congress (IAC 2016).

(2016), Lamers, K., Tijmons, S., De Wagter, C., and de Croon, G.C.H.E., “Self-Supervised Monocular Distance Learning on a Lightweight Micro Air Vehicle”, at IROS 2016.

(2015), Ho, H.W., De Wagter, C., Remes, B.D.W., and de Croon, G.C.H.E., “Optical flow for self-supervised learning of obstacle appearance”, at the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2015).


Evolutionary Robotics


Small animals like insects perform complex tasks by linking together simple behaviors in a smart manner. For instance, male moths are able to find female moths by means of their odor. In turbulent air, this odor source finding problem is extremely challenging. The male moth successfully finds the female by (roughly) employing the following straightforward strategy. If it does not smell the female’s scent, it performs a “casting” maneuver: it flies to and fro, perpendicular to the wind. If it does capture the scent, the moth flies upwind. It does so until it loses the scent, after which it starts casting again.
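Written out as a controller, this strategy is essentially a two-state behavior, as in the minimal sketch below. The sensor and actuation interfaces are hypothetical placeholders; only the surge/cast logic follows the description above.

```python
# Minimal two-state odor-search behavior: surge upwind when the odor is
# detected, cast crosswind when it is lost (interfaces are placeholders).
import math

def odor_search_step(odor_detected, wind_direction, t):
    """Return (behavior, heading) for one control step.

    odor_detected  -- bool, output of an odor sensor
    wind_direction -- heading the wind blows towards [rad]
    t              -- time [s], used to alternate the casting direction
    """
    if odor_detected:
        # In the plume: fly upwind.
        return 'surge', wind_direction + math.pi
    # Plume lost: cast back and forth, perpendicular to the wind.
    side = 1.0 if int(t) % 2 == 0 else -1.0
    return 'cast', wind_direction + side * math.pi / 2
```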

The smart behavioral routines exhibited by animals may not be suitable for a robot, which has different sensing and actuation capabilities. Still, I would like robots to follow similarly smart, efficient behavioral strategies. To this end, I study the use of evolutionary robotics, in which robot controllers are not designed by the roboticist, but by an artificial evolutionary process. Some of my work is tuned to specific applications, e.g., how can a robot find an odor source? I also focus on improving the methodology of evolutionary robotics, especially on ways in which evolved controllers can be successfully employed on real robots (bridging the gap between simulation and the real world) and on increasing the complexity of tasks that can be tackled with evolutionary robotics.
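For readers unfamiliar with the approach, the sketch below shows the bare skeleton of such an artificial evolutionary process: a population of candidate controllers is evaluated (for instance in simulation), the best ones are selected, and mutated copies form the next generation. The encoding, fitness function, and parameter values are placeholders, not those used in the publications below.

```python
# Bare-bones evolutionary loop for controller parameters (illustrative only).
import random

def evolve(evaluate_fitness, genome_length=20, pop_size=30, generations=50,
           mutation_std=0.1):
    """evaluate_fitness(genome) -> float, e.g. obtained from a simulated flight."""
    population = [[random.uniform(-1, 1) for _ in range(genome_length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=evaluate_fitness, reverse=True)
        parents = scored[:pop_size // 4]            # truncation selection
        population = list(parents)
        while len(population) < pop_size:           # mutated offspring
            parent = random.choice(parents)
            population.append([g + random.gauss(0, mutation_std) for g in parent])
    return max(population, key=evaluate_fitness)
```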


Selected publications:
(accepted), Scheper, K.Y.W., Tijmons, S., de Visser, C.C., and de Croon, G.C.H.E., “Behaviour Trees for Evolutionary Robotics”, Artificial Life. (draft)
(2013), de Croon, G.C.H.E., O’Connor, L.M., Nicol, C., Izzo, D., “Evolutionary robotics approach to odor source localization”, in Neurocomputing, Volume 121, 9 December 2013, Pages 481–497 (draft – pdf) (Bibtex)