Coevolution of Control and Sensors

From Master Projects

About Coevolution of Control and Sensors

  • This project has not yet been fulfilled.
  • This project fits in the following master areas: Computational Intelligence and Selforganisation, Cognitive Science, Technical Artificial Intelligence


Imagine a collection of small, relatively simple, autonomous robots that collectively have to perform various complex tasks. To achieve their goals, the robots can move about individually, but more importantly, they can physically attach to each other to form and manipulate multi-robot organisms for tasks that an unconnected group of individual robots cannot cope with. Think, for instance, of scaling a wall or holding a relatively large object. One of the advantages of a swarm of simpler robots is the increased robustness compared to complex monolithic systems: if a single robot fails, the swarm can pretty much carry on regardless. Also, the robots can reconfigure the organism to suit particular tasks and circumstances, something that large and complex individual robots would find impossible.

The SYMBRION/REPLICATOR projects focus on developing techniques that allow the robots to learn: to adapt their controllers to various tasks and circumstances. The projects aim to develop robot organisms that consist of a multitude of concatenated cube-shaped 10x10x10 cm robots. These cubic robots, or modules, allow us to design or evolve body forms such as snakes, spiders, etc. With this diverse set of body types, the robots can solve different types of problems (e.g. “wriggling through something soft”, “stepping over a hard obstacle”, “crawling through a pipe”). Moreover, each robot module contains many sensors: four cameras, two microphones, and multiple infrared sensors. The overall goal of the project is for the robots to operate and adapt autonomously: they need to recharge themselves at power outlets, detect and approach each other visually or acoustically, and assemble into organisms that enable them to reach the available power outlets.
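To make the module/organism idea concrete, here is a minimal sketch of an organism as a set of attached cube modules, each carrying its own sensors. All names and the infrared-sensor count are hypothetical illustrations, not the actual SYMBRION/REPLICATOR data structures:

```python
from dataclasses import dataclass, field

@dataclass
class Module:
    """One 10x10x10 cm cube robot, placed on an integer grid (module units)."""
    position: tuple          # (x, y, z) grid coordinates
    cameras: int = 4         # each module carries four cameras
    microphones: int = 2     # and two microphones
    ir_sensors: int = 8      # hypothetical number of infrared sensors

@dataclass
class Organism:
    """A connected collection of modules forming one body shape."""
    modules: list = field(default_factory=list)

    def attach(self, module):
        self.modules.append(module)

    def sensor_count(self):
        return sum(m.cameras + m.microphones + m.ir_sensors
                   for m in self.modules)

# A three-module "snake" body laid out along the x axis:
snake = Organism()
for x in range(3):
    snake.attach(Module(position=(x, 0, 0)))
print(snake.sensor_count())  # 3 * (4 + 2 + 8) = 42
```

The same `Organism` could just as well be filled with modules arranged as a spider or a crawler; only the `position` values change, which is what makes body shape a searchable parameter.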

Task description

One challenge for the robots is to choose the right set of sensors and sensor resolutions for a given body form, so that they can swiftly detect obstacles and objects in the environment, such as power outlets or trapped victims, and move accordingly. The relevance of a sensor's input (e.g. from sensors directed at the ceiling) depends on its position on the robot organism. The simultaneous search for locomotion control and sensor layouts or resolutions for a given body shape (or even a co-evolving one) can be performed using evolutionary techniques. This has been done, for instance, using genetic regulatory networks (GRNs), reaction-diffusion controllers [1], neural networks [2] and HyperNEAT. Those approaches have mostly been limited to infrared (distance) sensors. Our case is concerned with more complex sensors, namely cameras. How can we co-evolve the layout of complex sensors and locomotion control?
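The indirect-encoding idea behind HyperNEAT-style approaches can be illustrated with a toy example: a single function of a module's spatial coordinates decides both that module's camera resolution and a controller parameter, so the sensor layout inherits the geometry of the body. The expression below is a hand-written stand-in for an evolved CPPN, not output of the real HyperNEAT algorithm:

```python
import math

def cppn(x, y, z):
    # Toy composition of pattern-producing functions (sine, Gaussian),
    # standing in for an evolved CPPN. Coordinates are module positions
    # normalised to roughly [-1, 1].
    h = math.sin(3.0 * x) + math.exp(-(y * y))   # hypothetical evolved expression
    resolution = max(0, min(3, round(2 + h)))    # discretise to a level 0 (off) .. 3 (full)
    weight = math.tanh(h + 0.5 * z)              # locomotion-controller weight for this module
    return resolution, weight

# Query the same network once per module of a three-module snake along x:
layout = [cppn(x - 1.0, 0.0, 0.0) for x in range(3)]
```

Because every module queries the same function, a compact genome scales to organisms of any size, and regularities (e.g. high-resolution cameras only at the front of the body) come for free from the geometry.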

Your task during this master thesis project will be to research methods for co-evolving the sensor layout/resolution and the locomotion controllers of a robot organism. Of particular interest are reaction-diffusion models and HyperNEAT, possibly extended to incorporate some measure of phenotypic diversity.
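The co-evolution setting can be sketched with a deliberately simple genetic algorithm: each genome carries both a discrete camera-resolution level per module and a vector of real-valued controller weights, and fitness trades locomotion quality against sensing cost. Everything here (population size, the fake fitness function standing in for a Robot3D rollout, the mutation rates) is a hypothetical illustration of the problem structure, not the method to be developed:

```python
import random

random.seed(0)

N_MODULES = 3        # modules in the organism (assumption for illustration)
N_WEIGHTS = 6        # controller parameters per organism (assumption)
POP_SIZE = 20
GENERATIONS = 30

def random_genome():
    # Co-evolved genome: a camera-resolution level per module (0 = off .. 3 = full)
    # plus real-valued locomotion-controller weights.
    return {
        "resolutions": [random.randint(0, 3) for _ in range(N_MODULES)],
        "weights": [random.uniform(-1, 1) for _ in range(N_WEIGHTS)],
    }

def fitness(genome):
    # Stand-in for a simulated rollout: a placeholder gait score, a penalty
    # for "blind" modules that would hit obstacles, and a processing cost
    # for high resolutions, so the EA must trade sensing against cost.
    gait = sum(genome["weights"])
    blind = sum(1.0 for r in genome["resolutions"] if r == 0)
    cost = 0.1 * sum(genome["resolutions"])
    return gait - blind - cost

def mutate(genome):
    child = {"resolutions": list(genome["resolutions"]),
             "weights": list(genome["weights"])}
    child["resolutions"][random.randrange(N_MODULES)] = random.randint(0, 3)
    child["weights"][random.randrange(N_WEIGHTS)] += random.gauss(0, 0.2)
    return child

# Truncation selection with elitism: keep the better half, refill by mutation.
population = [random_genome() for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]

best = max(population, key=fitness)
```

In the actual project, `fitness` would be replaced by an evaluation in the Robot3D simulator, and the direct genome by a reaction-diffusion or HyperNEAT encoding.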

The experiments will be conducted in simulation using the physically realistic 3D simulator Robot3D, with the scenario of the RoboCup Rescue league. This is an environment with slopes and debris of ever-increasing complexity, denoted by the color of the arena. The modular robots are roughly the same size as the obstacles they encounter, so they need to move as an organism to navigate through the arena.

The Robot3D simulator is based on the Delta3D engine. It is interfaced through YARP with an evolutionary framework developed at the VU. A graphical editor can be used to create a custom-made 3D environment, and the RoboCupRescue maps are included. Software from the RobotCub/iCub project will be interfaced with the Replicator software over the course of time.


This internship is a shared initiative of Almende B.V. and the Vrije Universiteit Amsterdam. Almende, founded in 2000, is a research company situated in Rotterdam that performs research in a diverse range of disciplines, building on principles of self-organization. This ranges from networks of people, represented by software agents that enable self-organization in communication solutions and on-the-fly logistics planning, to wireless sensor networks and robotics, the subject of this internship. Daughter companies of Almende apply the research in commercial products. The internship will be part of the European FP7 project Replicator. European partners such as the Universität Stuttgart, Universität Graz, Universität Karlsruhe and Scuola Superiore Sant'Anna develop the Replicator hardware and electronics. Sensors are built by Sheffield Hallam University, the Fraunhofer Gesellschaft, Institut Mikroelektronickych Aplikaci, Ubisense and Ceske Vysoke Uceni Technicke v Praze.

The Vrije Universiteit Amsterdam is one of the partners in the FP7 project, responsible for researching and developing evolutionary algorithms (EAs) for adaptive robot control. The research will be conducted under the auspices of the Computational Intelligence group.

Function requirements

We are looking for a student in the master Artificial Intelligence, Knowledge Engineering, Robotics, Neuroscience, Cognitive Psychology, Synthetic Biology, or Electrical Engineering. Affinity with artificial intelligence is considered more important than experience with robotics. The company has international employees and many international partners; fluent English is therefore essential. It is not required to speak Dutch. A plus is knowledge of:

  • Artificial intelligence
  • Programming in C/C++

For further information, see the SYMBRION and REPLICATOR project websites.