The brain is a complex machine that is incredibly efficient and flexible. Thanks to efficient learning processes, it tackles a wide diversity of tasks with great robustness. In contrast, state-of-the-art machine learning algorithms achieve impressive performance but are highly task-specialized. Consequently, neuroscience is potentially a great source of inspiration for designing more efficient artificial intelligence algorithms.
In particular, vision dominates the other senses in terms of the computational resources the brain devotes to it, so understanding visual processing has the potential to reveal the core computational mechanisms underlying the brain's capabilities. In my research, I am interested in extracting the fundamental principles at work in the visual system and applying them to develop better machine learning algorithms. As a consequence, I tend to adopt a cross-level analysis approach: I develop algorithms that simultaneously model low-level neural mechanisms and account for higher-level visual tasks such as object recognition, denoising, inpainting, and image generation.
The ultimate objective would be to develop a framework that successfully solves all these visual tasks without being extensively retrained from scratch for each of them. Such an algorithm would be a first step towards a long-standing goal of machine learning research, namely general artificial intelligence.
Robotics is a rapidly evolving technology that enables tasks to be performed quickly, at low risk and low cost, with a worldwide market projected to exceed 80 billion dollars over the next few years. In particular, aerial robots, also known as drones, make it possible to image and access all sorts of terrains and situations, and are useful for instance in surveillance and forensics, emergency industrial inspection, and search and rescue operations. A major obstacle to their widespread adoption is the difficulty of controlling their flight and interacting with them.
Indeed, aerial robots are generally operated from a (central) ground station, which is incompatible with the time pressure of emergency conditions, for instance when rescuing a person out of reach of the ground station. This PhD project aims at overcoming these obstacles by constructing an aerial robot that can be controlled autonomously and interactively through simple human gestures, for instance those of a rescuer. The main scientific challenges are (i) to embed in the aerial robot all the electronics of the visual system, from the retina to the control signals sent to the propellers, (ii) to recognize a variety of simple gestures very quickly on-board using a neuromimetic architecture, and (iii) to make the robot react in real time to these gestures. As such, this project is inter-disciplinary, combining advanced algorithms from event-based, bio-inspired computer vision with the latest technology in aerial robots.
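To make the spike-based processing of challenge (ii) concrete, the sketch below illustrates its basic building block: a single leaky integrate-and-fire unit driven by a stream of DVS-style address-events, the representation produced by event-based vision sensors. This is an illustrative toy model under simplified assumptions (unit charge per event, polarity ignored, hypothetical names), not part of the project's actual architecture:

```python
import math
from dataclasses import dataclass

@dataclass
class Event:
    """A DVS-style address-event: pixel coordinates, timestamp (s), polarity."""
    x: int
    y: int
    t: float
    p: int

def lif_response(events, tau=0.05, threshold=3.0):
    """Leaky integrate-and-fire unit driven by an event stream.

    The membrane potential decays exponentially with time constant `tau`
    and is incremented by each incoming event; when it crosses `threshold`,
    the unit emits a spike and resets. Returns the list of spike times.
    """
    v, last_t, spikes = 0.0, None, []
    for ev in sorted(events, key=lambda e: e.t):
        if last_t is not None:
            v *= math.exp(-(ev.t - last_t) / tau)  # exponential leak
        v += 1.0           # unit charge per event (polarity ignored here)
        last_t = ev.t
        if v >= threshold:
            spikes.append(ev.t)
            v = 0.0        # reset after spiking
    return spikes

# A dense burst of events drives the unit above threshold,
# while temporally sparse events leak away without causing spikes.
burst = [Event(0, 0, 0.001 * i, 1) for i in range(10)]
print(lif_response(burst))
```

Because such a unit is only active when events arrive, networks of these neurons process the sparse output of an event-based sensor with very low latency and power, which is what makes the approach attractive for on-board gesture recognition.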
The present PhD proposal is at the crossroads of various disciplines. It first concerns biology and neuroscience, because its event-based approach is strongly inspired by the neural networks observed in animals ranging from insects to primates and used for navigation, obstacle avoidance, and sensorimotor control. It also covers electronics, aerial robotics and signal processing, as the main project objective is to create a working spike-based electronic architecture able to recognize body movements and to use it to control the robot. Such an outcome will also be beneficial with respect to the SRI-S3 regional strategy, in particular regarding "risks, security and safety".
This project is a partnership between two doctoral schools based in Marseille: the EDSMH at ISM for the robotics part, and the EDSVS at INT for visual processing and spike-based processing methods. This partnership will provide the ESR with the best resources to achieve their goals. In particular, the ISM owns a brand-new flying arena (funded by the Robotex project, www.marseilles-flying-arena.eu) equipped with high-tech motion-capture tools (Vicon), and the INT has an entire technological platform dedicated to high-performance computing and measurement-tool prototyping.
Combining neuroscience and robotics to design novel electronic architectures is an innovative and valuable approach in robotics. The doctoral student selected for this project will acquire experience in bio-inspired hardware architectures, which will be valuable in their career, as current electronic architectures need to be adapted to, for instance, spike-based visual processing.
PhD in Computational Neuroscience, 2014