Grabbing, tracking and sniffing as models for motion detection and eye movements


Moving objects generate sensory information that may be noisy and ambiguous, yet it is important to reconstruct object speed as quickly as possible. One open question is how the brain pools motion information to produce an efficient response. I will present computational models of motion detection in the visual system that try to answer this question. First, sensory information is grabbed by pooling the information from different sensory neurons. This pooling is modulated by the precision of the information, and I will present some recent model-based behavioural results. Then I will focus on a novel model of motion-based prediction that allows objects to be tracked along smooth trajectories. This model gives an economical description of the neural mechanisms underlying motion detection. Finally, I will propose an exploratory hypothesis that eye movements may be understood as the prospective response to this dynamic sensory representation, given oculomotor constraints such as delays. This line of research aims to show that, through the convergent use of models, electrophysiology, and behavioural responses, the study of motion detection is an essential tool in our understanding of neural computations.
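To make the two computational ingredients above concrete, here is a minimal illustrative sketch, not the speaker's actual model: precision-weighted pooling of noisy local speed estimates (each report weighted by its inverse variance), and motion-based prediction stood in for by a constant-velocity Kalman filter whose state is extrapolated forward by an assumed oculomotor delay. All numbers (speeds, noise levels, the delay of 5 time steps) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
true_speed = 5.0  # deg/s, hypothetical stimulus speed

# --- Precision-weighted pooling of noisy local motion estimates ---
# Each "sensor" reports the speed corrupted by Gaussian noise; pooling
# weights each report by its precision (inverse variance).
sigmas = np.array([0.5, 1.0, 2.0, 4.0])           # assumed per-sensor noise
reports = true_speed + sigmas * rng.standard_normal(sigmas.size)
precisions = 1.0 / sigmas**2
pooled = np.sum(precisions * reports) / np.sum(precisions)

# --- Motion-based prediction along a smooth trajectory ---
# A constant-velocity Kalman filter: the velocity estimate carries
# information forward along the trajectory.
def kalman_track(positions, dt=1.0, q=1e-3, r=0.5):
    """Estimate [position, velocity] from noisy position samples."""
    x = np.array([positions[0], 0.0])              # state: [pos, vel]
    P = np.eye(2)                                  # state covariance
    F = np.array([[1.0, dt], [0.0, 1.0]])          # constant-velocity dynamics
    Q = q * np.eye(2)                              # process noise
    H = np.array([[1.0, 0.0]])                     # we observe position only
    estimates = []
    for z in positions:
        x = F @ x                                  # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + r                        # innovation variance
        K = P @ H.T / S                            # Kalman gain
        x = x + (K * (z - H @ x)).ravel()          # update with measurement
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)

# Noisy observations of an object moving at constant speed
t = np.arange(50)
obs = true_speed * t + rng.normal(0.0, 0.7, t.size)
est = kalman_track(obs)

# Delay compensation: extrapolate the current estimate forward by the
# sensorimotor delay, so the eye targets where the object *will* be.
delay = 5  # time steps, assumed oculomotor latency
predicted_pos = est[-1, 0] + est[-1, 1] * delay
```

The point of the sketch is the division of labour: pooling reduces noise at a single instant, while the predictive state (position plus velocity) lets the system anticipate across the delay rather than chase the stimulus.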

Jan 27, 2012 12:00 AM
Brain meeting at FIL, London - Friday, January 27th, 2012
Laurent U Perrinet
Researcher in Computational Neuroscience

My research interests include machine learning and computational neuroscience applied to vision.