Decoding low-level neural information to track visual motion

Abstract

Moving the eyes rapidly to track a visual object moving in a cluttered environment is an essential function. Doing so rapidly and efficiently, however, is constrained by the many noise sources in the visual system and by the fact that information is collected locally before giving rise to a global signal. After reviewing some results on the modeling of low-level sensory areas, I will present a method for decoding low-level neural information in which visual information is described by a probabilistic representation. Decisions then correspond to statistical inferences that dynamically resolve the veridical speed of a moving object. We will illustrate this method by showing how ambiguous local information can be merged to give rise to a global response that resolves the aperture problem. Using this theoretical approach "in computo", we will illustrate how we may better understand results observed "in vivo" (optical imaging) as a neural code actively linking sensation and behavior.
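To make the idea concrete, here is a minimal sketch (not the talk's actual model) of how such a probabilistic decoding can resolve the aperture problem: each local detector, seeing an edge through an aperture, only constrains the velocity component normal to that edge; multiplying these ambiguous likelihoods with a slow-speed prior yields a posterior whose peak recovers the global 2D velocity. All names and parameter values (`sigma_obs`, `sigma_prior`, grid size, edge orientations) are illustrative assumptions.

```python
# Illustrative sketch: Bayesian decoding of 2D velocity from ambiguous
# local (1D) motion measurements, resolving the aperture problem by
# probabilistic cue combination. Parameter values are arbitrary.
import numpy as np

# Grid of candidate velocities (vx, vy), in deg/s.
v = np.linspace(-5, 5, 201)
VX, VY = np.meshgrid(v, v)

def local_likelihood(theta, v_normal, sigma_obs=0.5):
    """Likelihood of (VX, VY) given one local measurement.

    A detector behind an aperture seeing an edge with normal direction
    theta only constrains the normal velocity component:
    it measures v_normal = vx*cos(theta) + vy*sin(theta), up to noise.
    """
    normal_component = VX * np.cos(theta) + VY * np.sin(theta)
    return np.exp(-(normal_component - v_normal) ** 2 / (2 * sigma_obs ** 2))

# True stimulus velocity: rightward translation at 2 deg/s.
v_true = np.array([2.0, 0.0])

# Two local measurements with different edge orientations: each alone
# gives a ridge of high likelihood (ambiguous); together they intersect.
thetas = [np.deg2rad(30), np.deg2rad(120)]
sigma_prior = 2.0
log_posterior = -(VX ** 2 + VY ** 2) / (2 * sigma_prior ** 2)  # slow-speed prior
for theta in thetas:
    v_normal = v_true @ np.array([np.cos(theta), np.sin(theta)])
    log_posterior += np.log(local_likelihood(theta, v_normal) + 1e-12)

# MAP estimate: the posterior peak lies near the veridical 2D velocity,
# slightly shrunk toward zero by the slow-speed prior (~ (1.9, 0.0)).
iy, ix = np.unravel_index(np.argmax(log_posterior), log_posterior.shape)
print(f"decoded velocity: ({VX[iy, ix]:.2f}, {VY[iy, ix]:.2f})")
```

Accumulating such log-likelihoods over time is one hedged reading of how the inference "dynamically resolves" the veridical speed: early on the slow-speed prior dominates, and the estimate converges to the intersection of constraints as local evidence comes in.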

Date
Apr 1, 2009 12:00 AM
Laurent U Perrinet
Researcher in Computational Neuroscience
