FACETS (2006/2010)

List of publications that were funded by the FACETS project (more info).

  • also available on the FACETS website



We held a CodeJam from 22 to 24 June 2010 in Marseille.



Short presentation of a large moving pattern elicits an ocular following response that exhibits many of the properties attributed to low-level motion processing, such as spatial and temporal integration, contrast gain control and divisive interaction between competing motions. Similar mechanisms have been demonstrated in V1 cortical activity in response to center-surround grating patterns measured with real-time optical imaging in awake monkeys (see the poster of Reynaud et al., VSS09). Based on a previously developed Bayesian framework, we have developed an optimal statistical decoder of such observed cortical population activity as recorded by optical imaging. This model aims at characterizing the statistical dependence between early neuronal activity and ocular responses, and its performance was analyzed by comparing this neuronal read-out and the actual motor responses on a trial-by-trial basis. First, we show that the relative performance of the behavioral contrast response function is similar to the best estimate obtained from the neural activity. In particular, we show that the latency of the ocular response increases under low-contrast conditions as well as for noisier instances of the behavioral task, as decoded by the model. Then, we investigate the temporal dynamics of both neuronal and motor responses and show how motion information, as represented by the model, is integrated in space to improve population decoding over time. Lastly, we explore how a surround velocity incongruous with the central excitation shunts the ocular response, and how it is topographically represented in the cortical activity. Acknowledgement: European integrated project FACETS IST-15879.
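The trial-by-trial read-out described above can be sketched, under strong simplifying assumptions, as maximum-likelihood decoding of a direction-tuned population with independent Poisson noise. The tuning shape, gain and noise model below are illustrative choices, not the actual optical-imaging decoder:

```python
import numpy as np

# Hypothetical sketch of a probabilistic read-out of a direction-tuned
# population. All tuning parameters are illustrative assumptions.
rng = np.random.default_rng(42)

directions = np.linspace(0, 2 * np.pi, 64, endpoint=False)  # preferred directions
kappa = 2.0   # tuning width (von Mises concentration), assumed
gain = 20.0   # peak firing rate, assumed

def tuning(theta):
    """Mean population response to a stimulus moving in direction theta."""
    return gain * np.exp(kappa * (np.cos(directions - theta) - 1.0))

def decode(r, candidates=directions):
    """Maximum-likelihood estimate under independent Poisson noise:
    log P(r | theta) = sum_i [ r_i log f_i(theta) - f_i(theta) ] + const."""
    log_like = np.array([np.sum(r * np.log(tuning(t)) - tuning(t))
                         for t in candidates])
    return candidates[np.argmax(log_like)]

theta_true = np.pi / 3
r = rng.poisson(tuning(theta_true))   # one noisy "trial" of population activity
theta_hat = decode(r)
```

Repeating the decode over many such simulated trials is what allows a trial-by-trial comparison between the neuronal estimate and the motor response.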

Moving the eyes rapidly to track a visual object moving in a cluttered environment is an essential function. However, doing so rapidly and efficiently is constrained by a number of noise sources in the visual system and by the fact that information is collected locally before giving rise to a global signal. After reviewing some results on the modeling of low-level sensory areas, I will present a method to decode low-level neural information by describing visual information with a probabilistic representation. Decisions then correspond to statistical inferences which dynamically resolve the veridical speed of a moving object. We will illustrate this method by showing how ambiguous local information can be merged to give rise to a global response which resolves the aperture problem. Using this theoretical approach “in computo”, we will illustrate how we may better understand results observed “in vivo” (optical imaging) as a neural code actively linking sensation and behavior.
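How merging ambiguous local cues can resolve the aperture problem can be sketched in a toy probabilistic model: each local edge only constrains the velocity component along its normal, and the product of these likelihoods with a weak slow-speed prior recovers the global 2D translation. The Gaussian forms and parameters below are assumptions for illustration:

```python
import numpy as np

# Velocity grid (vx, vy) over which the posterior is represented.
v = np.linspace(-3, 3, 121)
vx, vy = np.meshgrid(v, v)

def edge_likelihood(nx, ny, vn, sigma=0.3):
    """Likelihood of a local edge with unit normal (nx, ny) measuring
    normal speed vn: only the component along the normal is constrained."""
    return np.exp(-((vx * nx + vy * ny - vn) ** 2) / (2 * sigma ** 2))

# Two edges of a diamond translating rightward at speed 1: normals at
# +/-45 degrees each measure a normal speed of cos(45 deg).
c = np.cos(np.pi / 4)
post = edge_likelihood(c, c, c) * edge_likelihood(c, -c, c)
post *= np.exp(-(vx**2 + vy**2) / (2 * 2.0**2))   # weak slow-speed prior
post /= post.sum()

# The posterior mean recovers the true 2D translation, close to (1, 0).
v_est = np.array([np.sum(vx * post), np.sum(vy * post)])
```

Each likelihood alone is a ridge in velocity space (the aperture ambiguity); their product is what localizes the estimate.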

The machinery behind the visual perception of motion and the subsequent sensorimotor transformation, such as in the Ocular Following Response (OFR), is confronted with uncertainties which are efficiently resolved in the primate’s visual system. We may understand this response as an ideal observer in a probabilistic framework using Bayesian theory (Weiss et al., 2002), which we previously showed to be successfully adapted to model the OFR for different levels of noise with full-field gratings, with disks of various sizes and with the effect of a flickering surround (Perrinet and Masson, 2007). More recent OFR experiments have used disk gratings and bipartite stimuli which are optimized to study the dynamics of center-surround integration. We quantified two main characteristics of the global spatial integration of motion from an intermediate map of possible local translation velocities: (i) a finite optimal stimulus size for driving the OFR, surrounded by an antagonistic modulation, and (ii) a direction-selective suppressive effect of the surround on the contrast gain control of the central stimulus (Barthelemy et al., 2006, 2007). Herein, we extend the ideal observer model to the dynamical domain to simulate the spatial integration of the different local motion cues within a probabilistic representation. We present analytical results which show that the hypothesis of independence of local measures can describe the initial segment of spatial integration of the motion signal. Within this framework, we successfully account for the dynamical contrast gain control mechanisms observed in the behavioral data for center-surround stimuli. However, another inhibitory mechanism had to be added to account for the suppressive effects of the surround. We explore here a hypothesis where this could be understood as the effect of a recurrent integration of information in the velocity map.

References:

  • F. Barthelemy, L. U. Perrinet, E. Castet, and G. S. Masson. Dynamics of distributed 1D and 2D motion representations for short-latency ocular following. Vision Research, 48(4):501–22, Feb 2007. doi: 10.1016/j.visres.2007.10.020.
  • F. V. Barthelemy, I. Vanzetta, and G. S. Masson. Behavioral receptive field for ocular following in humans: Dynamics of spatial summation and center-surround interactions. Journal of Neurophysiology, (95):3712–26, Mar 2006. doi: 10.1152/jn.00112.2006.
  • L. U. Perrinet and G. S. Masson. Modeling spatial integration in the ocular following response using a probabilistic framework. Journal of Physiology (Paris), 2007. doi: 10.1016/j.jphysparis.2007.10.011.
  • Y. Weiss, E. P. Simoncelli, and E. H. Adelson. Motion illusions as optimal percepts. Nature Neuroscience, 5(6):598–604, Jun 2002. doi: 10.1038/nn858.
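The suppressive effect of the surround on contrast gain control can be illustrated, very schematically, by a divisive normalization (Naka-Rushton) form in which the surround contributes only to the normalization pool. This is a hypothetical parameterization for illustration, not the fitted model of the abstract:

```python
def ofr_gain(c_center, c_surround=0.0, r_max=1.0, c50=0.3, n=2.0, w=0.5):
    """Schematic contrast response of the OFR to a center grating of
    contrast c_center. The surround contrast enters only the divisive
    pool with weight w, producing suppression; all parameter values
    (r_max, c50, n, w) are illustrative assumptions, not fitted data."""
    drive = c_center ** n
    pool = c50 ** n + c_center ** n + w * c_surround ** n
    return r_max * drive / pool

# A suppressive surround scales down the contrast response function:
r_alone = ofr_gain(0.5)
r_surr = ofr_gain(0.5, c_surround=0.8)
```

In this form the surround does not drive the response itself; it only rescales the center's gain, which is one simple way to capture a suppressive (rather than summative) interaction.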

Computational Neuroscience is a synthetic, inter-disciplinary approach aiming at understanding cognition by analyzing the mechanisms underlying neural computations. We present in this seminar our attempt at modeling low-level vision by bridging different integration levels, from neural spiking activity to behavior. At the behavioral level, the Ocular Following Response recorded in the laboratory reveals how the brain may integrate local information (moving images on visual receptive fields) to produce a single behavioral response (the movement of the eye). Using a probabilistic representation, we provide a simple integrative mechanism that gives the “ideal” response to possibly noisy and ambiguous information, similarly to a Bayesian approach. This fits well the performance revealed by behavioral data and may act as a generic cortical “module”. At the population level, these mechanisms may indeed be implemented for the coding of natural images, and we will show the particular importance of spiking representations and lateral interactions for efficient and rapid responses. In particular, we will present an original unsupervised learning algorithm that we applied to a model of the primary visual cortex. Finally, at the neuronal level, I will present work done in the team showing how certain mechanisms at the level of the synapse and of the neuron are essential to population-level function, and how they may be understood at that scale. This illustrates the importance of dynamical processes, distributed activity and recurrent connections in producing a cortical gain control mechanism. As a conclusion, this approach provides useful applications for image processing and possible transfer to future computer architectures. More generally, it shows that the use of a probabilistic representation is a particularly efficient method for bridging biological and computational neuroscience, and illustrates the advantage of such an interdisciplinary approach.
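The kind of spiking representation with lateral interactions mentioned above can be sketched with a matching-pursuit scheme: at each step the most activated unit “fires”, and its contribution is subtracted from the drive of the other units. The dictionary here is random, not the one learned by the unsupervised algorithm described in the seminar; all names and parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 16))   # 16 dictionary atoms in a 64-d input space
D /= np.linalg.norm(D, axis=0)      # unit-norm atoms ("receptive fields")

def matching_pursuit(x, D, n_spikes=5):
    """Greedy sparse code: each iteration emits one 'spike' from the
    most activated unit and removes its contribution (a crude stand-in
    for lateral interactions)."""
    residual = x.copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_spikes):
        acts = D.T @ residual            # linear drive of each unit
        i = np.argmax(np.abs(acts))      # winner "fires" a spike
        coeffs[i] += acts[i]
        residual -= acts[i] * D[:, i]    # subtract its contribution
    return coeffs, residual

# A signal dominated by atom 2, plus noise; the sparse code recovers it.
x = 3.0 * D[:, 2] + 0.1 * rng.standard_normal(64)
coeffs, residual = matching_pursuit(x, D)
```

Each pass shrinks the residual, so the code gets sparser and more informative spike by spike, which is the intuition behind rapid, event-based population responses.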