Brains are not like computers. Our brains spot familiar objects, like keys in a messy room, quickly and with very little effort, whereas even the best computers struggle to match that speed and efficiency. This gap shows just how much more we need to learn about how our brains work in order to create smarter artificial intelligence.
To bridge the gap between neuroscience and Artificial Intelligence (AI), I seek to harness the efficiency of biological vision by understanding how neural computations govern sensory processes, such as vision, and behavioral responses, such as eye movements.
Laurent Perrinet is a computational neuroscientist specializing in large-scale neural network models of low-level vision, perception and action, currently at the Institut de Neurosciences de la Timone ([UMR 7289](https://www.wikidata.org/wiki/Q30261469)), a joint research unit of CNRS and Aix-Marseille Université in Marseille, France. He has co-authored more than 55 articles in computational neuroscience and computer vision. He graduated from the aeronautics engineering school SUPAERO in Toulouse, France, with a degree in signal processing and applied mathematics. He received a PhD in Cognitive Science in 2003 for the mathematical analysis of temporal spike coding of images using a multiscale, adaptive representation of natural scenes. His research program focuses on bridging the complex dynamics of realistic, large-scale models of spiking neurons with functional models of low-level vision. In particular, as part of the FACETS and BrainScaleS consortia, he has developed experimental protocols in collaboration with neurophysiologists to characterize the responses of populations of neurons. Recently, he extended models of visual processing within the framework of predictive processing, in collaboration with the team of Karl Friston at University College London. This framework characterizes the processing of dynamic flows of information as an active inference process. His current challenge within the NeOpTo team is to translate (or, in computer terminology, compile) this mathematical formalism into the event-based language of neural information, with the aim of pushing forward the frontiers of Artificial Intelligence systems.
Habilitation à diriger des recherches, 2014
Aix-Marseille Université
PhD. in Cognitive Science, 2003
Université P. Sabatier, Toulouse, France
M.S. in Engineering, 1998
SupAéro, Toulouse, France
Recently, interest has grown in the hypothesis that neural activity conveys information through precise spiking motifs. To investigate this phenomenon, several algorithms have been proposed to detect such motifs in single-unit activity recorded from populations of neurons. We present a novel detection model based on the inversion of a generative model of raster plot synthesis. This model derives an optimal detection procedure in the form of logistic regression combined with temporal convolution. Its differentiability allows for a supervised learning approach using gradient descent on the binary cross-entropy loss. We perform numerical evaluations to assess the model's ability to detect spiking motifs in synthetic data. This analysis emphasizes the benefits of utilizing spiking motifs over traditional firing-rate-based population codes. Our learning method successfully recovered synthetically generated spiking motifs, indicating its potential for further applications. In the future, we aim to extend this method to real neurobiological data, where the ground truth is unknown, to explore and detect spiking motifs in a more natural and biologically relevant context.
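As a minimal illustration of this idea, the following sketch (assuming PyTorch) frames motif detection as a temporal convolution over a binary raster followed by a logistic nonlinearity, trained by gradient descent on the binary cross-entropy loss. All shapes, names, and the toy data are illustrative assumptions, not the published implementation.

```python
import torch
import torch.nn as nn

n_neurons, motif_len, n_motifs = 64, 21, 10  # illustrative sizes

class MotifDetector(nn.Module):
    def __init__(self):
        super().__init__()
        # one temporal kernel per motif, spanning all neurons
        self.conv = nn.Conv1d(n_neurons, n_motifs, motif_len,
                              padding=motif_len // 2)

    def forward(self, raster):
        # raster: (batch, n_neurons, n_timebins) binary spike trains
        # logistic regression on the convolution output
        return torch.sigmoid(self.conv(raster))

model = MotifDetector()
loss_fn = nn.BCELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# toy synthetic raster and ground-truth motif occurrence labels
raster = (torch.rand(8, n_neurons, 500) < 0.05).float()
target = (torch.rand(8, n_motifs, 500) < 0.01).float()

for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model(raster), target)
    loss.backward()  # differentiability enables supervised learning
    opt.step()
```

In the actual setting, the targets would come from the generative model of raster synthesis rather than from random labels as in this toy loop.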
Timing is essential for neural processing, but evidence for such temporal precision is still lacking. We have developed a theoretical model of representation based on spatio-temporal spiking motifs. Our goal is to develop a self-supervised learning method for the optimal detection of such motifs in neurobiological data. To detect them, we have extended the K-Means algorithm to process temporal data using a convolutional operator. A second pooling layer ensures that only one motif is used per time step. Results were further improved by a homeostatic mechanism ensuring that the detected motifs are activated equiprobably. We applied this algorithm to the Spiking Heidelberg Digits database, which consists of the responses of a realistic cochlear model to spoken digits. Qualitatively, the learned filters show a structure similar to the receptive fields found in the auditory cortex. Based on these promising results on this realistic yet synthetic dataset, future work will aim to apply this algorithm to neurobiological data to test the hypothesis of a role for precise spike timing in neural processes.
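The sketch below (again assuming PyTorch) illustrates one possible convolutional K-Means update with winner-take-all pooling and a homeostatic gain, in the spirit of the description above; the update rules, learning rates, and variable names are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn.functional as F

n_channels, kernel_len, n_kernels = 700, 31, 16  # SHD has 700 channels
kernels = torch.randn(n_kernels, n_channels, kernel_len)
gain = torch.ones(n_kernels)        # homeostatic gain, one per kernel
lr, eta = 0.05, 0.01                # learning rates (illustrative)

def kmeans_step(spikes):
    # spikes: (n_channels, n_timebins) binary spike raster
    pad = kernel_len // 2
    x = F.pad(spikes, (pad, pad))
    # convolutional similarity between each kernel and the raster
    sim = F.conv1d(x[None], kernels).squeeze(0)   # (n_kernels, T)
    # pooling: a single winning motif per time step, biased by gain
    winner = (sim * gain[:, None]).argmax(0)      # (T,)
    patches = x.unfold(1, kernel_len, 1)          # (n_channels, T, K)
    for k in range(n_kernels):
        sel = winner == k
        if sel.any():
            # K-Means step: move kernel toward the mean of its patches
            kernels[k] += lr * (patches[:, sel].mean(1) - kernels[k])
        # homeostasis: push activation frequencies toward 1/n_kernels
        gain[k] += eta * (1.0 / n_kernels - sel.float().mean())

# toy usage on a random raster
kmeans_step((torch.rand(n_channels, 1000) < 0.02).float())
```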
Event cameras asynchronously report brightness changes with a temporal resolution on the order of microseconds, which makes them inherently suitable for problems involving rapid motion perception, such as ventral landing and fast obstacle avoidance. These problems are typically addressed by estimating a single global time-to-contact (TTC) measure, which explicitly assumes that the surface or obstacle is planar and fronto-parallel. We relax this assumption by proposing an incremental event-based method that estimates the TTC by jointly estimating the (up-to-scale) inverse depth and global motion using a single event camera. The proposed method is reliable and fast while asynchronously maintaining a TTC map (TTCM), which provides per-pixel TTC estimates. As a by-product, the method can also estimate per-event optical flow. We achieve state-of-the-art performance on TTC estimation in terms of accuracy and runtime per event, while achieving competitive performance on optical flow estimation.
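For intuition only, here is a classical per-pixel TTC approximation from an optical-flow field under pure forward motion, TTC(x) ≈ |x - FOE| / (radial flow at x), where FOE is the focus of expansion; this is a textbook relation, not the incremental joint-estimation method proposed above.

```python
import numpy as np

def ttc_map(flow, foe):
    """Per-pixel time-to-contact from optical flow (toy approximation).

    flow: (H, W, 2) flow field in pixels/s; foe: (2,) focus of expansion.
    """
    H, W = flow.shape[:2]
    ys, xs = np.mgrid[0:H, 0:W]
    r = np.stack([xs - foe[0], ys - foe[1]], axis=-1)  # radial offsets
    r_norm = np.linalg.norm(r, axis=-1)
    # expansion component: projection of the flow on the radial direction
    radial_flow = (flow * r).sum(-1) / np.maximum(r_norm, 1e-6)
    return r_norm / np.maximum(radial_flow, 1e-6)      # seconds

# toy usage: a synthetic expanding flow field centered on the FOE,
# which yields a constant TTC of 2 s away from the FOE
H, W = 8, 8
ys, xs = np.mgrid[0:H, 0:W]
flow = 0.5 * np.stack([xs - W / 2, ys - H / 2], axis=-1)
print(ttc_map(flow, foe=np.array([W / 2, H / 2])))
```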
How to reach me