Hugo Ladret focuses on predictive coding, an influential brain theory that promises to account for the many seemingly disparate results neuroscientists have gathered over decades of experiments. Using neurobiology within a theory-driven approach, his experimental work centres on vision and seeks theoretical insights for neural network modelling.
Building upon our previous work, we are investigating how recurrent neural networks learn to integrate temporal information, a dimension which is absent in most deep learning networks but provides a wealth of information in biological neural networks.
To be able to generalize our findings, I created a model of the early visual pathway (retina and thalamus) that generates neural activity from any natural image, based on data gathered from biological systems over the past several decades. The output from this early visual pathway is then processed by a recurrent spiking neural network whose dynamics match those of the primary visual cortex.
We showed that Spike-Timing-Dependent Plasticity (STDP) and recurrence are key components that allow spiking neural networks to extract patterns from noisy input and build strong internal representations. Such representations not only correctly predict spatial information (for example, the organization of a visual scene) but also the temporal structure underlying that information.
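As a minimal illustration of the learning rule involved, here is the classic pair-based STDP window: a pre-before-post spike pair strengthens a synapse, a post-before-pre pair weakens it. The function name and parameter values below are illustrative, not the fitted constants used in the actual model.

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for a spike-time difference.

    dt = t_post - t_pre (ms). Positive dt (pre fires before post)
    yields potentiation, negative dt yields depression, with
    exponentially decaying magnitude. All parameters are illustrative.
    """
    dt = np.asarray(dt, dtype=float)
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau_plus),    # potentiation branch
                    -a_minus * np.exp(dt / tau_minus))  # depression branch

dw_pot = stdp_dw(5.0)    # pre leads post by 5 ms -> positive weight change
dw_dep = stdp_dw(-5.0)   # post leads pre by 5 ms -> negative weight change
```

Because the weight update depends on relative spike timing, recurrent networks trained with this rule can become sensitive to temporal structure in their input, which is the property exploited above.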
I developed computational neuroscience and computational physics models, in collaboration with well-known contemporary artist Etienne Rey at Friche la Belle de Mai (Marseille) and AI researcher Laurent Perrinet. The idea behind our project was to create works of art by distributing particles in a constrained, semi-stable space, thereby creating discrete illusory perceptions.
To dive into more technical details, my work included the implementation of a lattice Boltzmann model for computational fluid dynamics (D2Q9 structure), as well as various electromagnetic interaction models. On the neuroscience side, I used Deep Convolutional Generative Adversarial Networks (DCGAN), Kohonen maps and Canny edge detectors to generate triangulated graphs with a hidden underlying structure. To facilitate collaboration between the three of us, I also developed a GUI and multi-threading support that allowed us to work efficiently and make the best use of our respective skill sets.
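The D2Q9 scheme mentioned above discretizes fluid flow onto nine lattice velocities per cell. The following sketch shows only the standard lattice definition and BGK equilibrium distribution, not the project's full solver; variable names are my own.

```python
import numpy as np

# D2Q9 lattice: rest velocity, 4 axis-aligned, 4 diagonal directions.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
# Standard D2Q9 quadrature weights for each direction.
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    """BGK equilibrium populations f_i^eq for density rho and 2-D velocity u."""
    cu = c @ u                 # c_i . u for each of the 9 directions
    usq = u @ u                # |u|^2
    return rho * w * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

# Sanity check: for a fluid at rest, the equilibrium reduces to the
# lattice weights, and the populations sum to the density.
feq = equilibrium(1.0, np.array([0.0, 0.0]))
assert np.allclose(feq, w) and np.isclose(feq.sum(), 1.0)
```

A full simulation step would relax the populations toward this equilibrium (collision) and then propagate them along their lattice directions (streaming).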
I created a ring model that performs orientation discrimination tasks, using a hybrid model of convolutional and recurrent networks. This work was, to our knowledge, the first visual ring model based on deep learning techniques.
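For readers unfamiliar with ring models, the core idea can be sketched as a population of orientation-tuned units with cosine-shaped recurrent connectivity: units preferring similar orientations excite each other, while broad inhibition spans the ring. The coupling constants below are illustrative placeholders, not the trained weights of the hybrid network.

```python
import numpy as np

def ring_weights(n=64, j0=-0.5, j1=1.0):
    """Recurrent weight matrix of a classic ring model.

    n units with preferred orientations tiling [0, pi); connectivity
    depends only on the difference in preferred orientation:
    j0 sets uniform inhibition, j1 the orientation-tuned excitation.
    Values of j0 and j1 are illustrative.
    """
    theta = np.linspace(0, np.pi, n, endpoint=False)
    dtheta = theta[:, None] - theta[None, :]
    return (j0 + j1 * np.cos(2 * dtheta)) / n

W = ring_weights()
# Connectivity is symmetric, and each unit's strongest input comes
# from units with the same preferred orientation.
assert np.allclose(W, W.T) and np.argmax(W[0]) == 0
```

In the deep-learning version described above, this hand-crafted connectivity is replaced by learned recurrent weights, but the ring topology over orientation space plays the same functional role.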
The recurrence in the network plays a role akin to that of lateral interactions within the primary visual cortex. We have shown in this work that these lateral interactions provide robustness to noisy inputs in the model, which we infer to also be the case in the brain. To verify this claim, I designed a two-alternative forced-choice (2AFC) discrimination psychophysics task and compared various metrics across human and model trials. The results showed that the lateral interactions allowed human-like performance, a strong qualitative argument for the biological plausibility of this model.
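In a 2AFC task, performance is summarized by a psychometric function running from chance (50%) at small stimulus differences to near-perfect accuracy at large ones; comparing human and model curves is the kind of metric used above. The logistic form and parameter values below are a generic sketch, not the analysis pipeline of the actual study.

```python
import numpy as np

def psychometric(theta, threshold=5.0, slope=1.0, lapse=0.02):
    """Probability of a correct 2AFC response as a function of the
    orientation offset theta (degrees) between the two alternatives.

    Logistic sigmoid rescaled to run from chance (0.5) to 1 - lapse;
    threshold, slope and lapse rate are illustrative values.
    """
    p = 1.0 / (1.0 + np.exp(-slope * (np.asarray(theta, float) - threshold)))
    return 0.5 + (0.5 - lapse) * p

# Small offsets are near chance, large offsets near ceiling.
p_easy = psychometric(30.0)   # well above threshold
p_hard = psychometric(-20.0)  # far below threshold, ~0.5
```

Fitting this curve separately to human and model trial data (e.g. via maximum likelihood) yields thresholds and slopes that can be compared directly between the two observers.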
PhD candidate in Computational Neuroscience, 2023
Master's in Neuroscience, 2019